~ibmcharmers/xenial/ibm-cinder-ds8k

Owner: achittet
Status: Needs Review
Vote: +0 (+2 needed for approval)

CPP?: No
OIL?: No

Charm for IBM Cinder DS8K Driver


Tests

Substrate   Status   Results   Last Updated
lxc         RETRY              19 days ago
aws         RETRY              19 days ago
gce         RETRY              19 days ago


Policy Checklist

Description Unreviewed Pass Fail

General

Must verify that any software installed or utilized is verified as coming from the intended source.
  • Any software installed from the Ubuntu or CentOS default archives satisfies this due to the apt and yum sources including cryptographic signing information.
  • Third party repositories must be listed as a configuration option that can be overridden by the user and not hard coded in the charm itself.
  • Launchpad PPAs are acceptable as the add-apt-repository command retrieves the keys securely.
  • Other third party repositories are acceptable if the signing key is embedded in the charm.
Must provide a means to protect users from known security vulnerabilities in a way consistent with best practices as defined by either operating system policies or upstream documentation.
Basically, this means there must be instructions on how to apply updates if you use software not from distribution channels.
Must have hooks that are idempotent.
Should be built using charm layers.
Should use Juju Resources to deliver required payloads.

Testing and Quality

charm proof must pass without errors or warnings.
Must include passing unit, functional, or integration tests.
Tests must exercise all relations.
Tests must exercise config.
set-config, unset-config, and re-set must be tested as a minimum
Must not use anything infrastructure-provider specific (i.e. querying EC2 metadata service).
Must be self contained unless the charm is a proxy for an existing cloud service, e.g. ec2-elb charm.
Must not use symlinks.
Bundles must only use promulgated charms; they cannot reference charms in personal namespaces.
Must call Juju hook tools (relation-*, unit-*, config-*, etc) without a hard coded path.
Should include a tests.yaml for all integration tests.

Metadata

Must include a full description of what the software does.
Must include a maintainer email address for a team or individual who will be responsive to contact.
Must include a license. Call the file 'copyright' and make sure all files' licenses are specified clearly.
Must be under a Free license.
Must have a well documented and valid README.md.
Must describe the service.
Must describe how it interacts with other services, if applicable.
Must document the interfaces.
Must show how to deploy the charm.
Must define external dependencies, if applicable.
Should link to a recommended production usage bundle and recommended configuration if this differs from the default.
Should reference and link to upstream documentation and best practices.

Security

Must not run any network services using default passwords.
Must verify and validate any external payload
  • Known and understood packaging systems that verify packages like apt, pip, and yum are ok.
  • wget | sh style is not ok.
Should make use of whatever Mandatory Access Control system is provided by the distribution.
Should avoid running services as root.


Source Diff

Files changed 67

Inline diff comments 0

No comments yet.


Makefile

 1
--- 
 2
+++ Makefile
 3
@@ -0,0 +1,24 @@
 4
+#!/usr/bin/make
 5
+
 6
+all: lint unit_test
 7
+
 8
+
 9
+.PHONY: clean
10
+clean:
11
+	@rm -rf .tox
12
+
13
+.PHONY: apt_prereqs
14
+apt_prereqs:
15
+	@# Need tox, but don't install the apt version unless we have to (don't want to conflict with pip)
16
+	@which tox >/dev/null || (sudo apt-get install -y python-pip && sudo pip install tox)
17
+
18
+.PHONY: lint
19
+lint: apt_prereqs
20
+	@tox --notest
21
+	@PATH=.tox/py34/bin:.tox/py35/bin flake8 $(wildcard hooks reactive lib unit_tests tests)
22
+	@charm proof
23
+
24
+.PHONY: unit_test
25
+unit_test: apt_prereqs
26
+	@echo Starting tests...
27
+	tox
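
The lint and unit_test targets above assume a tox.ini at the root of the charm that defines the py34/py35 environments and installs flake8 into them; that file is not included in this diff. A minimal sketch of what such a tox.ini could look like, shown only to make the Makefile's assumptions explicit (the deps and test runner are assumptions):

    [tox]
    envlist = py34,py35
    skipsdist = True

    [testenv]
    # Hypothetical: install the tools the Makefile expects to find in
    # .tox/pyXX/bin (flake8) plus a unit test runner, then run the tests.
    deps =
        flake8
        nose
        charms.reactive
    commands = nosetests -v unit_tests

    [testenv:py34]
    basepython = python3.4

    [testenv:py35]
    basepython = python3.5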

README.md

 1
--- 
 2
+++ README.md
 3
@@ -0,0 +1,83 @@
 4
+# DS8K Storage Backend for Cinder
 5
+
 6
+
 7
+## Overview
 8
+
 9
+This charm provides DS8K storage support on S390x for use with a Cinder charm deployment, allowing multiple DS8K storage clusters to be associated with a single Cinder deployment, potentially alongside storage backends from other vendors.
10
+
11
+
12
+## Usage
13
+See the Troubleshooting section below before running the deployment to make sure that your system requirements are met.
14
+
15
+To set up the backend:
16
+
17
+    juju deploy cinder
18
+    juju deploy ibm-cinder-ds8k --resource ibm_cinder_ds8k_installer=</path/to/installer.tar.zip>
19
+    juju add-relation cinder ibm-cinder-ds8k
20
+
21
+After the `ibm-cinder-ds8k` charm has deployed successfully, run the following command to verify the Cinder DS8K storage backend. Its output lists all of the storage backends that have been configured.
22
+
23
+    cinder service-list
24
+
25
+
26
+#### Troubleshooting
27
+
28
+* **Deploying the charm to the right machine**
29
+Note that `ibm-cinder-ds8k` is a subordinate of the base Cinder charm, so it runs on the machine where Cinder is deployed. After identifying the machine that is connected to the DS8K backend, deploy Cinder to it with:
30
+    
31
+    
32
+    juju deploy cinder --to <machine_index>
33
+
34
+* **Installing the correct driver**
35
+The driver software must be the exact version noted in the charm resource configuration, namely `1.7.0.1-b985`.
36
+
37
+* **Running OpenStack commands**
38
+To run commands such as `cinder service-list`, you need to authenticate as an administrator. If you set up the OpenStack cloud with [OpenStack on LXD](https://docs.openstack.org/developer/charm-guide/openstack-on-lxd.html), you can source its admin configuration with:
39
+
40
+
41
+    source openstack-on-lxd/novarc
42
+
43
+
44
+## Installation
45
+
46
+To use the charm's [resource](https://api.jujucharms.com/charmstore/v5/~ibmcharmers/xenial/ibm-cinder-ds8k-10/resource/ibm_cinder_ds8k_installers/8), download a licensed IBM DS8K installer package from [this](http://www-01.ibm.com/support/docview.wss?uid=swg27025142) link.
47
+When deploying from the Charm Store, the terms and conditions will be presented to you for your consideration. To accept the terms, enter:
48
+
49
+    juju agree ibmcharmers/ibm-cinder-ds8k/1
50
+
51
+
52
+## Configuration
53
+
54
+Once the charm is deployed, a number of config parameters must be passed to the driver so that it can access the DS8K management interface.
55
+They can be set with commands such as the following:
56
+
57
+    juju config ibm-cinder-ds8k san-ip="ip address"
58
+    juju config ibm-cinder-ds8k san-login="login_name"
59
+    juju config ibm-cinder-ds8k san-password="login_password"
60
+    juju config ibm-cinder-ds8k volume-driver="drivername"
61
+    juju config ibm-cinder-ds8k volume-backend-name="backend-name"
62
+    juju config ibm-cinder-ds8k ds8k_storage_unit="storage_unit"
63
+    juju config ibm-cinder-ds8k san_clustername="clustername"
64
+    juju config ibm-cinder-ds8k xiv_chap="enabled/disabled"
65
+    juju config ibm-cinder-ds8k xiv_ds8k_connection_type="ds8k connection type"
66
+    juju config ibm-cinder-ds8k management_ips="ips"
67
+    juju config ibm-cinder-ds8k ds8k_java_path="java_path"
68
+    juju config ibm-cinder-ds8k host_profile="host_profile"
69
+    juju config ibm-cinder-ds8k ds8k_jar_lib_path="jar_lib_path"
70
+
71
+## Uninstalling
72
+
73
+To remove the relation and the applications, run:
74
+
75
+    juju remove-relation cinder ibm-cinder-ds8k
76
+    juju remove-application ibm-cinder-ds8k
77
+    juju remove-application cinder
78
+
79
+
80
+## Known Limitations
81
+
82
+This charm makes use of Juju features that are only available in version 2.0 or greater.
83
+
84
+## Contact Information
85
+
86
+For issues with this charm, please contact the IBM Juju Support Team.
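
Taking the README's steps together, an end-to-end deployment against an existing OpenStack model looks roughly like the sketch below; the machine index, installer path and credential values are placeholders rather than values taken from this charm:

    # Deploy Cinder to the machine that can reach the DS8K backend.
    juju deploy cinder --to 1

    # Deploy the subordinate driver charm with the licensed installer attached.
    juju deploy ibm-cinder-ds8k --resource ibm_cinder_ds8k_installer=./installer.tar.zip

    # Accept the license terms and relate the driver to Cinder.
    juju agree ibmcharmers/ibm-cinder-ds8k/1
    juju add-relation cinder ibm-cinder-ds8k

    # Point the driver at the DS8K management interface.
    juju config ibm-cinder-ds8k san-ip="192.0.2.10" san-login="admin" san-password="secret"

    # Verify that the new backend is registered.
    cinder service-list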

bin/layer_option

 1
--- 
 2
+++ bin/layer_option
 3
@@ -0,0 +1,24 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+import sys
 7
+sys.path.append('lib')
 8
+
 9
+import argparse
10
+from charms.layer import options
11
+
12
+
13
+parser = argparse.ArgumentParser(description='Access layer options.')
14
+parser.add_argument('section',
15
+                    help='the section, or layer, the option is from')
16
+parser.add_argument('option',
17
+                    help='the option to access')
18
+
19
+args = parser.parse_args()
20
+value = options(args.section).get(args.option, '')
21
+if isinstance(value, bool):
22
+    sys.exit(0 if value else 1)
23
+elif isinstance(value, list):
24
+    for val in value:
25
+        print(val)
26
+else:
27
+    print(value)
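
bin/layer_option is the helper shipped by layer:basic that lets shell tooling read values from the built charm's layer options without writing Python. A usage sketch, assuming the charm includes layer:basic and therefore its use_venv option:

    # Print the value of the 'use_venv' option defined by the 'basic' layer.
    ./bin/layer_option basic use_venv

    # Boolean options are reported via the exit code rather than stdout,
    # so they can be used directly in shell conditionals.
    if ./bin/layer_option basic use_venv; then
        echo "layer-basic is configured to use a virtualenv"
    fi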

config.yaml

 1
--- 
 2
+++ config.yaml
 3
@@ -0,0 +1,66 @@
 4
+"options":
 5
+  "volume-driver":
 6
+    "type": "string"
 7
+    "default": "cinder.volume.drivers.ibm.xiv_ds8k.XIVDS8KDriver"
 8
+    "description": |
 9
+      The name of the volume driver class.
10
+  "volume-backend-name":
11
+    "type": "string"
12
+    "default": ""
13
+    "description": |
14
+      The name of the volume backend.
15
+  "san-ip":
16
+    "type": "string"
17
+    "default": ""
18
+    "description": |
19
+      The IP address of the DS8K system.
20
+  "san-login":
21
+    "type": "string"
22
+    "default": ""
23
+    "description": |
24
+      The user ID used to log in to the DS8K.
25
+  "san-password":
26
+    "type": "string"
27
+    "default": ""
28
+    "description": |
29
+      The password used to log in to the DS8K.
30
+  "ds8k_storage_unit":
31
+    "type": "string"
32
+    "default": ""
33
+    "description": |
34
+      Default pool name for volumes.
35
+  "san_clustername":
36
+    "type": "string"
37
+    "default": ""
38
+    "description": |
39
+      The name of the storage pool.
40
+  "xiv_chap":
41
+    "type": "string"
42
+    "default": ""
43
+    "description": |
44
+      Whether CHAP authentication is enabled (enabled/disabled).
45
+  "xiv_ds8k_connection_type":
46
+    "type": "string"
47
+    "default": ""
48
+    "description": |
49
+      The DS8K connection type.
50
+  "management_ips":
51
+    "type": "string"
52
+    "default": ""
53
+    "description": |
54
+      The DS8K management IP addresses.
55
+  "ds8k_java_path":
56
+    "type": "string"
57
+    "default": ""
58
+    "description": |
59
+      Path to the Java executable used by the DS8K driver.
60
+  "host_profile":
61
+    "type": "string"
62
+    "default": ""
63
+    "description": |
64
+      The DS8K host profile.
65
+  "ds8k_jar_lib_path":
66
+    "type": "string"
67
+    "default": ""
68
+    "description": |
69
+      Path to the DS8K JAR library.
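
Instead of issuing one juju config call per option as the README does, the options defined above can also be supplied together in a YAML file at deploy time. A minimal sketch with placeholder values, assuming a file named ds8k-config.yaml:

    # ds8k-config.yaml -- placeholder values, adjust for your environment
    ibm-cinder-ds8k:
      san-ip: "192.0.2.10"
      san-login: "admin"
      san-password: "secret"
      volume-backend-name: "ds8k-backend"
      ds8k_storage_unit: "ds8k_pool"

The file is then passed at deploy time with juju deploy ibm-cinder-ds8k --config ds8k-config.yaml, alongside the --resource flag shown in the README.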

copyright

 1
--- 
 2
+++ copyright
 3
@@ -0,0 +1,16 @@
 4
+Format: http://dep.debian.net/deps/dep5/
 5
+
 6
+Files: *
 7
+Copyright: Copyright 2015-2017, Canonical Ltd., All Rights Reserved.
 8
+License: Apache License 2.0
 9
+ Licensed under the Apache License, Version 2.0 (the "License");
10
+ you may not use this file except in compliance with the License.
11
+ You may obtain a copy of the License at
12
+ .
13
+     http://www.apache.org/licenses/LICENSE-2.0
14
+ .
15
+ Unless required by applicable law or agreed to in writing, software
16
+ distributed under the License is distributed on an "AS IS" BASIS,
17
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18
+ See the License for the specific language governing permissions and
19
+ limitations under the License.

hooks/cinder_contexts.py

 1
--- 
 2
+++ hooks/cinder_contexts.py
 3
@@ -0,0 +1,95 @@
 4
+# Copyright 2016 Canonical Ltd
 5
+#
 6
+# Licensed under the Apache License, Version 2.0 (the "License");
 7
+# you may not use this file except in compliance with the License.
 8
+# You may obtain a copy of the License at
 9
+#
10
+#  http://www.apache.org/licenses/LICENSE-2.0
11
+#
12
+# Unless required by applicable law or agreed to in writing, software
13
+# distributed under the License is distributed on an "AS IS" BASIS,
14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+# See the License for the specific language governing permissions and
16
+# limitations under the License.
17
+
18
+from subprocess import call
19
+
20
+from charmhelpers.core.hookenv import (
21
+    service_name,
22
+)
23
+
24
+
25
+call("pip install " + 'netifaces', shell=True)
26
+call("pip install " + 'psutil', shell=True)
27
+
28
+try:
29
+    from charmhelpers.contrib.openstack.context import (
30
+        OSContextGenerator,
31
+    )
32
+except:
33
+    raise
34
+
35
+
36
+class ds8kSubordinateContext(OSContextGenerator):
37
+    interfaces = ['ds8k-cinder']
38
+
39
+    def __call__(self, data=None):
40
+        """
41
+        Used to generate template context to be added to cinder.conf in the
42
+        presence of a ds8k relation.
43
+        """
44
+        service = service_name()
45
+        print("Each values printing")
46
+        print(data["volume-driver"])
47
+        print(data["volume-backend-name"])
48
+        print(data["san-ip"])
49
+        print(data["san-login"])
50
+        print(data["san-password"])
51
+        print(data["ds8k_storage_unit"])
52
+        print(data["san_clustername"])
53
+        print(data["xiv_chap"])
54
+        print(data["xiv_ds8k_connection_type"])
55
+        print(data["management_ips"])
56
+        print(data["ds8k_java_path"])
57
+        print(data["host_profile"])
58
+        print(data["ds8k_jar_lib_path"])
59
+
60
+        volume_driver = data["volume-driver"]
61
+        volume_backend_name = data["volume-backend-name"]
62
+        san_ip = data["san-ip"]
63
+        san_login = data["san-login"]
64
+        san_password = data["san-password"]
65
+        ds8k_storage_unit = data["ds8k_storage_unit"]
66
+        san_clustername = data["san_clustername"]
67
+        xiv_chap = data["xiv_chap"]
68
+        xiv_ds8k_connection_type = data["xiv_ds8k_connection_type"]
69
+        management_ips = data["management_ips"]
70
+        ds8k_java_path = data["ds8k_java_path"]
71
+        host_profile = data["host_profile"]
72
+        ds8k_jar_lib_path = data["ds8k_jar_lib_path"]
73
+
74
+        values_to_add = [('volume_driver', volume_driver),
75
+                         ('volume_backend_name', volume_backend_name),
76
+                         ('san_ip', san_ip),
77
+                         ('san_login', san_login),
78
+                         ('san_password', san_password),
79
+                         ('ds8k_storage_unit', ds8k_storage_unit),
80
+                         ('san_clustername', san_clustername),
81
+                         ('xiv_chap', xiv_chap),
82
+                         ('xiv_ds8k_connection_type',
83
+                          xiv_ds8k_connection_type),
84
+                         ('management_ips', management_ips),
85
+                         ('ds8k_java_path', ds8k_java_path),
86
+                         ('host_profile', host_profile),
87
+                         ('ds8k_jar_lib_path', ds8k_jar_lib_path),
88
+                         ]
89
+
90
+        return {
91
+            "cinder": {
92
+                "/etc/cinder/cinder.conf": {
93
+                   "sections": {
94
+                        service: values_to_add
95
+                    }
96
+                }
97
+            }
98
+        }
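
For context on how the dictionary returned by ds8kSubordinateContext reaches Cinder: the principal cinder charm merges a JSON-encoded subordinate configuration received over the storage-backend relation into /etc/cinder/cinder.conf. The sketch below shows one way a hook could publish that data; the relation keys backend_name and subordinate_configuration follow the convention used by other Cinder backend subordinates and are assumptions here, since the publishing code is not part of this file:

    import json

    from charmhelpers.core import hookenv

    from cinder_contexts import ds8kSubordinateContext


    def publish_backend_config(relation_id=None):
        """Send the rendered backend stanza to the principal cinder charm.

        Assumption: cinder reads a JSON-encoded 'subordinate_configuration'
        value from the storage-backend relation and merges its sections
        into /etc/cinder/cinder.conf.
        """
        ctxt = ds8kSubordinateContext()(data=hookenv.config())
        hookenv.relation_set(
            relation_id=relation_id,
            backend_name=hookenv.service_name(),
            subordinate_configuration=json.dumps(ctxt))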

hooks/config-changed

 1
--- 
 2
+++ hooks/config-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()

hooks/hook.template

 1
--- 
 2
+++ hooks/hook.template
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()

hooks/install

 1
--- 
 2
+++ hooks/install
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()

hooks/leader-elected

 1
--- 
 2
+++ hooks/leader-elected
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()

hooks/leader-settings-changed

 1
--- 
 2
+++ hooks/leader-settings-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()

hooks/relations/cinder-backend/README.md

 1
--- 
 2
+++ hooks/relations/cinder-backend/README.md
 3
@@ -0,0 +1,39 @@
 4
+
 5
+Overview
 6
+-----------
 7
+
 8
+This interface layer handles the communication between `cinder` and other cinder-driver charms, such as `ibm-cinder-flashsystem`, `ibm-cinder-ds8k`, `ibm-cinder-xiv` and `ibm-cinder-storwize-svc`.
 9
+The providing charms are the driver charms and the consuming charm is `cinder`.
10
+The provider end of this interface sends driver login information, such as san_ip, san_login and san_password, along with other driver-specific details, to `cinder`.
11
+Once the `cinder` charm receives the driver-specific information, it updates its configuration and restarts the Cinder services.
12
+
13
+
14
+Usage
15
+------
16
+
17
+#### Provides
18
+Cinder driver charms provide this interface. The interface layer will set the following states, as appropriate:
19
+
20
+ - `{relation_name}.available`: The relation is established and the driver charm is ready to send its information.
21
+
22
+ - `{relation_name}.departing`: The relation has been removed. Any cleanup related to the consumer charm (cinder) should happen now, since the consumer is going away.
23
+
24
+
25
+#### Requires
26
+
27
+The consumer charm `cinder` requires this interface to connect to driver charms so that it can obtain the information needed to update its configuration file. The interface layer will set the following states, as appropriate:
28
+
29
+- `{relation_name}.available`: The consumer charm has been related to a provider (driver) charm. At this point, the charm waits for the provider charm to send configuration details such as san_ip, san_password and san_login.
30
+
31
+- `{relation_name}.changing`: This state is set when the relation data between `cinder` and the driver charm changes.
32
+
33
+- `{relation_name}.departing`: The relation has been removed. Any cleanup related to the provider charm should happen now, since the relation is broken.
34
+
35
+Known Limitations
36
+-----------------
37
+This charm makes use of Juju features that are only available in version 2.0 or greater.
38
+
39
+Contact Information
40
+-------------------
41
+For issues with this charm, please contact the IBM Juju Support Team.
42
+
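
To make the states above concrete, this is roughly how a driver charm's reactive layer could react to an endpoint named backend that uses this interface. The handler and the send_configuration() helper are illustrative and not part of this interface layer:

    from charms.reactive import when, when_not, set_state, remove_state
    from charmhelpers.core import hookenv


    @when('backend.available')
    @when_not('ds8k.configured')
    def send_driver_config(backend):
        """Relation established: hand our driver settings to cinder."""
        hookenv.status_set('maintenance', 'sending DS8K backend configuration')
        # send_configuration() is illustrative -- a real charm would build the
        # cinder.conf stanza (see hooks/cinder_contexts.py) and publish it here.
        backend.send_configuration(hookenv.config())
        set_state('ds8k.configured')
        hookenv.status_set('active', 'DS8K backend configured')


    @when('backend.departing')
    def cleanup(backend):
        """Relation removed: forget that the backend was configured."""
        remove_state('ds8k.configured')
        hookenv.status_set('blocked', 'storage-backend relation removed')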

hooks/relations/cinder-backend/interface.yaml

1
--- 
2
+++ hooks/relations/cinder-backend/interface.yaml
3
@@ -0,0 +1,5 @@
4
+name: cinder-backend
5
+summary: Interface to connect OpenStack Cinder and IBM Cinder volume driver charms
6
+maintainer: IBM Juju Support Team <jujusupp@us.ibm.com>
7
+version: 1
8
+#repo: ""

hooks/relations/cinder-backend/provides.py

 1
--- 
 2
+++ hooks/relations/cinder-backend/provides.py
 3
@@ -0,0 +1,20 @@
 4
+from charms.reactive import hook
 5
+from charms.reactive import RelationBase
 6
+from charms.reactive import scopes
 7
+
 8
+
 9
+class driverProvides(RelationBase):
10
+    # Every unit connecting will get the same information
11
+    scope = scopes.GLOBAL
12
+
13
+    @hook('{provides:cinder-backend}-relation-{joined,changed}')
14
+    def changed(self):
15
+        conversation = self.conversation()
16
+        conversation.set_state('{relation_name}.available')
17
+        conversation.remove_state('{relation_name}.departing')
18
+
19
+    @hook('{provides:cinder-backend}-relation-{broken,departed}')
20
+    def broken(self):
21
+        conversation = self.conversation()
22
+        conversation.remove_state('{relation_name}.available')
23
+        conversation.set_state('{relation_name}.departing')

hooks/relations/cinder-backend/requires.py

 1
--- 
 2
+++ hooks/relations/cinder-backend/requires.py
 3
@@ -0,0 +1,23 @@
 4
+from charms.reactive import hook
 5
+from charms.reactive import RelationBase
 6
+from charms.reactive import scopes
 7
+
 8
+
 9
+class cinderRequires(RelationBase):
10
+    scope = scopes.GLOBAL
11
+
12
+    @hook('{requires:cinder-backend}-relation-joined')
13
+    def joined(self):
14
+        self.remove_state('{relation_name}.departing')
15
+        self.set_state('{relation_name}.available')
16
+
17
+    @hook('{requires:cinder-backend}-relation-changed')
18
+    def changed(self):
19
+        self.remove_state('{relation_name}.departing')
20
+        self.set_state('{relation_name}.changing')
21
+
22
+    @hook('{requires:cinder-backend}-relation-departed')
23
+    def departed(self):
24
+        self.remove_state('{relation_name}.available')
25
+        self.remove_state('{relation_name}.changing')
26
+        self.set_state('{relation_name}.departing')

hooks/start

 1
--- 
 2
+++ hooks/start
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()

hooks/stop

 1
--- 
 2
+++ hooks/stop
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()

hooks/storage-backend-relation-broken

 1
--- 
 2
+++ hooks/storage-backend-relation-broken
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()

hooks/storage-backend-relation-changed

 1
--- 
 2
+++ hooks/storage-backend-relation-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()

hooks/storage-backend-relation-departed

 1
--- 
 2
+++ hooks/storage-backend-relation-departed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()

hooks/storage-backend-relation-joined

 1
--- 
 2
+++ hooks/storage-backend-relation-joined
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()

hooks/update-status

 1
--- 
 2
+++ hooks/update-status
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()

hooks/upgrade-charm

 1
--- 
 2
+++ hooks/upgrade-charm
 3
@@ -0,0 +1,28 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import os
 8
+import sys
 9
+sys.path.append('lib')
10
+
11
+# This is an upgrade-charm context, make sure we install latest deps
12
+if not os.path.exists('wheelhouse/.upgrade'):
13
+    open('wheelhouse/.upgrade', 'w').close()
14
+    if os.path.exists('wheelhouse/.bootstrapped'):
15
+        os.unlink('wheelhouse/.bootstrapped')
16
+else:
17
+    os.unlink('wheelhouse/.upgrade')
18
+
19
+from charms.layer import basic
20
+basic.bootstrap_charm_deps()
21
+basic.init_config_states()
22
+
23
+
24
+# This will load and run the appropriate @hook and other decorated
25
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
26
+# and $CHARM_DIR/hooks/relations.
27
+#
28
+# See https://jujucharms.com/docs/stable/authors-charm-building
29
+# for more information on this pattern.
30
+from charms.reactive import main
31
+main()

icon.svg

  1
--- 
  2
+++ icon.svg
  3
@@ -0,0 +1,636 @@
  4
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
  5
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
  6
+
  7
+<svg
  8
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
  9
+   xmlns:cc="http://creativecommons.org/ns#"
 10
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
 11
+   xmlns:svg="http://www.w3.org/2000/svg"
 12
+   xmlns="http://www.w3.org/2000/svg"
 13
+   xmlns:xlink="http://www.w3.org/1999/xlink"
 14
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
 15
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
 16
+   sodipodi:docname="openstack-cinder.svg"
 17
+   inkscape:version="0.48+devel r12591"
 18
+   version="1.1"
 19
+   id="svg6517"
 20
+   height="96"
 21
+   width="96">
 22
+  <sodipodi:namedview
 23
+     id="base"
 24
+     pagecolor="#ffffff"
 25
+     bordercolor="#666666"
 26
+     borderopacity="1.0"
 27
+     inkscape:pageopacity="0.0"
 28
+     inkscape:pageshadow="2"
 29
+     inkscape:zoom="2.0861625"
 30
+     inkscape:cx="100.56201"
 31
+     inkscape:cy="47.468164"
 32
+     inkscape:document-units="px"
 33
+     inkscape:current-layer="layer1"
 34
+     showgrid="false"
 35
+     fit-margin-top="0"
 36
+     fit-margin-left="0"
 37
+     fit-margin-right="0"
 38
+     fit-margin-bottom="0"
 39
+     inkscape:window-width="1920"
 40
+     inkscape:window-height="1029"
 41
+     inkscape:window-x="0"
 42
+     inkscape:window-y="24"
 43
+     inkscape:window-maximized="1"
 44
+     showborder="true"
 45
+     showguides="false"
 46
+     inkscape:guide-bbox="true"
 47
+     inkscape:showpageshadow="false"
 48
+     inkscape:snap-global="true"
 49
+     inkscape:snap-bbox="true"
 50
+     inkscape:bbox-paths="true"
 51
+     inkscape:bbox-nodes="true"
 52
+     inkscape:snap-bbox-edge-midpoints="true"
 53
+     inkscape:snap-bbox-midpoints="true"
 54
+     inkscape:object-paths="true"
 55
+     inkscape:snap-intersection-paths="true"
 56
+     inkscape:object-nodes="true"
 57
+     inkscape:snap-smooth-nodes="true"
 58
+     inkscape:snap-midpoints="true"
 59
+     inkscape:snap-object-midpoints="true"
 60
+     inkscape:snap-center="true"
 61
+     inkscape:snap-grids="false"
 62
+     inkscape:snap-nodes="true"
 63
+     inkscape:snap-others="false">
 64
+    <inkscape:grid
 65
+       id="grid821"
 66
+       type="xygrid" />
 67
+    <sodipodi:guide
 68
+       id="guide823"
 69
+       position="18.34962,45.78585"
 70
+       orientation="1,0" />
 71
+    <sodipodi:guide
 72
+       id="guide827"
 73
+       position="78.02001,46.32673"
 74
+       orientation="1,0" />
 75
+    <sodipodi:guide
 76
+       inkscape:label=""
 77
+       id="guide4184"
 78
+       position="65.586619,19.307"
 79
+       orientation="-0.087155743,0.9961947" />
 80
+    <sodipodi:guide
 81
+       inkscape:label=""
 82
+       id="guide4188"
 83
+       position="62.756032,71.583147"
 84
+       orientation="-0.087155743,0.9961947" />
 85
+    <sodipodi:guide
 86
+       inkscape:label=""
 87
+       id="guide4190"
 88
+       position="47.812194,78.049658"
 89
+       orientation="-0.087155743,0.9961947" />
 90
+    <sodipodi:guide
 91
+       id="guide4194"
 92
+       position="25.60516,42.21665"
 93
+       orientation="1,0" />
 94
+    <sodipodi:guide
 95
+       inkscape:label=""
 96
+       id="guide4202"
 97
+       position="25.60516,42.070975"
 98
+       orientation="-0.087155743,0.9961947" />
 99
+    <sodipodi:guide
100
+       inkscape:label=""
101
+       id="guide4204"
102
+       position="25.60516,42.070975"
103
+       orientation="-0.70710678,-0.70710678" />
104
+    <sodipodi:guide
105
+       inkscape:label=""
106
+       id="guide4242"
107
+       position="51.81985,44.36226"
108
+       orientation="-0.70710678,-0.70710678" />
109
+    <sodipodi:guide
110
+       inkscape:label=""
111
+       id="guide4252"
112
+       position="73.5625,75.210937"
113
+       orientation="-0.70710678,-0.70710678" />
114
+    <sodipodi:guide
115
+       inkscape:label=""
116
+       inkscape:color="rgb(140,140,240)"
117
+       id="guide4254"
118
+       position="18.34962,75.472017"
119
+       orientation="-0.70710678,-0.70710678" />
120
+    <sodipodi:guide
121
+       inkscape:label=""
122
+       id="guide4288"
123
+       position="21.871042,21.577512"
124
+       orientation="-0.70710678,-0.70710678" />
125
+  </sodipodi:namedview>
126
+  <defs
127
+     id="defs6519">
128
+    <filter
129
+       id="filter1121"
130
+       inkscape:label="Inner Shadow"
131
+       style="color-interpolation-filters:sRGB;">
132
+      <feFlood
133
+         id="feFlood1123"
134
+         result="flood"
135
+         flood-color="rgb(0,0,0)"
136
+         flood-opacity="0.59999999999999998" />
137
+      <feComposite
138
+         id="feComposite1125"
139
+         result="composite1"
140
+         operator="out"
141
+         in2="SourceGraphic"
142
+         in="flood" />
143
+      <feGaussianBlur
144
+         id="feGaussianBlur1127"
145
+         result="blur"
146
+         stdDeviation="1"
147
+         in="composite1" />
148
+      <feOffset
149
+         id="feOffset1129"
150
+         result="offset"
151
+         dy="2"
152
+         dx="0" />
153
+      <feComposite
154
+         id="feComposite1131"
155
+         result="composite2"
156
+         operator="atop"
157
+         in2="SourceGraphic"
158
+         in="offset" />
159
+    </filter>
160
+    <filter
161
+       id="filter950"
162
+       inkscape:label="Drop Shadow"
163
+       style="color-interpolation-filters:sRGB;">
164
+      <feFlood
165
+         id="feFlood952"
166
+         result="flood"
167
+         flood-color="rgb(0,0,0)"
168
+         flood-opacity="0.25" />
169
+      <feComposite
170
+         id="feComposite954"
171
+         result="composite1"
172
+         operator="in"
173
+         in2="SourceGraphic"
174
+         in="flood" />
175
+      <feGaussianBlur
176
+         id="feGaussianBlur956"
177
+         result="blur"
178
+         stdDeviation="1"
179
+         in="composite1" />
180
+      <feOffset
181
+         id="feOffset958"
182
+         result="offset"
183
+         dy="1"
184
+         dx="0" />
185
+      <feComposite
186
+         id="feComposite960"
187
+         result="composite2"
188
+         operator="over"
189
+         in2="offset"
190
+         in="SourceGraphic" />
191
+    </filter>
192
+    <filter
193
+       inkscape:label="Badge Shadow"
194
+       id="filter891"
195
+       inkscape:collect="always">
196
+      <feGaussianBlur
197
+         id="feGaussianBlur893"
198
+         stdDeviation="0.71999962"
199
+         inkscape:collect="always" />
200
+    </filter>
201
+    <filter
202
+       inkscape:collect="always"
203
+       id="filter3831">
204
+      <feGaussianBlur
205
+         inkscape:collect="always"
206
+         stdDeviation="0.86309522"
207
+         id="feGaussianBlur3833" />
208
+    </filter>
209
+    <filter
210
+       inkscape:collect="always"
211
+       id="filter3868"
212
+       x="-0.17186206"
213
+       width="1.3437241"
214
+       y="-0.1643077"
215
+       height="1.3286154">
216
+      <feGaussianBlur
217
+         inkscape:collect="always"
218
+         stdDeviation="0.62628186"
219
+         id="feGaussianBlur3870" />
220
+    </filter>
221
+    <linearGradient
222
+       id="linearGradient4328"
223
+       inkscape:collect="always">
224
+      <stop
225
+         id="stop4330"
226
+         offset="0"
227
+         style="stop-color:#871f1c;stop-opacity:1;" />
228
+      <stop
229
+         id="stop4332"
230
+         offset="1"
231
+         style="stop-color:#651715;stop-opacity:1" />
232
+    </linearGradient>
233
+    <linearGradient
234
+       id="linearGradient902"
235
+       inkscape:collect="always">
236
+      <stop
237
+         id="stop904"
238
+         offset="0"
239
+         style="stop-color:#cccccc;stop-opacity:1" />
240
+      <stop
241
+         id="stop906"
242
+         offset="1"
243
+         style="stop-color:#e6e6e6;stop-opacity:1" />
244
+    </linearGradient>
245
+    <linearGradient
246
+       id="Background">
247
+      <stop
248
+         style="stop-color:#22779e;stop-opacity:1"
249
+         offset="0"
250
+         id="stop4178" />
251
+      <stop
252
+         style="stop-color:#2991c0;stop-opacity:1"
253
+         offset="1"
254
+         id="stop4180" />
255
+    </linearGradient>
256
+    <clipPath
257
+       id="clipPath873"
258
+       clipPathUnits="userSpaceOnUse">
259
+      <g
260
+         style="fill:#ff00ff;fill-opacity:1;stroke:none;display:inline"
261
+         inkscape:label="Layer 1"
262
+         id="g875"
263
+         transform="matrix(0,-0.66666667,0.66604479,0,-258.25992,677.00001)">
264
+        <path
265
+           sodipodi:nodetypes="sssssssss"
266
+           inkscape:connector-curvature="0"
267
+           id="path877"
268
+           d="m 46.702703,898.22775 50.594594,0 C 138.16216,898.22775 144,904.06497 144,944.92583 l 0,50.73846 c 0,40.86071 -5.83784,46.69791 -46.702703,46.69791 l -50.594594,0 C 5.8378378,1042.3622 0,1036.525 0,995.66429 L 0,944.92583 C 0,904.06497 5.8378378,898.22775 46.702703,898.22775 Z"
269
+           style="fill:#ff00ff;fill-opacity:1;stroke:none;display:inline" />
270
+      </g>
271
+    </clipPath>
272
+    <style
273
+       type="text/css"
274
+       id="style867">
275
+    .fil0 {fill:#1F1A17}
276
+   </style>
277
+    <linearGradient
278
+       gradientUnits="userSpaceOnUse"
279
+       y2="635.29077"
280
+       x2="-220"
281
+       y1="731.29077"
282
+       x1="-220"
283
+       id="linearGradient908"
284
+       xlink:href="#linearGradient902"
285
+       inkscape:collect="always" />
286
+    <clipPath
287
+       id="clipPath16">
288
+      <path
289
+         d="m -9,-9 614,0 0,231 -614,0 0,-231 z"
290
+         id="path18" />
291
+    </clipPath>
292
+    <clipPath
293
+       id="clipPath116">
294
+      <path
295
+         d="m 91.7368,146.3253 -9.7039,-1.577 -8.8548,-3.8814 -7.5206,-4.7308 -7.1566,-8.7335 -4.0431,-4.282 -3.9093,-1.4409 -1.034,2.5271 1.8079,2.6096 0.4062,3.6802 1.211,-0.0488 1.3232,-1.2069 -0.3569,3.7488 -1.4667,0.9839 0.0445,1.4286 -3.4744,-1.9655 -3.1462,-3.712 -0.6559,-3.3176 1.3453,-2.6567 1.2549,-4.5133 2.5521,-1.2084 2.6847,0.1318 2.5455,1.4791 -1.698,-8.6122 1.698,-9.5825 -1.8692,-4.4246 -6.1223,-6.5965 1.0885,-3.941 2.9002,-4.5669 5.4688,-3.8486 2.9007,-0.3969 3.225,-0.1094 -2.012,-8.2601 7.3993,-3.0326 9.2188,-1.2129 3.1535,2.0619 0.2427,5.5797 3.5178,5.8224 0.2426,4.6094 8.4909,-0.6066 7.8843,0.7279 -7.8843,-4.7307 1.3343,-5.701 4.9731,-7.763 4.8521,-2.0622 3.8814,1.5769 1.577,3.1538 8.1269,6.1861 1.5769,-1.3343 12.7363,-0.485 2.5473,2.0619 0.2426,3.6391 -0.849,1.5767 -0.6066,9.8251 -4.2454,8.4909 0.7276,3.7605 2.5475,-1.3343 7.1566,-6.6716 3.5175,-0.2424 3.8815,1.5769 3.8818,2.9109 1.9406,6.3077 11.4021,-0.7277 6.914,2.6686 5.5797,5.2157 4.0028,7.5206 0.9706,8.8546 -0.8493,10.3105 -2.1832,9.2185 -2.1836,2.9112 -3.0322,0.9706 -5.3373,-5.8224 -4.8518,-1.6982 -4.2455,7.0353 -4.2454,3.8815 -2.3049,1.4556 -9.2185,7.6419 -7.3993,4.0028 -7.3993,0.6066 -8.6119,-1.4556 -7.5206,-2.7899 -5.2158,-4.2454 -4.1241,-4.9734 -4.2454,-1.2129"
296
+         id="path118" />
297
+    </clipPath>
298
+    <clipPath
299
+       id="clipPath128">
300
+      <path
301
+         d="m 91.7368,146.3253 -9.7039,-1.577 -8.8548,-3.8814 -7.5206,-4.7308 -7.1566,-8.7335 -4.0431,-4.282 -3.9093,-1.4409 -1.034,2.5271 1.8079,2.6096 0.4062,3.6802 1.211,-0.0488 1.3232,-1.2069 -0.3569,3.7488 -1.4667,0.9839 0.0445,1.4286 -3.4744,-1.9655 -3.1462,-3.712 -0.6559,-3.3176 1.3453,-2.6567 1.2549,-4.5133 2.5521,-1.2084 2.6847,0.1318 2.5455,1.4791 -1.698,-8.6122 1.698,-9.5825 -1.8692,-4.4246 -6.1223,-6.5965 1.0885,-3.941 2.9002,-4.5669 5.4688,-3.8486 2.9007,-0.3969 3.225,-0.1094 -2.012,-8.2601 7.3993,-3.0326 9.2188,-1.2129 3.1535,2.0619 0.2427,5.5797 3.5178,5.8224 0.2426,4.6094 8.4909,-0.6066 7.8843,0.7279 -7.8843,-4.7307 1.3343,-5.701 4.9731,-7.763 4.8521,-2.0622 3.8814,1.5769 1.577,3.1538 8.1269,6.1861 1.5769,-1.3343 12.7363,-0.485 2.5473,2.0619 0.2426,3.6391 -0.849,1.5767 -0.6066,9.8251 -4.2454,8.4909 0.7276,3.7605 2.5475,-1.3343 7.1566,-6.6716 3.5175,-0.2424 3.8815,1.5769 3.8818,2.9109 1.9406,6.3077 11.4021,-0.7277 6.914,2.6686 5.5797,5.2157 4.0028,7.5206 0.9706,8.8546 -0.8493,10.3105 -2.1832,9.2185 -2.1836,2.9112 -3.0322,0.9706 -5.3373,-5.8224 -4.8518,-1.6982 -4.2455,7.0353 -4.2454,3.8815 -2.3049,1.4556 -9.2185,7.6419 -7.3993,4.0028 -7.3993,0.6066 -8.6119,-1.4556 -7.5206,-2.7899 -5.2158,-4.2454 -4.1241,-4.9734 -4.2454,-1.2129"
302
+         id="path130" />
303
+    </clipPath>
304
+    <linearGradient
305
+       inkscape:collect="always"
306
+       id="linearGradient3850">
307
+      <stop
308
+         style="stop-color:#000000;stop-opacity:1;"
309
+         offset="0"
310
+         id="stop3852" />
311
+      <stop
312
+         style="stop-color:#000000;stop-opacity:0;"
313
+         offset="1"
314
+         id="stop3854" />
315
+    </linearGradient>
316
+    <clipPath
317
+       id="clipPath3095"
318
+       clipPathUnits="userSpaceOnUse">
319
+      <path
320
+         inkscape:connector-curvature="0"
321
+         id="path3097"
322
+         d="m 976.648,389.551 -842.402,0 0,839.999 842.402,0 0,-839.999" />
323
+    </clipPath>
324
+    <clipPath
325
+       id="clipPath3195"
326
+       clipPathUnits="userSpaceOnUse">
327
+      <path
328
+         inkscape:connector-curvature="0"
329
+         id="path3197"
330
+         d="m 611.836,756.738 -106.34,105.207 c -8.473,8.289 -13.617,20.102 -13.598,33.379 L 598.301,790.207 c -0.031,-13.418 5.094,-25.031 13.535,-33.469" />
331
+    </clipPath>
332
+    <clipPath
333
+       id="clipPath3235"
334
+       clipPathUnits="userSpaceOnUse">
335
+      <path
336
+         inkscape:connector-curvature="0"
337
+         id="path3237"
338
+         d="m 1095.64,1501.81 c 35.46,-35.07 70.89,-70.11 106.35,-105.17 4.4,-4.38 7.11,-10.53 7.11,-17.55 l -106.37,105.21 c 0,7 -2.71,13.11 -7.09,17.51" />
339
+    </clipPath>
340
+    <linearGradient
341
+       inkscape:collect="always"
342
+       id="linearGradient4389">
343
+      <stop
344
+         style="stop-color:#871f1c;stop-opacity:1"
345
+         offset="0"
346
+         id="stop4391" />
347
+      <stop
348
+         style="stop-color:#c42e24;stop-opacity:1"
349
+         offset="1"
350
+         id="stop4393" />
351
+    </linearGradient>
352
+    <clipPath
353
+       clipPathUnits="userSpaceOnUse"
354
+       id="clipPath4591">
355
+      <path
356
+         id="path4593"
357
+         style="fill:#ff00ff;fill-opacity:1;fill-rule:nonzero;stroke:none"
358
+         d="m 1106.6009,730.43734 -0.036,21.648 c -0.01,3.50825 -2.8675,6.61375 -6.4037,6.92525 l -83.6503,7.33162 c -3.5205,0.30763 -6.3812,-2.29987 -6.3671,-5.8145 l 0.036,-21.6475 20.1171,-1.76662 -0.011,4.63775 c 0,1.83937 1.4844,3.19925 3.3262,3.0395 l 49.5274,-4.33975 c 1.8425,-0.166 3.3425,-1.78125 3.3538,-3.626 l 0.01,-4.63025 20.1,-1.7575"
359
+         inkscape:connector-curvature="0" />
360
+    </clipPath>
361
+    <radialGradient
362
+       inkscape:collect="always"
363
+       xlink:href="#linearGradient3850"
364
+       id="radialGradient3856"
365
+       cx="-26.508606"
366
+       cy="93.399292"
367
+       fx="-26.508606"
368
+       fy="93.399292"
369
+       r="20.40658"
370
+       gradientTransform="matrix(-1.4333926,-2.2742838,1.1731823,-0.73941125,-174.08025,98.374394)"
371
+       gradientUnits="userSpaceOnUse" />
372
+    <filter
373
+       inkscape:collect="always"
374
+       id="filter3885">
375
+      <feGaussianBlur
376
+         inkscape:collect="always"
377
+         stdDeviation="5.7442192"
378
+         id="feGaussianBlur3887" />
379
+    </filter>
380
+    <linearGradient
381
+       inkscape:collect="always"
382
+       xlink:href="#linearGradient3850"
383
+       id="linearGradient3895"
384
+       x1="348.20132"
385
+       y1="593.11615"
386
+       x2="-51.879555"
387
+       y2="993.19702"
388
+       gradientUnits="userSpaceOnUse"
389
+       gradientTransform="translate(-318.48033,212.32022)" />
390
+    <radialGradient
391
+       inkscape:collect="always"
392
+       xlink:href="#linearGradient3850"
393
+       id="radialGradient3902"
394
+       gradientUnits="userSpaceOnUse"
395
+       gradientTransform="matrix(-1.4333926,-2.2742838,1.1731823,-0.73941125,-174.08025,98.374394)"
396
+       cx="-26.508606"
397
+       cy="93.399292"
398
+       fx="-26.508606"
399
+       fy="93.399292"
400
+       r="20.40658" />
401
+    <linearGradient
402
+       inkscape:collect="always"
403
+       xlink:href="#linearGradient3850"
404
+       id="linearGradient3904"
405
+       gradientUnits="userSpaceOnUse"
406
+       gradientTransform="translate(-318.48033,212.32022)"
407
+       x1="348.20132"
408
+       y1="593.11615"
409
+       x2="-51.879555"
410
+       y2="993.19702" />
411
+    <linearGradient
412
+       gradientUnits="userSpaceOnUse"
413
+       y2="23.383789"
414
+       x2="25.217773"
415
+       y1="27.095703"
416
+       x1="21.505859"
417
+       id="linearGradient4318"
418
+       xlink:href="#linearGradient4389"
419
+       inkscape:collect="always" />
420
+    <linearGradient
421
+       gradientUnits="userSpaceOnUse"
422
+       y2="20.884073"
423
+       x2="71.960243"
424
+       y1="20.041777"
425
+       x1="72.802544"
426
+       id="linearGradient4326"
427
+       xlink:href="#linearGradient4389"
428
+       inkscape:collect="always" />
429
+    <linearGradient
430
+       gradientUnits="userSpaceOnUse"
431
+       y2="74.246689"
432
+       x2="21.69179"
433
+       y1="73.643555"
434
+       x1="22.294922"
435
+       id="linearGradient4334"
436
+       xlink:href="#linearGradient4328"
437
+       inkscape:collect="always" />
438
+  </defs>
439
+  <metadata
440
+     id="metadata6522">
441
+    <rdf:RDF>
442
+      <cc:Work
443
+         rdf:about="">
444
+        <dc:format>image/svg+xml</dc:format>
445
+        <dc:type
446
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
447
+        <dc:title></dc:title>
448
+      </cc:Work>
449
+    </rdf:RDF>
450
+  </metadata>
451
+  <g
452
+     style="display:inline"
453
+     transform="translate(268,-635.29076)"
454
+     id="layer1"
455
+     inkscape:groupmode="layer"
456
+     inkscape:label="BACKGROUND">
457
+    <path
458
+       sodipodi:nodetypes="sssssssss"
459
+       inkscape:connector-curvature="0"
460
+       id="path6455"
461
+       d="m -268,700.15563 0,-33.72973 c 0,-27.24324 3.88785,-31.13513 31.10302,-31.13513 l 33.79408,0 c 27.21507,0 31.1029,3.89189 31.1029,31.13513 l 0,33.72973 c 0,27.24325 -3.88783,31.13514 -31.1029,31.13514 l -33.79408,0 C -264.11215,731.29077 -268,727.39888 -268,700.15563 Z"
462
+       style="fill:url(#linearGradient908);fill-opacity:1;stroke:none;display:inline;filter:url(#filter1121)" />
463
+    <g
464
+       id="g4336">
465
+      <g
466
+         transform="matrix(0.06790711,0,0,-0.06790711,-239.0411,765.68623)"
467
+         id="g3897"
468
+         xml:space="default">
469
+        <path
470
+           inkscape:connector-curvature="0"
471
+           style="opacity:0.7;color:#000000;fill:url(#radialGradient3902);fill-opacity:1;stroke:none;stroke-width:2;marker:none;visibility:visible;display:inline;overflow:visible;filter:url(#filter3831);enable-background:accumulate"
472
+           d="m -48.09375,67.8125 c -0.873996,-0.0028 -2.089735,0.01993 -3.40625,0.09375 -2.633031,0.147647 -5.700107,0.471759 -7.78125,1.53125 a 1.0001,1.0001 0 0 0 -0.25,1.59375 L -38.8125,92.375 a 1.0001,1.0001 0 0 0 0.84375,0.3125 L -24,90.5625 a 1.0001,1.0001 0 0 0 0.53125,-1.71875 L -46.0625,68.125 a 1.0001,1.0001 0 0 0 -0.625,-0.28125 c 0,0 -0.532254,-0.02842 -1.40625,-0.03125 z"
473
+           transform="matrix(10.616011,0,0,-10.616011,357.98166,1725.8152)"
474
+           id="path3821"
475
+           xml:space="default" />
476
+        <path
477
+           style="opacity:0.6;color:#000000;fill:none;stroke:#000000;stroke-width:2.77429962;stroke-linecap:round;marker:none;visibility:visible;display:inline;overflow:visible;filter:url(#filter3868);enable-background:accumulate"
478
+           d="m -15.782705,81.725197 8.7458304,9.147937"
479
+           id="path3858"
480
+           inkscape:connector-curvature="0"
481
+           transform="matrix(10.616011,0,0,-10.616011,39.50133,1725.8152)"
482
+           xml:space="default" />
483
+        <path
484
+           style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-indent:0;text-align:start;text-decoration:none;line-height:normal;letter-spacing:normal;word-spacing:normal;text-transform:none;direction:ltr;block-progression:tb;writing-mode:lr-tb;text-anchor:start;baseline-shift:baseline;opacity:0.3;color:#000000;fill:url(#linearGradient3904);fill-opacity:1;stroke:none;stroke-width:2;marker:none;visibility:visible;display:inline;overflow:visible;filter:url(#filter3885);enable-background:accumulate;font-family:Sans;-inkscape-font-specification:Sans"
485
+           d="m -95.18931,981.03569 a 10.617073,10.617073 0 0 1 -0.995251,-0.3318 l -42.795789,-5.308 a 10.617073,10.617073 0 0 1 -6.30326,-17.9145 L -4.2897203,812.5065 a 10.617073,10.617073 0 0 1 8.95726,-3.3175 l 49.0990503,7.63026 a 10.617073,10.617073 0 0 1 5.97151,17.91452 L -87.55905,978.04989 a 10.617073,10.617073 0 0 1 -7.63026,2.9858 z"
486
+           id="path3874"
487
+           inkscape:connector-curvature="0"
488
+           xml:space="default" />
489
+      </g>
490
+      <path
491
+         style="opacity:1;color:#000000;fill:#871f1c;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:0.1;marker:none;visibility:visible;display:inline;overflow:visible;enable-background:accumulate"
492
+         d="M 20.697266 20.515625 C 19.336871 21.10204 18.348875 22.456253 18.345703 23.970703 L 18.345703 24 C 18.345703 23.9808 18.353156 23.962559 18.353516 23.943359 L 18.353516 28.300781 L 18.353516 35.341797 L 21.425781 38.349609 L 18.353516 38.625 L 18.353516 55.039062 L 21.425781 58.046875 L 18.353516 58.322266 L 18.353516 55.039062 L 18.345703 24.0625 L 18.353516 69.601562 C 18.349848 70.477025 18.685456 71.239319 19.222656 71.802734 L 19.212891 71.8125 L 19.357422 71.955078 C 19.360505 71.957909 19.364093 71.960073 19.367188 71.962891 L 26.660156 79.126953 L 33.458984 71.771484 L 21.814453 72.791016 C 21.791653 72.793016 21.770747 72.789016 21.748047 72.791016 L 33.488281 71.738281 L 67.492188 68.685547 C 67.874994 68.651208 68.237746 68.545454 68.578125 68.394531 L 55.199219 55.015625 L 25.611328 57.671875 L 25.611328 54.388672 L 52.1875 52.003906 L 37.123047 36.941406 L 25.611328 37.974609 L 25.611328 34.691406 L 34.111328 33.927734 L 20.697266 20.515625 z "
493
+         transform="translate(-268,635.29076)"
494
+         id="path4308" />
495
+      <path
496
+         style="color:#000000;fill:#c42e24;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:0.1;marker:none;visibility:visible;display:inline;overflow:visible;enable-background:accumulate"
497
+         d="m -200.67969,651.54467 -45.49804,3.95898 c -0.39583,0.0351 -0.7701,0.14975 -1.125,0.30273 l 13.41406,13.41211 36.65625,-3.28711 0.01,0.74415 6.45508,-6.98633 -7.33984,-7.21875 -0.008,0.01 c -0.63301,-0.64671 -1.5421,-1.01814 -2.56446,-0.93554 z m -39,3.42382 -6.67187,0.59766 c 0.0594,-0.008 0.11568,-0.0282 0.17578,-0.0332 z m 42.44727,14.2461 -33.64453,3.01758 15.06445,15.0625 18.57813,-1.66602 0.002,-2.13672 0,-14.27734 z m -0.002,19.69531 -15.56641,1.39648 13.37891,13.37891 c 0.053,-0.0235 0.10451,-0.0502 0.15625,-0.0762 1.19087,-0.65347 2.02247,-1.91423 2.02539,-3.30274 l 0.006,-11.39648 z"
498
+         id="path4233"
499
+         inkscape:connector-curvature="0"
500
+         xml:space="default"
501
+         sodipodi:nodetypes="ccccccccccccccccccccccccccccc" />
502
+      <path
503
+         style="fill:#df4438;fill-opacity:1;fill-rule:nonzero;stroke:none"
504
+         d="m -193.41992,658.68199 -39.00195,3.39453 -6.66993,0.59766 c -1.81216,0.25153 -3.26311,1.84158 -3.29687,3.66797 l 0,11.39843 52.41406,-4.70117 0,-11.34375 c -0.0805,-1.83267 -1.58243,-3.16418 -3.44531,-3.01367 z"
505
+         id="path4674" />
506
+      <path
507
+         style="fill:#dd3b2f;fill-opacity:1;fill-rule:nonzero;stroke:none"
508
+         d="m -189.97461,676.32262 -52.41406,4.70117 0,16.41406 52.41406,-4.70117 0,-16.41406 z"
509
+         id="path4672" />
510
+      <path
511
+         style="fill:#d93023;fill-opacity:1;fill-rule:nonzero;stroke:none"
512
+         d="m -189.97461,696.01793 -52.41406,4.70312 0.002,11.3086 c -0.008,1.88995 1.51656,3.29383 3.40235,3.16015 l 45.73437,-4.10547 c 0.66788,-0.0599 1.28587,-0.3155 1.80273,-0.70312 0.88331,-0.70488 1.46437,-1.77799 1.4668,-2.9375 l 0.006,-11.42578 z"
513
+         id="path4670" />
514
+      <path
515
+         style="fill:#d93023;fill-opacity:1;fill-rule:nonzero;stroke:none"
516
+         d="m -191.44727,710.38121 c -0.0994,0.0793 -0.20788,0.14708 -0.31445,0.2168 0.10723,-0.0697 0.21469,-0.13718 0.31445,-0.2168 z"
517
+         id="path4668" />
518
+      <path
519
+         style="fill:#d93023;fill-opacity:1;fill-rule:nonzero;stroke:none"
520
+         d="m -191.96484,710.72496 c -0.0984,0.0562 -0.19952,0.10691 -0.30274,0.1543 0.10395,-0.0471 0.20372,-0.0983 0.30274,-0.1543 z"
521
+         id="path4666" />
522
+      <path
523
+         style="fill:#d93023;fill-opacity:1;fill-rule:nonzero;stroke:none"
524
+         d="m -192.58594,711.00426 c -0.082,0.0289 -0.1637,0.0589 -0.24804,0.082 0.0849,-0.0229 0.16545,-0.0534 0.24804,-0.082 z"
525
+         id="path4633" />
526
+      <rect
527
+         xml:space="default"
528
+         y="648.49109"
529
+         x="-258.70667"
530
+         height="69.20665"
531
+         width="69.20665"
532
+         id="rect3585-3"
533
+         style="opacity:0.8;color:#000000;fill:none;stroke:none;stroke-width:4;marker:none;visibility:visible;display:inline;overflow:visible;enable-background:accumulate" />
534
+      <path
535
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-indent:0;text-align:start;text-decoration:none;line-height:normal;letter-spacing:normal;word-spacing:normal;text-transform:none;direction:ltr;block-progression:tb;writing-mode:lr-tb;text-anchor:start;baseline-shift:baseline;opacity:1;color:#000000;color-interpolation:sRGB;color-interpolation-filters:sRGB;fill:url(#linearGradient4318);fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:5.25;marker:none;visibility:visible;display:inline;overflow:visible;enable-background:accumulate;clip-rule:nonzero;color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;font-family:sans-serif;-inkscape-font-specification:sans-serif"
536
+         d="M 22.029297 20.195312 L 21.822266 20.212891 C 19.919838 20.381715 18.370776 22.043134 18.349609 23.939453 L 24.662109 30.251953 L 25.605469 31.195312 L 25.605469 31.103516 C 25.609469 29.193966 27.168951 27.515473 29.082031 27.345703 L 29.171875 27.337891 L 28.373047 26.539062 L 22.029297 20.195312 z "
537
+         transform="translate(-268,635.29076)"
538
+         id="path4256" />
539
+      <path
540
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-indent:0;text-align:start;text-decoration:none;line-height:normal;letter-spacing:normal;word-spacing:normal;text-transform:none;direction:ltr;block-progression:tb;writing-mode:lr-tb;text-anchor:start;baseline-shift:baseline;opacity:0.5;color:#000000;color-interpolation:sRGB;color-interpolation-filters:sRGB;fill:url(#linearGradient4326);fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:2.4;marker:none;visibility:visible;display:inline;overflow:visible;enable-background:accumulate;clip-rule:nonzero;color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;font-family:sans-serif;-inkscape-font-specification:sans-serif;stroke-miterlimit:4;stroke-dasharray:none"
541
+         d="M 67.330078 16.253906 L 68.03125 16.955078 L 74.472656 23.396484 L 74.580078 23.386719 C 75.531927 23.309814 76.390588 23.620657 77.015625 24.185547 L 69.892578 17.179688 L 69.884766 17.189453 C 69.253843 16.544862 68.348328 16.174551 67.330078 16.253906 z M 77.054688 24.222656 C 77.115589 24.279686 77.164628 24.348282 77.220703 24.410156 L 77.232422 24.398438 L 77.054688 24.222656 z "
542
+         transform="translate(-268,635.29076)"
543
+         id="path4272" />
544
+      <path
545
+         style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-indent:0;text-align:start;text-decoration:none;line-height:normal;letter-spacing:normal;word-spacing:normal;text-transform:none;direction:ltr;block-progression:tb;writing-mode:lr-tb;text-anchor:start;baseline-shift:baseline;opacity:1;color:#000000;color-interpolation:sRGB;color-interpolation-filters:sRGB;fill:url(#linearGradient4334);fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:1.7;marker:none;visibility:visible;display:inline;overflow:visible;enable-background:accumulate;clip-rule:nonzero;color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;font-family:sans-serif;-inkscape-font-specification:sans-serif;stroke-miterlimit:4;stroke-dasharray:none"
546
+         d="M 18.363281 69.712891 C 18.387957 70.540342 18.709001 71.264013 19.222656 71.802734 L 19.212891 71.8125 L 19.357422 71.955078 C 19.360505 71.957909 19.364093 71.960073 19.367188 71.962891 L 26.599609 79.068359 C 26.044831 78.550125 25.698241 77.821152 25.638672 76.988281 L 18.951172 70.298828 L 18.363281 69.712891 z M 26.636719 79.103516 L 26.660156 79.126953 L 26.664062 79.123047 C 26.655656 79.11562 26.645042 79.111033 26.636719 79.103516 z "
547
+         transform="translate(-268,635.29076)"
548
+         id="path4290" />
549
+      <path
550
+         style="color:#000000;fill:#871f1c;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:0.1;marker:none;visibility:visible;display:inline;overflow:visible;enable-background:accumulate"
551
+         d="m 75.006338,38.020624 -45.602041,4.088751 0,3.283203 48.615713,-4.360235 z m 0.002,19.69531 -45.603995,4.090707 0,3.283203 48.615713,-4.362191 z m 1.026864,17.71766 c -0.09902,0.056 -0.198784,0.107197 -0.302734,0.154297 0.10322,-0.04739 0.204334,-0.0981 0.302734,-0.154297 z m -0.621094,0.279297 c -0.08259,0.0286 -0.163146,0.05913 -0.248046,0.08203 0.08434,-0.0231 0.166047,-0.05313 0.248046,-0.08203 z"
552
+         transform="translate(-268,635.29076)"
553
+         id="path4656"
554
+         inkscape:connector-curvature="0"
555
+         sodipodi:nodetypes="cccccccccccccccc" />
556
+      <path
557
+         style="fill:#ffffff;fill-opacity:1;fill-rule:nonzero;stroke:none;opacity:0.3"
558
+         d="M 74.580078 23.390625 L 35.578125 26.785156 L 28.908203 27.382812 C 27.096043 27.634343 25.645088 29.224391 25.611328 31.050781 L 25.611328 31.25 C 25.645088 29.42361 27.096043 27.833561 28.908203 27.582031 L 35.578125 26.984375 L 74.580078 23.589844 C 76.442958 23.439334 77.944891 24.770846 78.025391 26.603516 L 78.025391 26.404297 C 77.944891 24.571627 76.442958 23.240115 74.580078 23.390625 z M 78.025391 41.03125 L 25.611328 45.732422 L 25.611328 45.931641 L 78.025391 41.230469 L 78.025391 41.03125 z M 78.025391 60.726562 L 25.611328 65.429688 L 25.611328 65.628906 L 78.025391 60.925781 L 78.025391 60.726562 z "
559
+         transform="translate(-268,635.29076)"
560
+         id="path4676" />
561
+    </g>
562
+  </g>
563
+  <g
564
+     style="display:inline"
565
+     inkscape:label="PLACE YOUR PICTOGRAM HERE"
566
+     id="layer3"
567
+     inkscape:groupmode="layer" />
568
+  <g
569
+     sodipodi:insensitive="true"
570
+     style="display:none"
571
+     inkscape:label="BADGE"
572
+     id="layer2"
573
+     inkscape:groupmode="layer">
574
+    <g
575
+       clip-path="none"
576
+       id="g4394"
577
+       transform="translate(-340.00001,-581)"
578
+       style="display:inline">
579
+      <g
580
+         id="g855">
581
+        <g
582
+           style="opacity:0.6;filter:url(#filter891)"
583
+           clip-path="url(#clipPath873)"
584
+           id="g870"
585
+           inkscape:groupmode="maskhelper">
586
+          <path
587
+             sodipodi:type="arc"
588
+             style="color:#000000;fill:#000000;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:4;marker:none;visibility:visible;display:inline;overflow:visible;enable-background:accumulate"
589
+             id="path844"
590
+             sodipodi:cx="252"
591
+             sodipodi:cy="552.36218"
592
+             sodipodi:rx="12"
593
+             sodipodi:ry="12"
594
+             d="m 264,552.36218 c 0,6.62742 -5.37258,12 -12,12 -6.62742,0 -12,-5.37258 -12,-12 0,-6.62741 5.37258,-12 12,-12 6.62742,0 12,5.37259 12,12 z"
595
+             transform="matrix(1.4999992,0,0,1.4999992,-29.999795,-237.54282)" />
596
+        </g>
597
+        <g
598
+           id="g862">
599
+          <path
600
+             transform="matrix(1.4999992,0,0,1.4999992,-29.999795,-238.54282)"
601
+             d="m 264,552.36218 c 0,6.62742 -5.37258,12 -12,12 -6.62742,0 -12,-5.37258 -12,-12 0,-6.62741 5.37258,-12 12,-12 6.62742,0 12,5.37259 12,12 z"
602
+             sodipodi:ry="12"
603
+             sodipodi:rx="12"
604
+             sodipodi:cy="552.36218"
605
+             sodipodi:cx="252"
606
+             id="path4398"
607
+             style="color:#000000;fill:#f5f5f5;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:4;marker:none;visibility:visible;display:inline;overflow:visible;enable-background:accumulate"
608
+             sodipodi:type="arc" />
609
+          <path
610
+             sodipodi:type="arc"
611
+             style="color:#000000;fill:#dd4814;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:4;marker:none;visibility:visible;display:inline;overflow:visible;enable-background:accumulate"
612
+             id="path4400"
613
+             sodipodi:cx="252"
614
+             sodipodi:cy="552.36218"
615
+             sodipodi:rx="12"
616
+             sodipodi:ry="12"
617
+             d="m 264,552.36218 c 0,6.62742 -5.37258,12 -12,12 -6.62742,0 -12,-5.37258 -12,-12 0,-6.62741 5.37258,-12 12,-12 6.62742,0 12,5.37259 12,12 z"
618
+             transform="matrix(1.25,0,0,1.25,33,-100.45273)" />
619
+          <path
620
+             transform="matrix(1.511423,-0.16366377,0.16366377,1.511423,-755.37346,-191.93651)"
621
+             d="m 669.8173,595.77657 c -0.39132,0.22593 -3.62645,-1.90343 -4.07583,-1.95066 -0.44938,-0.0472 -4.05653,1.36297 -4.39232,1.06062 -0.3358,-0.30235 0.68963,-4.03715 0.59569,-4.47913 -0.0939,-0.44198 -2.5498,-3.43681 -2.36602,-3.8496 0.18379,-0.41279 4.05267,-0.59166 4.44398,-0.81759 0.39132,-0.22593 2.48067,-3.48704 2.93005,-3.4398 0.44938,0.0472 1.81505,3.67147 2.15084,3.97382 0.3358,0.30236 4.08294,1.2817 4.17689,1.72369 0.0939,0.44198 -2.9309,2.86076 -3.11469,3.27355 -0.18379,0.41279 0.0427,4.27917 -0.34859,4.5051 z"
622
+             inkscape:randomized="0"
623
+             inkscape:rounded="0.1"
624
+             inkscape:flatsided="false"
625
+             sodipodi:arg2="1.6755161"
626
+             sodipodi:arg1="1.0471976"
627
+             sodipodi:r2="4.3458705"
628
+             sodipodi:r1="7.2431178"
629
+             sodipodi:cy="589.50385"
630
+             sodipodi:cx="666.19574"
631
+             sodipodi:sides="5"
632
+             id="path4459"
633
+             style="color:#000000;fill:#f5f5f5;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:3;marker:none;visibility:visible;display:inline;overflow:visible;enable-background:accumulate"
634
+             sodipodi:type="star" />
635
+        </g>
636
+      </g>
637
+    </g>
638
+  </g>
639
+</svg>
Back to file index

layer.yaml

 1
--- 
 2
+++ layer.yaml
 3
@@ -0,0 +1,10 @@
 4
+"options":
 5
+  "ibm-cinder-ds8k": {}
 6
+  "basic":
 7
+    "use_venv": !!bool "false"
 8
+    "packages": []
 9
+    "include_system_packages": !!bool "false"
10
+"includes":
11
+- "layer:basic"
12
+- "interface:cinder-backend"
13
+"is": "ibm-cinder-ds8k"
Back to file index

lib/charms/layer/__init__.py

 1
--- 
 2
+++ lib/charms/layer/__init__.py
 3
@@ -0,0 +1,21 @@
 4
+import os
 5
+
 6
+
 7
+class LayerOptions(dict):
 8
+    def __init__(self, layer_file, section=None):
 9
+        import yaml  # defer, might not be available until bootstrap
10
+        with open(layer_file) as f:
11
+            layer = yaml.safe_load(f.read())
12
+        opts = layer.get('options', {})
13
+        if section and section in opts:
14
+            super(LayerOptions, self).__init__(opts.get(section))
15
+        else:
16
+            super(LayerOptions, self).__init__(opts)
17
+
18
+
19
+def options(section=None, layer_file=None):
20
+    if not layer_file:
21
+        base_dir = os.environ.get('CHARM_DIR', os.getcwd())
22
+        layer_file = os.path.join(base_dir, 'layer.yaml')
23
+
24
+    return LayerOptions(layer_file, section)
Back to file index
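
Note: the options() helper above is how the rest of the layer code reads layer.yaml. A minimal usage sketch (the 'basic' section name and its keys come from the layer.yaml shown earlier in this diff):

    from charms import layer

    # read the 'basic' section of layer.yaml
    cfg = layer.options('basic')
    if cfg.get('use_venv'):
        # basic.py bootstraps dependencies into ../.venv in this case
        print('charm dependencies live in a virtualenv')
    print(cfg.get('packages', []))  # extra apt packages; empty for this charm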

lib/charms/layer/basic.py

  1
--- 
  2
+++ lib/charms/layer/basic.py
  3
@@ -0,0 +1,196 @@
  4
+import os
  5
+import sys
  6
+import shutil
  7
+from glob import glob
  8
+from subprocess import check_call
  9
+
 10
+from charms.layer.execd import execd_preinstall
 11
+
 12
+
 13
+def lsb_release():
 14
+    """Return /etc/lsb-release in a dict"""
 15
+    d = {}
 16
+    with open('/etc/lsb-release', 'r') as lsb:
 17
+        for l in lsb:
 18
+            k, v = l.split('=')
 19
+            d[k.strip()] = v.strip()
 20
+    return d
 21
+
 22
+
 23
+def bootstrap_charm_deps():
 24
+    """
 25
+    Set up the base charm dependencies so that the reactive system can run.
 26
+    """
 27
+    # execd must happen first, before any attempt to install packages or
 28
+    # access the network, because sites use this hook to do bespoke
 29
+    # configuration and install secrets so the rest of this bootstrap
 30
+    # and the charm itself can actually succeed. This call does nothing
 31
+    # unless the operator has created and populated $CHARM_DIR/exec.d.
 32
+    execd_preinstall()
 33
+    # ensure that $CHARM_DIR/bin is on the path, for helper scripts
 34
+    os.environ['PATH'] += ':%s' % os.path.join(os.environ['CHARM_DIR'], 'bin')
 35
+    venv = os.path.abspath('../.venv')
 36
+    vbin = os.path.join(venv, 'bin')
 37
+    vpip = os.path.join(vbin, 'pip')
 38
+    vpy = os.path.join(vbin, 'python')
 39
+    if os.path.exists('wheelhouse/.bootstrapped'):
 40
+        activate_venv()
 41
+        return
 42
+    # bootstrap wheelhouse
 43
+    if os.path.exists('wheelhouse'):
 44
+        with open('/root/.pydistutils.cfg', 'w') as fp:
 45
+            # make sure that easy_install also only uses the wheelhouse
 46
+            # (see https://github.com/pypa/pip/issues/410)
 47
+            charm_dir = os.environ['CHARM_DIR']
 48
+            fp.writelines([
 49
+                "[easy_install]\n",
 50
+                "allow_hosts = ''\n",
 51
+                "find_links = file://{}/wheelhouse/\n".format(charm_dir),
 52
+            ])
 53
+        apt_install([
 54
+            'python3-pip',
 55
+            'python3-setuptools',
 56
+            'python3-yaml',
 57
+            'python3-dev',
 58
+        ])
 59
+        from charms import layer
 60
+        cfg = layer.options('basic')
 61
+        # include packages defined in layer.yaml
 62
+        apt_install(cfg.get('packages', []))
 63
+        # if we're using a venv, set it up
 64
+        if cfg.get('use_venv'):
 65
+            if not os.path.exists(venv):
 66
+                series = lsb_release()['DISTRIB_CODENAME']
 67
+                if series in ('precise', 'trusty'):
 68
+                    apt_install(['python-virtualenv'])
 69
+                else:
 70
+                    apt_install(['virtualenv'])
 71
+                cmd = ['virtualenv', '-ppython3', '--never-download', venv]
 72
+                if cfg.get('include_system_packages'):
 73
+                    cmd.append('--system-site-packages')
 74
+                check_call(cmd)
 75
+            os.environ['PATH'] = ':'.join([vbin, os.environ['PATH']])
 76
+            pip = vpip
 77
+        else:
 78
+            pip = 'pip3'
 79
+            # save a copy of system pip to prevent `pip3 install -U pip`
 80
+            # from changing it
 81
+            if os.path.exists('/usr/bin/pip'):
 82
+                shutil.copy2('/usr/bin/pip', '/usr/bin/pip.save')
 83
+        # need newer pip, to fix spurious Double Requirement error:
 84
+        # https://github.com/pypa/pip/issues/56
 85
+        check_call([pip, 'install', '-U', '--no-index', '-f', 'wheelhouse',
 86
+                    'pip'])
 87
+        # install the rest of the wheelhouse deps
 88
+        check_call([pip, 'install', '-U', '--no-index', '-f', 'wheelhouse'] +
 89
+                   glob('wheelhouse/*'))
 90
+        if not cfg.get('use_venv'):
 91
+            # restore system pip to prevent `pip3 install -U pip`
 92
+            # from changing it
 93
+            if os.path.exists('/usr/bin/pip.save'):
 94
+                shutil.copy2('/usr/bin/pip.save', '/usr/bin/pip')
 95
+                os.remove('/usr/bin/pip.save')
 96
+        os.remove('/root/.pydistutils.cfg')
 97
+        # flag us as having already bootstrapped so we don't do it again
 98
+        open('wheelhouse/.bootstrapped', 'w').close()
 99
+        # Ensure that the newly bootstrapped libs are available.
100
+        # Note: this only seems to be an issue with namespace packages.
101
+        # Non-namespace-package libs (e.g., charmhelpers) are available
102
+        # without having to reload the interpreter. :/
103
+        reload_interpreter(vpy if cfg.get('use_venv') else sys.argv[0])
104
+
105
+
106
+def activate_venv():
107
+    """
108
+    Activate the venv if enabled in ``layer.yaml``.
109
+
110
+    This is handled automatically for normal hooks, but actions might
111
+    need to invoke this manually, using something like:
112
+
113
+        # Load modules from $CHARM_DIR/lib
114
+        import sys
115
+        sys.path.append('lib')
116
+
117
+        from charms.layer.basic import activate_venv
118
+        activate_venv()
119
+
120
+    This will ensure that modules installed in the charm's
121
+    virtual environment are available to the action.
122
+    """
123
+    venv = os.path.abspath('../.venv')
124
+    vbin = os.path.join(venv, 'bin')
125
+    vpy = os.path.join(vbin, 'python')
126
+    from charms import layer
127
+    cfg = layer.options('basic')
128
+    if cfg.get('use_venv') and '.venv' not in sys.executable:
129
+        # activate the venv
130
+        os.environ['PATH'] = ':'.join([vbin, os.environ['PATH']])
131
+        reload_interpreter(vpy)
132
+
133
+
134
+def reload_interpreter(python):
135
+    """
136
+    Reload the python interpreter to ensure that all deps are available.
137
+
138
+    Newly installed modules in namespace packages sometimes seem to
139
+    not be picked up by Python 3.
140
+    """
141
+    os.execle(python, python, sys.argv[0], os.environ)
142
+
143
+
144
+def apt_install(packages):
145
+    """
146
+    Install apt packages.
147
+
148
+    This ensures a consistent set of options that are often missed but
149
+    should really be set.
150
+    """
151
+    if isinstance(packages, (str, bytes)):
152
+        packages = [packages]
153
+
154
+    env = os.environ.copy()
155
+
156
+    if 'DEBIAN_FRONTEND' not in env:
157
+        env['DEBIAN_FRONTEND'] = 'noninteractive'
158
+
159
+    cmd = ['apt-get',
160
+           '--option=Dpkg::Options::=--force-confold',
161
+           '--assume-yes',
162
+           'install']
163
+    check_call(cmd + packages, env=env)
164
+
165
+
166
+def init_config_states():
167
+    import yaml
168
+    from charmhelpers.core import hookenv
169
+    from charms.reactive import set_state
170
+    from charms.reactive import toggle_state
171
+    config = hookenv.config()
172
+    config_defaults = {}
173
+    config_defs = {}
174
+    config_yaml = os.path.join(hookenv.charm_dir(), 'config.yaml')
175
+    if os.path.exists(config_yaml):
176
+        with open(config_yaml) as fp:
177
+            config_defs = yaml.safe_load(fp).get('options', {})
178
+            config_defaults = {key: value.get('default')
179
+                               for key, value in config_defs.items()}
180
+    for opt in config_defs.keys():
181
+        if config.changed(opt):
182
+            set_state('config.changed')
183
+            set_state('config.changed.{}'.format(opt))
184
+        toggle_state('config.set.{}'.format(opt), config.get(opt))
185
+        toggle_state('config.default.{}'.format(opt),
186
+                     config.get(opt) == config_defaults[opt])
187
+    hookenv.atexit(clear_config_states)
188
+
189
+
190
+def clear_config_states():
191
+    from charmhelpers.core import hookenv, unitdata
192
+    from charms.reactive import remove_state
193
+    config = hookenv.config()
194
+    remove_state('config.changed')
195
+    for opt in config.keys():
196
+        remove_state('config.changed.{}'.format(opt))
197
+        remove_state('config.set.{}'.format(opt))
198
+        remove_state('config.default.{}'.format(opt))
199
+    unitdata.kv().flush()
Back to file index
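
Note: init_config_states() above raises config.changed, config.changed.<option>, config.set.<option> and config.default.<option> states for every option in config.yaml. A minimal sketch of a reactive handler consuming those states, assuming the san-ip option from this charm's config:

    from charms.reactive import when
    from charmhelpers.core import hookenv

    @when('config.changed.san-ip')      # state set by init_config_states()
    def san_ip_changed():
        # runs only in hooks where the san-ip value actually changed
        hookenv.log('san-ip is now %s' % hookenv.config('san-ip'))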

lib/charms/layer/execd.py

  1
--- 
  2
+++ lib/charms/layer/execd.py
  3
@@ -0,0 +1,138 @@
  4
+# Copyright 2014-2016 Canonical Limited.
  5
+#
  6
+# This file is part of layer-basic, the reactive base layer for Juju.
  7
+#
  8
+# charm-helpers is free software: you can redistribute it and/or modify
  9
+# it under the terms of the GNU Lesser General Public License version 3 as
 10
+# published by the Free Software Foundation.
 11
+#
 12
+# charm-helpers is distributed in the hope that it will be useful,
 13
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
 14
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 15
+# GNU Lesser General Public License for more details.
 16
+#
 17
+# You should have received a copy of the GNU Lesser General Public License
 18
+# along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.
 19
+
 20
+# This module may only import from the Python standard library.
 21
+import os
 22
+import sys
 23
+import subprocess
 24
+import time
 25
+
 26
+'''
 27
+execd/preinstall
 28
+
 29
+It is often necessary to configure and reconfigure machines
 30
+after provisioning, but before attempting to run the charm.
 31
+Common examples are specialized network configuration, enabling
 32
+of custom hardware, non-standard disk partitioning and filesystems,
 33
+adding secrets and keys required for using a secured network.
 34
+
 35
+The reactive framework's base layer invokes this mechanism as
 36
+early as possible, before any network access is made or dependencies
 37
+unpacked or non-standard modules imported (including the charms.reactive
 38
+framework itself).
 39
+
 40
+Operators needing to use this functionality may branch a charm and
 41
+create an exec.d directory in it. The exec.d directory in turn contains
 42
+one or more subdirectories, each of which contains an executable called
 43
+charm-pre-install and any other required resources. The charm-pre-install
 44
+executables are run, and if successful, state saved so they will not be
 45
+run again.
 46
+
 47
+    $CHARM_DIR/exec.d/mynamespace/charm-pre-install
 48
+
 49
+An alternative to branching a charm is to compose a new charm that contains
 50
+the exec.d directory, using the original charm as a layer.
 51
+
 52
+A charm author could also abuse this mechanism to modify the charm
 53
+environment in unusual ways, but for most purposes it is saner to use
 54
+charmhelpers.core.hookenv.atstart().
 55
+'''
 56
+
 57
+
 58
+def default_execd_dir():
 59
+    return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
 60
+
 61
+
 62
+def execd_module_paths(execd_dir=None):
 63
+    """Generate a list of full paths to modules within execd_dir."""
 64
+    if not execd_dir:
 65
+        execd_dir = default_execd_dir()
 66
+
 67
+    if not os.path.exists(execd_dir):
 68
+        return
 69
+
 70
+    for subpath in os.listdir(execd_dir):
 71
+        module = os.path.join(execd_dir, subpath)
 72
+        if os.path.isdir(module):
 73
+            yield module
 74
+
 75
+
 76
+def execd_submodule_paths(command, execd_dir=None):
 77
+    """Generate a list of full paths to the specified command within exec_dir.
 78
+    """
 79
+    for module_path in execd_module_paths(execd_dir):
 80
+        path = os.path.join(module_path, command)
 81
+        if os.access(path, os.X_OK) and os.path.isfile(path):
 82
+            yield path
 83
+
 84
+
 85
+def execd_sentinel_path(submodule_path):
 86
+    module_path = os.path.dirname(submodule_path)
 87
+    execd_path = os.path.dirname(module_path)
 88
+    module_name = os.path.basename(module_path)
 89
+    submodule_name = os.path.basename(submodule_path)
 90
+    return os.path.join(execd_path,
 91
+                        '.{}_{}.done'.format(module_name, submodule_name))
 92
+
 93
+
 94
+def execd_run(command, execd_dir=None, stop_on_error=True, stderr=None):
 95
+    """Run command for each module within execd_dir which defines it."""
 96
+    if stderr is None:
 97
+        stderr = sys.stdout
 98
+    for submodule_path in execd_submodule_paths(command, execd_dir):
 99
+        # Only run each execd once. We cannot simply run them in the
100
+        # install hook, as potentially storage hooks are run before that.
101
+        # We cannot rely on them being idempotent.
102
+        sentinel = execd_sentinel_path(submodule_path)
103
+        if os.path.exists(sentinel):
104
+            continue
105
+
106
+        try:
107
+            subprocess.check_call([submodule_path], stderr=stderr,
108
+                                  universal_newlines=True)
109
+            with open(sentinel, 'w') as f:
110
+                f.write('{} ran successfully {}\n'.format(submodule_path,
111
+                                                          time.ctime()))
112
+                f.write('Removing this file will cause it to be run again\n')
113
+        except subprocess.CalledProcessError as e:
114
+            # Logs get the details. We can't use juju-log, as the
115
+            # output may be substantial and exceed command line
116
+            # length limits.
117
+            print("ERROR ({}) running {}".format(e.returncode, e.cmd),
118
+                  file=stderr)
119
+            print("STDOUT<<EOM", file=stderr)
120
+            print(e.output, file=stderr)
121
+            print("EOM", file=stderr)
122
+
123
+            # Unit workload status gets a shorter fail message.
124
+            short_path = os.path.relpath(submodule_path)
125
+            block_msg = "Error ({}) running {}".format(e.returncode,
126
+                                                       short_path)
127
+            try:
128
+                subprocess.check_call(['status-set', 'blocked', block_msg],
129
+                                      universal_newlines=True)
130
+                if stop_on_error:
131
+                    sys.exit(0)  # Leave unit in blocked state.
132
+            except Exception:
133
+                pass  # We care about the exec.d/* failure, not status-set.
134
+
135
+            if stop_on_error:
136
+                sys.exit(e.returncode or 1)  # Error state for pre-1.24 Juju
137
+
138
+
139
+def execd_preinstall(execd_dir=None):
140
+    """Run charm-pre-install for each module within execd_dir."""
141
+    execd_run('charm-pre-install', execd_dir=execd_dir)
Back to file index
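
Note: the exec.d mechanism described in the module docstring runs any executable named charm-pre-install found under $CHARM_DIR/exec.d/<subdir>/. A hypothetical sketch of such a script (the site-tweaks directory name and proxy URL are illustrative only):

    #!/usr/bin/env python3
    # hypothetical $CHARM_DIR/exec.d/site-tweaks/charm-pre-install
    # execd_preinstall() runs this once, then writes a
    # .site-tweaks_charm-pre-install.done sentinel so it is not re-run
    import subprocess

    # example bespoke setup: point apt at an internal proxy before bootstrap
    with open('/etc/apt/apt.conf.d/99-proxy', 'w') as f:
        f.write('Acquire::http::Proxy "http://proxy.example.com:3128";\n')
    subprocess.check_call(['apt-get', 'update'])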

metadata.yaml

 1
--- 
 2
+++ metadata.yaml
 3
@@ -0,0 +1,32 @@
 4
+"name": "ibm-cinder-ds8k"
 5
+"summary": "DS8K integration for OpenStack Block Storage"
 6
+"maintainer": "IBM Juju Support Team <jujusupp@us.ibm.com>"
 7
+"description": |
 8
+  Cinder is the block storage service for the OpenStack project.
 9
+  This charm provides a DS8K storage backend for Cinder.
10
+"tags":
11
+- "openstack"
12
+- "storage"
13
+- "file-servers"
14
+- "misc"
15
+- "ibm-z"
16
+- "ibm"
17
+"requires":
18
+  "juju-info":
19
+    "interface": "juju-info"
20
+    "scope": "container"
21
+"provides":
22
+  "storage-backend":
23
+    "interface": "cinder-backend"
24
+    "scope": "container"
25
+"resources":
26
+  "ibm_cinder_ds8k_installer":
27
+    "type": "file"
28
+    "filename": "IBM_Storage_Driver_for_OpenStack_1.7.0.1-b985.tar.gz"
29
+    "description": "ibm storage driver for openstack"
30
+"series":
31
+- "xenial"
32
+- "trusty"
33
+"subordinate": !!bool "true"
34
+"terms":
35
+- "ibmcharmers/ibm-cinder-ds8k/1"
Back to file index

reactive/cinder_ds8k_driver.py

  1
--- 
  2
+++ reactive/cinder_ds8k_driver.py
  3
@@ -0,0 +1,284 @@
  4
+#!/usr/bin/python
  5
+#
  6
+# Copyright 2016 Canonical Ltd
  7
+#
  8
+# Licensed under the Apache License, Version 2.0 (the "License");
  9
+# you may not use this file except in compliance with the License.
 10
+# You may obtain a copy of the License at
 11
+#
 12
+#  http://www.apache.org/licenses/LICENSE-2.0
 13
+#
 14
+# Unless required by applicable law or agreed to in writing, software
 15
+# distributed under the License is distributed on an "AS IS" BASIS,
 16
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 17
+# See the License for the specific language governing permissions and
 18
+# limitations under the License.
 19
+
 20
+import os
 21
+import sys
 22
+import json
 23
+import os.path
 24
+import subprocess
 25
+import glob
 26
+import shutil
 27
+
 28
+from charmhelpers.payload import (
 29
+    archive,
 30
+)
 31
+
 32
+from charmhelpers import fetch
 33
+
 34
+from charms.reactive import when, when_not, set_state, remove_state
 35
+from charmhelpers.core import hookenv
 36
+
 37
+from cinder_contexts import ds8kSubordinateContext
 38
+
 39
+from charmhelpers.core.hookenv import (
 40
+    Hooks,
 41
+    UnregisteredHookError,
 42
+    config,
 43
+    status_set,
 44
+    relation_set,
 45
+    service_name,
 46
+    relation_ids,
 47
+    log,
 48
+)
 49
+
 50
+from charmhelpers.core.host import (
 51
+    service_restart,
 52
+)
 53
+
 54
+hooks = Hooks()
 55
+
 56
+VERSION_PACKAGE = 'cinder-common'
 57
+
 58
+ARCHITECTURE = os.uname()
 59
+
 60
+CHARM_DIR = os.environ['CHARM_DIR']
 61
+
 62
+DRIVER_INSTALLER = '/IBM_Storage_Driver_for_OpenStack_1.7.0.1-b985/install.sh'
 63
+
 64
+
 65
+@when_not('ibm-cinder-ds8k.installed')
 66
+def install_ds8k():
 67
+        """
 68
+        Function to install the ds8k driver.
 69
+        Fetches the installer resource, extracts it and runs the installer.
 70
+        :returns: None
 71
+        """
 72
+
 73
+        if not (("s390x" in ARCHITECTURE)):
 74
+            hookenv.log("IBM ds8k: only supported on s390x")
 75
+            hookenv.status_set('blocked', 'IBM ds8k: unsupported architecture')
 76
+            return 1
 77
+
 78
+        if "s390x" in ARCHITECTURE:
 79
+            hookenv.log("architecture is s390x")
 80
+
 81
+        hookenv.log("IBM Cinder DS8K:"
 82
+                    "fetching the ibm_cinder_ds8k_installer resource")
 83
+        hookenv.status_set('active',
 84
+                           'fetching ibm_cinder_ds8k_installer resource')
 85
+        ibm_cinder_ds8k_installer_get = hookenv.resource_get(
 86
+                                         'ibm_cinder_ds8k_installer')
 87
+
 88
+        # If we don't have a package, report blocked status; we can't proceed.
 89
+        if ibm_cinder_ds8k_installer_get is False:
 90
+            hookenv.log("IBM Cinder DS8K:"
 91
+                        "missing required ibm_cinder_ds8k resources")
 92
+            hookenv.status_set("blocked", "Required packages are missing")
 93
+            return 0
 94
+
 95
+        command1 = ["file", ibm_cinder_ds8k_installer_get]
 96
+        p1 = subprocess.Popen(command1, stdout=subprocess.PIPE,
 97
+                              stderr=subprocess.PIPE, shell=False)
 98
+        output1, err = p1.communicate()
 99
+        ibm_cinder_ds8k_installer_get_msg = str(output1)
100
+
101
+        if (("empty" in ibm_cinder_ds8k_installer_get_msg)):
102
+            hookenv.log("IBM ds8k: missing required ibm_ds8k"
103
+                        "resources,empty packages are found")
104
+            hookenv.status_set("blocked", "Required packages are empty")
105
+            return 0
106
+        else:
107
+            ibm_cinder_ds8k_downpath = os.path.dirname(
108
+                                       ibm_cinder_ds8k_installer_get)
109
+            hookenv.log("ibm_cinder_ds8k_downpath")
110
+            CHARM_DIR = os.getcwd()
111
+            charmpath = CHARM_DIR+"/../resources"
112
+            DS8K_install_path = CHARM_DIR+"/../resources/Cinder"
113
+            if os.path.exists(CHARM_DIR+"/../resources/Cinder"):
114
+                hookenv.log("IBM ds8k: dir exist already.")
115
+            else:
116
+                os.makedirs(CHARM_DIR+"/../resources/Cinder")
117
+            if glob.glob(charmpath + "/"
118
+                                "Cinder/IBM_Storage_Driver_for_OpenStack_*"))):
119
+                hookenv.log("IBM ds8k: Packages extracted already.")
120
+            else:
121
+                hookenv.log("IBM ds8k: Extracting package contents.")
122
+
123
+            # check whether the tar.gz archive extracts cleanly
124
+            try:
125
+                os.chdir(ibm_cinder_ds8k_downpath)
126
+                archivelist = glob.glob("*.tar.gz")
127
+                if archivelist:
128
+                        archive.extract(str(archivelist[0]), DS8K_install_path)
129
+                        hookenv.log("IBM Cinder DS8K : Extraction of IBM"
130
+                                    " Cinder DS8K packages is successfull")
131
+            except subprocess.CalledProcessError as e:
132
+                    hookenv.log(e.output)
133
+                    hookenv.log("IBM ds8k: Unable to extract packages")
134
+                    hookenv.status_set("blocked", "Package is corrupt")
135
+                    shutil.rmtree(charmpath+"/Cinder")
136
+                    sys.exit(0)
137
+
138
+            # Installation
139
+            hookenv.status_set('active', "IBM ds8k: Installing")
140
+            try:
141
+                subprocess.check_call([DS8K_install_path + DRIVER_INSTALLER,
142
+                                       '-s', '--bypass-unsupported-os'])
143
+                log("IBM Ciner DS8K: IBM DS8K"
144
+                    " Installed successfully")
145
+                set_state('ibm-cinder-ds8k.installed')
146
+                status_set('active', 'IBM Cinder DS8K is '
147
+                           'installed')
148
+            except subprocess.CalledProcessError as e:
149
+                hookenv.log(e.output)
150
+                status_set("maintenance", "IBM DS8K:"
151
+                           "Error installing DS8K")
152
+                return 1
153
+
154
+
155
+@when('ibm-cinder-ds8k.installed')
156
+def get_ds8k_request():
157
+    """
158
+    Function to get the driver's config values from config.yaml.
159
+    :returns: dictionary of all config values
160
+    """
161
+
162
+    volumedriver = config('volume-driver')
163
+    volume_backend_name = config('volume-backend-name')
164
+    san_ip = config('san-ip')
165
+    san_login = config('san-login')
166
+    san_password = config('san-password')
167
+    ds8k_storage_unit = config('ds8k_storage_unit')
168
+    san_clustername = config('san_clustername')
169
+    xiv_chap = config('xiv_chap')
170
+    xiv_ds8k_connection_type = config('xiv_ds8k_connection_type')
171
+    management_ips = config('management_ips')
172
+    ds8k_java_path = config('ds8k_java_path')
173
+    host_profile = config('host_profile')
174
+    ds8k_jar_lib_path = config('ds8k_jar_lib_path')
175
+    hookenv.log("Getting config val %s" % xiv_ds8k_connection_type)
176
+    if xiv_ds8k_connection_type == "fibre_channel":
177
+        # Install package
178
+        hookenv.log("sysyfsutils installing")
179
+        fetch.apt_install('sysfsutils')
180
+    elif xiv_ds8k_connection_type == "iscsi":
181
+        # Install package
182
+        hookenv.log("isci package")
183
+        fetch.apt_install('open-iscsi')
184
+
185
+    data = {"volume-driver": volumedriver, "volume-backend-name":
186
+            volume_backend_name, "san-ip": san_ip, "san-login":
187
+            san_login, "san-password":
188
+            san_password, "ds8k_storage_unit":
189
+            ds8k_storage_unit, "san_clustername": san_clustername,
190
+            "xiv_chap": xiv_chap, "xiv_ds8k_connection_type":
191
+            xiv_ds8k_connection_type, "management_ips":
192
+            management_ips, "ds8k_java_path": ds8k_java_path,
193
+            "host_profile": host_profile, "ds8k_jar_lib_path":
194
+            ds8k_jar_lib_path,
195
+            }
196
+    if not san_ip or not san_login or \
197
+       not san_password:
198
+        log("san-ip, san-login, san-password config"
199
+            "parameter values are required, \
200
+            "please provide correct values")
201
+        return ''
202
+    else:
203
+        return data
204
+
205
+
206
+@when_not('ibm-cinder-ds8k.configured')
207
+@when('storage-backend.available')
208
+def configure_ds8k(self):
209
+    """
210
+    Configure the ds8k driver.
211
+    Gets the values from the user and adds them to cinder.conf.
212
+    :returns: None
213
+    """
214
+    hookenv.log("In configure_ds8k fn")
215
+    data = get_ds8k_request()
216
+    if not data:
217
+        status_set('blocked', 'Some of the config parameter values are empty, \
218
+please provide correct values.')
219
+        return 1
220
+    hookenv.log("data is %s" % data)
221
+    for rid in relation_ids('storage-backend'):
222
+        storage_backend_joined(rid, data)
223
+    set_state('ibm-cinder-ds8k.configured')
224
+
225
+
226
+@when('config.changed', 'ibm-cinder-ds8k.configured')
227
+def config_changed():
228
+    """
229
+    Config-changed
230
+    Clear the configured and started states so the driver is
231
+    reconfigured and the cinder-volume service is restarted.
232
+    :returns: None
233
+    """
234
+
235
+    remove_state('ibm-cinder-ds8k.configured')
236
+    remove_state('ibm-cinder-ds8k.started')
237
+
238
+
239
+def storage_backend_joined(rel_id=None, data=None):
240
+    """
241
+    Relation-joined with Cinder.
242
+    Calls the ds8kSubordinateContext constructor, which writes the
243
+    cinder config values to the cinder.conf file.
244
+    :returns: None
245
+    """
246
+
247
+    log('ds8k Volume Driver providing information to cinder charm.')
248
+    relation_set(
249
+        relation_id=rel_id,
250
+        backend_name=service_name(),
251
+        subordinate_configuration=json.dumps(ds8kSubordinateContext()(data)),
252
+        stateless=True,
253
+    )
254
+
255
+
256
+@when('ibm-cinder-ds8k.installed')
257
+@when('ibm-cinder-ds8k.configured')
258
+@when_not('ibm-cinder-ds8k.started')
259
+def restart_cinder_services():
260
+    """
261
+    Restart the cinder-volume service once the driver is
262
+    installed and configured.
263
+    :returns: None
264
+    """
265
+
266
+    hookenv.log("Restarting cinder")
267
+    service_restart('cinder-volume')
268
+    set_state('ibm-cinder-ds8k.started')
269
+    hookenv.status_set('active', 'IBM Cinder DS8K: Ready')
270
+
271
+
272
+@when('storage-backend.changed')
273
+def storage_backend_changed():
274
+    """
275
+    Clears the configured and started states so the backend is reconfigured.
276
+    :returns: None
277
+    """
278
+
279
+    remove_state('ibm-cinder-ds8k.configured')
280
+    remove_state('ibm-cinder-ds8k.started')
281
+
282
+
283
+if __name__ == '__main__':
284
+    try:
285
+        hooks.execute(sys.argv)
286
+    except UnregisteredHookError as e:
287
+        log('Unknown hook {} - skipping.'.format(e))
Back to file index
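
Note: storage_backend_joined() above sends the output of ds8kSubordinateContext()(data) to the cinder principal as the subordinate_configuration relation setting. cinder_contexts.py is not part of this hunk, so the exact structure is not shown here; the sketch below only illustrates the general shape such a payload might take (section name and values are placeholders):

    import json

    # illustrative only; the real dict is built by ds8kSubordinateContext
    subordinate_configuration = {
        "cinder": {
            "/etc/cinder/cinder.conf": {
                "sections": {
                    "ibm-cinder-ds8k": [
                        ("volume_driver",
                         "cinder.volume.drivers.ibm.xiv_ds8k.XIVDS8KDriver"),
                        ("san_ip", "192.0.2.10"),   # placeholder value
                        ("volume_backend_name", "ds8k"),
                    ],
                }
            }
        }
    }
    print(json.dumps(subordinate_configuration, indent=2))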

requirements.txt

1
--- 
2
+++ requirements.txt
3
@@ -0,0 +1,2 @@
4
+flake8
5
+pytest
Back to file index

revision

1
--- 
2
+++ revision
3
@@ -0,0 +1 @@
4
+0
Back to file index

tests/00-setup

 1
--- 
 2
+++ tests/00-setup
 3
@@ -0,0 +1,23 @@
 4
+#!/bin/bash
 5
+
 6
+
 7
+DS8K_CONFIG_SANIP=${DS8K_CONFIG_SANIP?Error: IBM DS8K config_sanip must be defined for tests/00-setup}
 8
+DS8K_CONFIG_SANLOGIN=${DS8K_CONFIG_SANLOGIN?Error: IBM DS8K config_sanlogin must be defined for tests/00-setup}
 9
+DS8K_CONFIG_SANPASSWORD=${DS8K_CONFIG_SANPASSWORD?Error: IBM DS8K config_sanpassword must be defined for tests/00-setup}
10
+VOLUME_BACKEND_NAME=${VOLUME_BACKEND_NAME?Error: IBM DS8K volume_backend_name must be defined for tests/00-setup}
11
+
12
+
13
+# Add a local configuration file
14
+cat << EOF > local.yaml
15
+ibm-repo:
16
+    ds8k_config_sanip: "$DS8K_CONFIG_SANIP"
17
+    ds8k_config_sanlogin: "$DS8K_CONFIG_SANLOGIN"
18
+    ds8k_config_sanpassword: "$DS8K_CONFIG_SANPASSWORD"    
19
+    volume_backend_name: "$VOLUME_BACKEND_NAME"
20
+    volume-driver: "cinder.volume.drivers.ibm.xiv_ds8k.XIVDS8KDriver"
21
+    san_clustername: "P5"
22
+EOF
23
+
24
+sudo add-apt-repository ppa:juju/stable -y
25
+sudo apt-get update
26
+sudo apt-get install amulet python3 -y
Back to file index
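
Note: 00-setup writes local.yaml from environment variables so that credentials never live in the charm tree. A small sketch of how the test below reads it back (the same keys basic_deployment.py checks before deploying):

    import yaml

    # local.yaml is produced by tests/00-setup in the tests directory
    with open('local.yaml') as fd:
        ibm_repo = yaml.safe_load(fd)['ibm-repo']

    for key in ('ds8k_config_sanip', 'ds8k_config_sanlogin',
                'ds8k_config_sanpassword', 'volume_backend_name'):
        assert ibm_repo.get(key), '%s missing from local.yaml' % key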

tests/basic_deployment.py

  1
--- 
  2
+++ tests/basic_deployment.py
  3
@@ -0,0 +1,180 @@
  4
+#!/usr/bin/env python
  5
+#
  6
+# Copyright 2016 Canonical Ltd
  7
+#
  8
+# Licensed under the Apache License, Version 2.0 (the "License");
  9
+# you may not use this file except in compliance with the License.
 10
+# You may obtain a copy of the License at
 11
+#
 12
+#  http://www.apache.org/licenses/LICENSE-2.0
 13
+#
 14
+# Unless required by applicable law or agreed to in writing, software
 15
+# distributed under the License is distributed on an "AS IS" BASIS,
 16
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 17
+# See the License for the specific language governing permissions and
 18
+# limitations under the License.
 19
+
 20
+"""
 21
+Basic cinder-ds8k functional test.
 22
+"""
 23
+import amulet
 24
+import os
 25
+import sys
 26
+import yaml
 27
+
 28
+from charmhelpers.contrib.openstack.amulet.deployment import (
 29
+    OpenStackAmuletDeployment
 30
+)
 31
+
 32
+
 33
+seconds_to_wait = 100000
 34
+
 35
+
 36
+class Cinderds8kBasicDeployment(OpenStackAmuletDeployment):
 37
+    """Amulet tests on a basic heat deployment."""
 38
+
 39
+    def __init__(self, series=None, openstack=None, source=None, git=False,
 40
+                 stable=True):
 41
+        """
 42
+        Function to deploy the entire test environment.
 43
+        :param: series, openstack, source, git
 44
+        :returns: None
 45
+        """
 46
+
 47
+        super(Cinderds8kBasicDeployment,
 48
+              self).__init__(series, openstack,
 49
+                             source, stable)
 50
+        self._add_services()
 51
+        self._add_relations()
 52
+        self._configure_services()
 53
+        self._deploy()
 54
+
 55
+        exclude_services = []
 56
+
 57
+        self._auto_wait_for_status(exclude_services=exclude_services)
 58
+
 59
+        self._check_cinderconffile()
 60
+        self.d.sentry.wait(seconds_to_wait)
 61
+
 62
+    def _add_services(self):
 63
+
 64
+        """
 65
+        Function: Add the services that we're testing,
 66
+                  where ibm-cinder-ds8k is
 67
+                  local, and the rest of the services
 68
+                  are from lp branches that are compatible with
 69
+                  the local charm (e.g. stable or next).
 70
+        :param: None
 71
+        :returns: None
 72
+        """
 73
+        # Note: ibm-cinder-ds8k becomes a cinder subordinate unit.
 74
+        this_service = {'name': 'ibm-cinder-ds8k'}
 75
+        other_services = [
 76
+            {'name': 'percona-cluster', 'constraints': {'mem': '3072M'}},
 77
+            {'name': 'keystone'},
 78
+            {'name': 'rabbitmq-server'},
 79
+            {'name': 'cinder'},
 80
+        ]
 81
+        super(Cinderds8kBasicDeployment,
 82
+              self)._add_services(this_service,
 83
+                                  other_services)
 84
+
 85
+    def _add_relations(self):
 86
+
 87
+        """
 88
+        Function: Add all of the relations for the services.
 89
+        :param: None
 90
+        :returns: None
 91
+        """
 92
+
 93
+        relations = {
 94
+            'cinder:storage-backend':
 95
+                'ibm-cinder-ds8k:storage-backend',
 96
+            'keystone:shared-db': 'percona-cluster:shared-db',
 97
+            'cinder:shared-db': 'percona-cluster:shared-db',
 98
+            'cinder:identity-service': 'keystone:identity-service',
 99
+            'cinder:amqp': 'rabbitmq-server:amqp',
100
+        }
101
+        super(Cinderds8kBasicDeployment, self)._add_relations(relations)
102
+
103
+    def _configure_services(self):
104
+
105
+        """
106
+        Function to Configure all of the services.
107
+        :param: None
108
+        :return: None
109
+        """
110
+        keystone_config = {
111
+            'admin-password': 'openstack',
112
+            'admin-token': 'ubuntutesting'
113
+        }
114
+        pxc_config = {
115
+            'dataset-size': '25%',
116
+            'max-connections': 1000,
117
+            'root-password': 'ChangeMe123',
118
+            'sst-password': 'ChangeMe123',
119
+        }
120
+        cinder_config = {
121
+            'block-device': 'None',
122
+            'glance-api-version': '2'
123
+        }
124
+        local_path = os.path.join(os.path.dirname(__file__), 'local.yaml')
125
+        with open(local_path, "r") as fd:
126
+            config = yaml.safe_load(fd)
127
+
128
+        san_ip = config.get('ibm-repo').get('ds8k_config_sanip')
129
+        if not san_ip:
130
+            message = 'Please provide the san-ip of the ds8k driver'
131
+            amulet.raise_status(amulet.FAIL, msg=message)
132
+            sys.exit(1)
133
+        san_login = config.get('ibm-repo').get('ds8k_config_sanlogin')
134
+        if not san_login:
135
+            message = 'Please provide the san-login of the ds8k driver'
136
+            amulet.raise_status(amulet.FAIL, msg=message)
137
+            sys.exit(1)
138
+        san_password = config.get('ibm-repo').get('ds8k_'
139
+                                                  'config_sanpassword')
140
+        if not san_password:
141
+            message = 'Please provide the san-password for ds8k driver'
142
+            amulet.raise_status(amulet.FAIL, msg=message)
143
+            sys.exit(1)
144
+        volume_backend_name = config.get('ibm-repo').get('volume_backend_name')
145
+        if not volume_backend_name:
146
+            message = 'Please provide the volume-backend-name \
147
+for ds8k driver'
148
+            amulet.raise_status(amulet.FAIL, msg=message)
149
+            sys.exit(1)
150
+        ds8k_config = {
151
+           'volume-driver': 'cinder.volume.drivers.'
152
+                            'ibm.xiv_ds8k.XIVDS8KDriver',
153
+           'volume-backend-name': volume_backend_name,
154
+           'san-ip': san_ip,
155
+           'san-login': san_login,
156
+           'san-password': san_password,
157
+           'san_clustername': 'P5',
158
+        }
159
+        configs = {
160
+            'keystone': keystone_config,
161
+            'percona-cluster': pxc_config,
162
+            'cinder': cinder_config,
163
+            'ibm-cinder-ds8k': ds8k_config,
164
+        }
165
+        super(Cinderds8kBasicDeployment,
166
+              self)._configure_services(configs)
167
+
168
+    def _check_cinderconffile(self):
169
+
170
+        """
171
+        Function to check cinder.conf file to confirm
172
+               if ds8k driver is configured.
173
+        :param: None
174
+        :return: None
175
+        """
176
+        unit = self.d.sentry['ibm-cinder-ds8k'][0]
177
+        output, result_code = unit.run("grep '\[ibm-cinder-ds8k\'"
178
+                                       " /etc/cinder/cinder.conf")
179
+        print('Output of grep is %s' % output)
180
+
181
+        if result_code != 0:
182
+            message = ('ibm-cinder-ds8k is not found in cinder.conf')
183
+            amulet.raise_status(amulet.FAIL, msg=message)
Back to file index
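
Note: Cinderds8kBasicDeployment does all of its work in __init__ (deploy, wait for status, check cinder.conf). A minimal sketch of a test entry point that drives it; the 10-deploy-test file name is hypothetical and not part of this charm:

    #!/usr/bin/env python
    # hypothetical tests/10-deploy-test
    from basic_deployment import Cinderds8kBasicDeployment

    if __name__ == '__main__':
        # __init__ deploys the bundle and runs the cinder.conf check
        deployment = Cinderds8kBasicDeployment(series='xenial')
        deployment.run_tests()  # inherited from AmuletDeployment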

tests/charmhelpers/__init__.py

 1
--- 
 2
+++ tests/charmhelpers/__init__.py
 3
@@ -0,0 +1,36 @@
 4
+# Copyright 2014-2015 Canonical Limited.
 5
+#
 6
+# Licensed under the Apache License, Version 2.0 (the "License");
 7
+# you may not use this file except in compliance with the License.
 8
+# You may obtain a copy of the License at
 9
+#
10
+#  http://www.apache.org/licenses/LICENSE-2.0
11
+#
12
+# Unless required by applicable law or agreed to in writing, software
13
+# distributed under the License is distributed on an "AS IS" BASIS,
14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+# See the License for the specific language governing permissions and
16
+# limitations under the License.
17
+
18
+# Bootstrap charm-helpers, installing its dependencies if necessary using
19
+# only standard libraries.
20
+import subprocess
21
+import sys
22
+
23
+try:
24
+    import six  # flake8: noqa
25
+except ImportError:
26
+    if sys.version_info.major == 2:
27
+        subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
28
+    else:
29
+        subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
30
+    import six  # flake8: noqa
31
+
32
+try:
33
+    import yaml  # flake8: noqa
34
+except ImportError:
35
+    if sys.version_info.major == 2:
36
+        subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
37
+    else:
38
+        subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
39
+    import yaml  # flake8: noqa
Back to file index

tests/charmhelpers/contrib/__init__.py

 1
--- 
 2
+++ tests/charmhelpers/contrib/__init__.py
 3
@@ -0,0 +1,13 @@
 4
+# Copyright 2014-2015 Canonical Limited.
 5
+#
 6
+# Licensed under the Apache License, Version 2.0 (the "License");
 7
+# you may not use this file except in compliance with the License.
 8
+# You may obtain a copy of the License at
 9
+#
10
+#  http://www.apache.org/licenses/LICENSE-2.0
11
+#
12
+# Unless required by applicable law or agreed to in writing, software
13
+# distributed under the License is distributed on an "AS IS" BASIS,
14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+# See the License for the specific language governing permissions and
16
+# limitations under the License.
Back to file index

tests/charmhelpers/contrib/amulet/__init__.py

 1
--- 
 2
+++ tests/charmhelpers/contrib/amulet/__init__.py
 3
@@ -0,0 +1,13 @@
 4
+# Copyright 2014-2015 Canonical Limited.
 5
+#
 6
+# Licensed under the Apache License, Version 2.0 (the "License");
 7
+# you may not use this file except in compliance with the License.
 8
+# You may obtain a copy of the License at
 9
+#
10
+#  http://www.apache.org/licenses/LICENSE-2.0
11
+#
12
+# Unless required by applicable law or agreed to in writing, software
13
+# distributed under the License is distributed on an "AS IS" BASIS,
14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+# See the License for the specific language governing permissions and
16
+# limitations under the License.
Back to file index

tests/charmhelpers/contrib/amulet/deployment.py

  1
--- 
  2
+++ tests/charmhelpers/contrib/amulet/deployment.py
  3
@@ -0,0 +1,97 @@
  4
+# Copyright 2014-2015 Canonical Limited.
  5
+#
  6
+# Licensed under the Apache License, Version 2.0 (the "License");
  7
+# you may not use this file except in compliance with the License.
  8
+# You may obtain a copy of the License at
  9
+#
 10
+#  http://www.apache.org/licenses/LICENSE-2.0
 11
+#
 12
+# Unless required by applicable law or agreed to in writing, software
 13
+# distributed under the License is distributed on an "AS IS" BASIS,
 14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 15
+# See the License for the specific language governing permissions and
 16
+# limitations under the License.
 17
+
 18
+import amulet
 19
+import os
 20
+import six
 21
+
 22
+
 23
+class AmuletDeployment(object):
 24
+    """Amulet deployment.
 25
+
 26
+       This class provides generic Amulet deployment and test runner
 27
+       methods.
 28
+       """
 29
+
 30
+    def __init__(self, series=None):
 31
+        """Initialize the deployment environment."""
 32
+        self.series = None
 33
+
 34
+        if series:
 35
+            self.series = series
 36
+            self.d = amulet.Deployment(series=self.series)
 37
+        else:
 38
+            self.d = amulet.Deployment()
 39
+
 40
+    def _add_services(self, this_service, other_services):
 41
+        """Add services.
 42
+
 43
+           Add services to the deployment where this_service is the local charm
 44
+           that we're testing and other_services are the other services that
 45
+           are being used in the local amulet tests.
 46
+           """
 47
+        if this_service['name'] != os.path.basename(os.getcwd()):
 48
+            s = this_service['name']
 49
+            msg = "The charm's root directory name needs to be {}".format(s)
 50
+            amulet.raise_status(amulet.FAIL, msg=msg)
 51
+
 52
+        if 'units' not in this_service:
 53
+            this_service['units'] = 1
 54
+
 55
+        self.d.add(this_service['name'], units=this_service['units'],
 56
+                   constraints=this_service.get('constraints'))
 57
+
 58
+        for svc in other_services:
 59
+            if 'location' in svc:
 60
+                branch_location = svc['location']
 61
+            elif self.series:
 62
+                branch_location = 'cs:{}/{}'.format(self.series, svc['name'])
 63
+            else:
 64
+                branch_location = None
 65
+
 66
+            if 'units' not in svc:
 67
+                svc['units'] = 1
 68
+
 69
+            self.d.add(svc['name'], charm=branch_location, units=svc['units'],
 70
+                       constraints=svc.get('constraints'))
 71
+
 72
+    def _add_relations(self, relations):
 73
+        """Add all of the relations for the services."""
 74
+        for k, v in six.iteritems(relations):
 75
+            self.d.relate(k, v)
 76
+
 77
+    def _configure_services(self, configs):
 78
+        """Configure all of the services."""
 79
+        for service, config in six.iteritems(configs):
 80
+            self.d.configure(service, config)
 81
+
 82
+    def _deploy(self):
 83
+        """Deploy environment and wait for all hooks to finish executing."""
 84
+        timeout = int(os.environ.get('AMULET_SETUP_TIMEOUT', 900))
 85
+        try:
 86
+            self.d.setup(timeout=timeout)
 87
+            self.d.sentry.wait(timeout=timeout)
 88
+        except amulet.helpers.TimeoutError:
 89
+            amulet.raise_status(
 90
+                amulet.FAIL,
 91
+                msg="Deployment timed out ({}s)".format(timeout)
 92
+            )
 93
+        except Exception:
 94
+            raise
 95
+
 96
+    def run_tests(self):
 97
+        """Run all of the methods that are prefixed with 'test_'."""
 98
+        for test in dir(self):
 99
+            if test.startswith('test_'):
100
+                getattr(self, test)()
Back to file index

tests/charmhelpers/contrib/amulet/utils.py

  1
--- 
  2
+++ tests/charmhelpers/contrib/amulet/utils.py
  3
@@ -0,0 +1,827 @@
  4
+# Copyright 2014-2015 Canonical Limited.
  5
+#
  6
+# Licensed under the Apache License, Version 2.0 (the "License");
  7
+# you may not use this file except in compliance with the License.
  8
+# You may obtain a copy of the License at
  9
+#
 10
+#  http://www.apache.org/licenses/LICENSE-2.0
 11
+#
 12
+# Unless required by applicable law or agreed to in writing, software
 13
+# distributed under the License is distributed on an "AS IS" BASIS,
 14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 15
+# See the License for the specific language governing permissions and
 16
+# limitations under the License.
 17
+
 18
+import io
 19
+import json
 20
+import logging
 21
+import os
 22
+import re
 23
+import socket
 24
+import subprocess
 25
+import sys
 26
+import time
 27
+import uuid
 28
+
 29
+import amulet
 30
+import distro_info
 31
+import six
 32
+from six.moves import configparser
 33
+if six.PY3:
 34
+    from urllib import parse as urlparse
 35
+else:
 36
+    import urlparse
 37
+
 38
+
 39
+class AmuletUtils(object):
 40
+    """Amulet utilities.
 41
+
 42
+       This class provides common utility functions that are used by Amulet
 43
+       tests.
 44
+       """
 45
+
 46
+    def __init__(self, log_level=logging.ERROR):
 47
+        self.log = self.get_logger(level=log_level)
 48
+        self.ubuntu_releases = self.get_ubuntu_releases()
 49
+
 50
+    def get_logger(self, name="amulet-logger", level=logging.DEBUG):
 51
+        """Get a logger object that will log to stdout."""
 52
+        log = logging
 53
+        logger = log.getLogger(name)
 54
+        fmt = log.Formatter("%(asctime)s %(funcName)s "
 55
+                            "%(levelname)s: %(message)s")
 56
+
 57
+        handler = log.StreamHandler(stream=sys.stdout)
 58
+        handler.setLevel(level)
 59
+        handler.setFormatter(fmt)
 60
+
 61
+        logger.addHandler(handler)
 62
+        logger.setLevel(level)
 63
+
 64
+        return logger
 65
+
 66
+    def valid_ip(self, ip):
 67
+        if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip):
 68
+            return True
 69
+        else:
 70
+            return False
 71
+
 72
+    def valid_url(self, url):
 73
+        p = re.compile(
 74
+            r'^(?:http|ftp)s?://'
 75
+            r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|'  # noqa
 76
+            r'localhost|'
 77
+            r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
 78
+            r'(?::\d+)?'
 79
+            r'(?:/?|[/?]\S+)$',
 80
+            re.IGNORECASE)
 81
+        if p.match(url):
 82
+            return True
 83
+        else:
 84
+            return False
 85
+
 86
+    def get_ubuntu_release_from_sentry(self, sentry_unit):
 87
+        """Get Ubuntu release codename from sentry unit.
 88
+
 89
+        :param sentry_unit: amulet sentry/service unit pointer
 90
+        :returns: list of strings - release codename, failure message
 91
+        """
 92
+        msg = None
 93
+        cmd = 'lsb_release -cs'
 94
+        release, code = sentry_unit.run(cmd)
 95
+        if code == 0:
 96
+            self.log.debug('{} lsb_release: {}'.format(
 97
+                sentry_unit.info['unit_name'], release))
 98
+        else:
 99
+            msg = ('{} `{}` returned {} '
100
+                   '{}'.format(sentry_unit.info['unit_name'],
101
+                               cmd, release, code))
102
+        if release not in self.ubuntu_releases:
103
+            msg = ("Release ({}) not found in Ubuntu releases "
104
+                   "({})".format(release, self.ubuntu_releases))
105
+        return release, msg
106
+
107
+    def validate_services(self, commands):
108
+        """Validate that lists of commands succeed on service units.  Can be
109
+           used to verify system services are running on the corresponding
110
+           service units.
111
+
112
+        :param commands: dict with sentry keys and arbitrary command list vals
113
+        :returns: None if successful, Failure string message otherwise
114
+        """
115
+        self.log.debug('Checking status of system services...')
116
+
117
+        # /!\ DEPRECATION WARNING (beisner):
118
+        # New and existing tests should be rewritten to use
119
+        # validate_services_by_name() as it is aware of init systems.
120
+        self.log.warn('DEPRECATION WARNING:  use '
121
+                      'validate_services_by_name instead of validate_services '
122
+                      'due to init system differences.')
123
+
124
+        for k, v in six.iteritems(commands):
125
+            for cmd in v:
126
+                output, code = k.run(cmd)
127
+                self.log.debug('{} `{}` returned '
128
+                               '{}'.format(k.info['unit_name'],
129
+                                           cmd, code))
130
+                if code != 0:
131
+                    return "command `{}` returned {}".format(cmd, str(code))
132
+        return None
133
+
134
+    def validate_services_by_name(self, sentry_services):
135
+        """Validate system service status by service name, automatically
136
+           detecting init system based on Ubuntu release codename.
137
+
138
+        :param sentry_services: dict with sentry keys and svc list values
139
+        :returns: None if successful, Failure string message otherwise
140
+        """
141
+        self.log.debug('Checking status of system services...')
142
+
143
+        # Point at which systemd became a thing
144
+        systemd_switch = self.ubuntu_releases.index('vivid')
145
+
146
+        for sentry_unit, services_list in six.iteritems(sentry_services):
147
+            # Get lsb_release codename from unit
148
+            release, ret = self.get_ubuntu_release_from_sentry(sentry_unit)
149
+            if ret:
150
+                return ret
151
+
152
+            for service_name in services_list:
153
+                if (self.ubuntu_releases.index(release) >= systemd_switch or
154
+                        service_name in ['rabbitmq-server', 'apache2']):
155
+                    # init is systemd (or regular sysv)
156
+                    cmd = 'sudo service {} status'.format(service_name)
157
+                    output, code = sentry_unit.run(cmd)
158
+                    service_running = code == 0
159
+                elif self.ubuntu_releases.index(release) < systemd_switch:
160
+                    # init is upstart
161
+                    cmd = 'sudo status {}'.format(service_name)
162
+                    output, code = sentry_unit.run(cmd)
163
+                    service_running = code == 0 and "start/running" in output
164
+
165
+                self.log.debug('{} `{}` returned '
166
+                               '{}'.format(sentry_unit.info['unit_name'],
167
+                                           cmd, code))
168
+                if not service_running:
169
+                    return u"command `{}` returned {} {}".format(
170
+                        cmd, output, str(code))
171
+        return None
172
+
173
+    def _get_config(self, unit, filename):
174
+        """Get a ConfigParser object for parsing a unit's config file."""
175
+        file_contents = unit.file_contents(filename)
176
+
177
+        # NOTE(beisner):  by default, ConfigParser does not handle options
178
+        # with no value, such as the flags used in the mysql my.cnf file.
179
+        # https://bugs.python.org/issue7005
180
+        config = configparser.ConfigParser(allow_no_value=True)
181
+        config.readfp(io.StringIO(file_contents))
182
+        return config
183
+
184
+    def validate_config_data(self, sentry_unit, config_file, section,
185
+                             expected):
186
+        """Validate config file data.
187
+
188
+           Verify that the specified section of the config file contains
189
+           the expected option key:value pairs.
190
+
191
+           Compare expected dictionary data vs actual dictionary data.
192
+           The values in the 'expected' dictionary can be strings, bools, ints,
193
+           longs, or can be a function that evaluates a variable and returns a
194
+           bool.
195
+           """
196
+        self.log.debug('Validating config file data ({} in {} on {})'
197
+                       '...'.format(section, config_file,
198
+                                    sentry_unit.info['unit_name']))
199
+        config = self._get_config(sentry_unit, config_file)
200
+
201
+        if section != 'DEFAULT' and not config.has_section(section):
202
+            return "section [{}] does not exist".format(section)
203
+
204
+        for k in expected.keys():
205
+            if not config.has_option(section, k):
206
+                return "section [{}] is missing option {}".format(section, k)
207
+
208
+            actual = config.get(section, k)
209
+            v = expected[k]
210
+            if (isinstance(v, six.string_types) or
211
+                    isinstance(v, bool) or
212
+                    isinstance(v, six.integer_types)):
213
+                # handle explicit values
214
+                if actual != v:
215
+                    return "section [{}] {}:{} != expected {}:{}".format(
216
+                           section, k, actual, k, expected[k])
217
+            # handle function pointers, such as not_null or valid_ip
218
+            elif not v(actual):
219
+                return "section [{}] {}:{} != expected {}:{}".format(
220
+                       section, k, actual, k, expected[k])
221
+        return None
222
+
223
+    def _validate_dict_data(self, expected, actual):
224
+        """Validate dictionary data.
225
+
226
+           Compare expected dictionary data vs actual dictionary data.
227
+           The values in the 'expected' dictionary can be strings, bools, ints,
228
+           longs, or can be a function that evaluates a variable and returns a
229
+           bool.
230
+           """
231
+        self.log.debug('actual: {}'.format(repr(actual)))
232
+        self.log.debug('expected: {}'.format(repr(expected)))
233
+
234
+        for k, v in six.iteritems(expected):
235
+            if k in actual:
236
+                if (isinstance(v, six.string_types) or
237
+                        isinstance(v, bool) or
238
+                        isinstance(v, six.integer_types)):
239
+                    # handle explicit values
240
+                    if v != actual[k]:
241
+                        return "{}:{}".format(k, actual[k])
242
+                # handle function pointers, such as not_null or valid_ip
243
+                elif not v(actual[k]):
244
+                    return "{}:{}".format(k, actual[k])
245
+            else:
246
+                return "key '{}' does not exist".format(k)
247
+        return None
248
+
249
+    def validate_relation_data(self, sentry_unit, relation, expected):
250
+        """Validate actual relation data based on expected relation data."""
251
+        actual = sentry_unit.relation(relation[0], relation[1])
252
+        return self._validate_dict_data(expected, actual)
253
+
254
+    def _validate_list_data(self, expected, actual):
255
+        """Compare expected list vs actual list data."""
256
+        for e in expected:
257
+            if e not in actual:
258
+                return "expected item {} not found in actual list".format(e)
259
+        return None
260
+
261
+    def not_null(self, string):
262
+        if string is not None:
263
+            return True
264
+        else:
265
+            return False
266
+
267
+    def _get_file_mtime(self, sentry_unit, filename):
268
+        """Get last modification time of file."""
269
+        return sentry_unit.file_stat(filename)['mtime']
270
+
271
+    def _get_dir_mtime(self, sentry_unit, directory):
272
+        """Get last modification time of directory."""
273
+        return sentry_unit.directory_stat(directory)['mtime']
274
+
275
+    def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None):
276
+        """Get start time of a process based on the last modification time
277
+           of the /proc/pid directory.
278
+
279
+        :sentry_unit:  The sentry unit to check for the service on
280
+        :service:  service name to look for in process table
281
+        :pgrep_full:  [Deprecated] Use full command line search mode with pgrep
282
+        :returns:  epoch time of service process start
283
286
+        """
287
+        if pgrep_full is not None:
288
+            # /!\ DEPRECATION WARNING (beisner):
289
+            # No longer implemented, as pidof is now used instead of pgrep.
290
+            # https://bugs.launchpad.net/charm-helpers/+bug/1474030
291
+            self.log.warn('DEPRECATION WARNING:  pgrep_full bool is no '
292
+                          'longer implemented re: lp 1474030.')
293
+
294
+        pid_list = self.get_process_id_list(sentry_unit, service)
295
+        pid = pid_list[0]
296
+        proc_dir = '/proc/{}'.format(pid)
297
+        self.log.debug('Pid for {} on {}: {}'.format(
298
+            service, sentry_unit.info['unit_name'], pid))
299
+
300
+        return self._get_dir_mtime(sentry_unit, proc_dir)
301
+
302
+    def service_restarted(self, sentry_unit, service, filename,
303
+                          pgrep_full=None, sleep_time=20):
304
+        """Check if service was restarted.
305
+
306
+           Compare a service's start time vs a file's last modification time
307
+           (such as a config file for that service) to determine if the service
308
+           has been restarted.
309
+           """
310
+        # /!\ DEPRECATION WARNING (beisner):
311
+        # This method is prone to races in that no before-time is known.
312
+        # Use validate_service_config_changed instead.
313
+
314
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
315
+        # used instead of pgrep.  pgrep_full is still passed through to ensure
316
+        # deprecation WARNS.  lp1474030
317
+        self.log.warn('DEPRECATION WARNING:  use '
318
+                      'validate_service_config_changed instead of '
319
+                      'service_restarted due to known races.')
320
+
321
+        time.sleep(sleep_time)
322
+        if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
323
+                self._get_file_mtime(sentry_unit, filename)):
324
+            return True
325
+        else:
326
+            return False
327
+
328
+    def service_restarted_since(self, sentry_unit, mtime, service,
329
+                                pgrep_full=None, sleep_time=20,
330
+                                retry_count=30, retry_sleep_time=10):
331
+        """Check if service was been started after a given time.
332
+
333
+        Args:
334
+          sentry_unit (sentry): The sentry unit to check for the service on
335
+          mtime (float): The epoch time to check against
336
+          service (string): service name to look for in process table
337
+          pgrep_full: [Deprecated] Use full command line search mode with pgrep
338
+          sleep_time (int): Initial sleep time (s) before looking for file
339
+          retry_sleep_time (int): Time (s) to sleep between retries
340
+          retry_count (int): If file is not found, how many times to retry
341
+
342
+        Returns:
343
+          bool: True if service found and its start time is newer than mtime,
344
+                False if service is older than mtime or if service was
345
+                not found.
346
+        """
347
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
348
+        # used instead of pgrep.  pgrep_full is still passed through to ensure
349
+        # deprecation WARNS.  lp1474030
350
+
351
+        unit_name = sentry_unit.info['unit_name']
352
+        self.log.debug('Checking that %s service restarted since %s on '
353
+                       '%s' % (service, mtime, unit_name))
354
+        time.sleep(sleep_time)
355
+        proc_start_time = None
356
+        tries = 0
357
+        while tries <= retry_count and not proc_start_time:
358
+            try:
359
+                proc_start_time = self._get_proc_start_time(sentry_unit,
360
+                                                            service,
361
+                                                            pgrep_full)
362
+                self.log.debug('Attempt {} to get {} proc start time on {} '
363
+                               'OK'.format(tries, service, unit_name))
364
+            except IOError as e:
365
+                # NOTE(beisner) - race avoidance, proc may not exist yet.
366
+                # https://bugs.launchpad.net/charm-helpers/+bug/1474030
367
+                self.log.debug('Attempt {} to get {} proc start time on {} '
368
+                               'failed\n{}'.format(tries, service,
369
+                                                   unit_name, e))
370
+                time.sleep(retry_sleep_time)
371
+                tries += 1
372
+
373
+        if not proc_start_time:
374
+            self.log.warn('No proc start time found, assuming service did '
375
+                          'not start')
376
+            return False
377
+        if proc_start_time >= mtime:
378
+            self.log.debug('Proc start time is newer than provided mtime '
379
+                           '(%s >= %s) on %s (OK)' % (proc_start_time,
380
+                                                      mtime, unit_name))
381
+            return True
382
+        else:
383
+            self.log.warn('Proc start time (%s) is older than provided mtime '
384
+                          '(%s) on %s, service did not '
385
+                          'restart' % (proc_start_time, mtime, unit_name))
386
+            return False
387
+
388
+    def config_updated_since(self, sentry_unit, filename, mtime,
389
+                             sleep_time=20, retry_count=30,
390
+                             retry_sleep_time=10):
391
+        """Check if file was modified after a given time.
392
+
393
+        Args:
394
+          sentry_unit (sentry): The sentry unit to check the file mtime on
395
+          filename (string): The file to check mtime of
396
+          mtime (float): The epoch time to check against
397
+          sleep_time (int): Initial sleep time (s) before looking for file
398
+          retry_sleep_time (int): Time (s) to sleep between retries
399
+          retry_count (int): If file is not found, how many times to retry
400
+
401
+        Returns:
402
+          bool: True if file was modified more recently than mtime, False if
403
+                file was modified before mtime, or if file not found.
404
+        """
405
+        unit_name = sentry_unit.info['unit_name']
406
+        self.log.debug('Checking that %s updated since %s on '
407
+                       '%s' % (filename, mtime, unit_name))
408
+        time.sleep(sleep_time)
409
+        file_mtime = None
410
+        tries = 0
411
+        while tries <= retry_count and not file_mtime:
412
+            try:
413
+                file_mtime = self._get_file_mtime(sentry_unit, filename)
414
+                self.log.debug('Attempt {} to get {} file mtime on {} '
415
+                               'OK'.format(tries, filename, unit_name))
416
+            except IOError as e:
417
+                # NOTE(beisner) - race avoidance, file may not exist yet.
418
+                # https://bugs.launchpad.net/charm-helpers/+bug/1474030
419
+                self.log.debug('Attempt {} to get {} file mtime on {} '
420
+                               'failed\n{}'.format(tries, filename,
421
+                                                   unit_name, e))
422
+                time.sleep(retry_sleep_time)
423
+                tries += 1
424
+
425
+        if not file_mtime:
426
+            self.log.warn('Could not determine file mtime, assuming '
427
+                          'file does not exist')
428
+            return False
429
+
430
+        if file_mtime >= mtime:
431
+            self.log.debug('File mtime is newer than provided mtime '
432
+                           '(%s >= %s) on %s (OK)' % (file_mtime,
433
+                                                      mtime, unit_name))
434
+            return True
435
+        else:
436
+            self.log.warn('File mtime is older than provided mtime '
437
+                          '(%s < on %s) on %s' % (file_mtime,
438
+                                                  mtime, unit_name))
439
+            return False
440
+
441
+    def validate_service_config_changed(self, sentry_unit, mtime, service,
442
+                                        filename, pgrep_full=None,
443
+                                        sleep_time=20, retry_count=30,
444
+                                        retry_sleep_time=10):
445
+        """Check service and file were updated after mtime
446
+
447
+        Args:
448
+          sentry_unit (sentry): The sentry unit to check for the service on
449
+          mtime (float): The epoch time to check against
450
+          service (string): service name to look for in process table
451
+          filename (string): The file to check mtime of
452
+          pgrep_full: [Deprecated] Use full command line search mode with pgrep
453
+          sleep_time (int): Initial sleep in seconds to pass to test helpers
454
+          retry_count (int): If service is not found, how many times to retry
455
+          retry_sleep_time (int): Time in seconds to wait between retries
456
+
457
+        Typical Usage:
458
+            u = OpenStackAmuletUtils(ERROR)
459
+            ...
460
+            mtime = u.get_sentry_time(self.cinder_sentry)
461
+            self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'})
462
+            if not u.validate_service_config_changed(self.cinder_sentry,
463
+                                                     mtime,
464
+                                                     'cinder-api',
465
+                                                     '/etc/cinder/cinder.conf'):
466
+                amulet.raise_status(amulet.FAIL, msg='update failed')
467
+        Returns:
468
+          bool: True if both service and file were updated/restarted after
469
+                mtime, False if service is older than mtime or if service was
470
+                not found or if filename was modified before mtime.
471
+        """
472
+
473
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
474
+        # used instead of pgrep.  pgrep_full is still passed through to ensure
475
+        # deprecation WARNS.  lp1474030
476
+
477
+        service_restart = self.service_restarted_since(
478
+            sentry_unit, mtime,
479
+            service,
480
+            pgrep_full=pgrep_full,
481
+            sleep_time=sleep_time,
482
+            retry_count=retry_count,
483
+            retry_sleep_time=retry_sleep_time)
484
+
485
+        config_update = self.config_updated_since(
486
+            sentry_unit,
487
+            filename,
488
+            mtime,
489
+            sleep_time=sleep_time,
490
+            retry_count=retry_count,
491
+            retry_sleep_time=retry_sleep_time)
492
+
493
+        return service_restart and config_update
494
+
495
+    def get_sentry_time(self, sentry_unit):
496
+        """Return current epoch time on a sentry"""
497
+        cmd = "date +'%s'"
498
+        return float(sentry_unit.run(cmd)[0])
499
+
500
+    def relation_error(self, name, data):
501
+        return 'unexpected relation data in {} - {}'.format(name, data)
502
+
503
+    def endpoint_error(self, name, data):
504
+        return 'unexpected endpoint data in {} - {}'.format(name, data)
505
+
506
+    def get_ubuntu_releases(self):
507
+        """Return a list of all Ubuntu releases in order of release."""
508
+        _d = distro_info.UbuntuDistroInfo()
509
+        _release_list = _d.all
510
+        return _release_list
511
+
512
+    def file_to_url(self, file_rel_path):
513
+        """Convert a relative file path to a file URL."""
514
+        _abs_path = os.path.abspath(file_rel_path)
515
+        return urlparse.urlparse(_abs_path, scheme='file').geturl()
516
+
517
+    def check_commands_on_units(self, commands, sentry_units):
518
+        """Check that all commands in a list exit zero on all
519
+        sentry units in a list.
520
+
521
+        :param commands:  list of bash commands
522
+        :param sentry_units:  list of sentry unit pointers
523
+        :returns: None if successful; Failure message otherwise
524
+        """
525
+        self.log.debug('Checking exit codes for {} commands on {} '
526
+                       'sentry units...'.format(len(commands),
527
+                                                len(sentry_units)))
528
+        for sentry_unit in sentry_units:
529
+            for cmd in commands:
530
+                output, code = sentry_unit.run(cmd)
531
+                if code == 0:
532
+                    self.log.debug('{} `{}` returned {} '
533
+                                   '(OK)'.format(sentry_unit.info['unit_name'],
534
+                                                 cmd, code))
535
+                else:
536
+                    return ('{} `{}` returned {} '
537
+                            '{}'.format(sentry_unit.info['unit_name'],
538
+                                        cmd, code, output))
539
+        return None
540
+
541
+    def get_process_id_list(self, sentry_unit, process_name,
542
+                            expect_success=True):
543
+        """Get a list of process ID(s) from a single sentry juju unit
544
+        for a single process name.
545
+
546
+        :param sentry_unit: Amulet sentry instance (juju unit)
547
+        :param process_name: Process name
548
+        :param expect_success: If False, expect the PID to be missing,
549
+            raise if it is present.
550
+        :returns: List of process IDs
551
+        """
552
+        cmd = 'pidof -x {}'.format(process_name)
553
+        if not expect_success:
554
+            cmd += " || exit 0 && exit 1"
555
+        output, code = sentry_unit.run(cmd)
556
+        if code != 0:
557
+            msg = ('{} `{}` returned {} '
558
+                   '{}'.format(sentry_unit.info['unit_name'],
559
+                               cmd, code, output))
560
+            amulet.raise_status(amulet.FAIL, msg=msg)
561
+        return str(output).split()
562
+
563
+    def get_unit_process_ids(self, unit_processes, expect_success=True):
564
+        """Construct a dict containing unit sentries, process names, and
565
+        process IDs.
566
+
567
+        :param unit_processes: A dictionary of Amulet sentry instance
568
+            to list of process names.
569
+        :param expect_success: if False expect the processes to not be
570
+            running, raise if they are.
571
+        :returns: Dictionary of Amulet sentry instance to dictionary
572
+            of process names to PIDs.
573
+        """
574
+        pid_dict = {}
575
+        for sentry_unit, process_list in six.iteritems(unit_processes):
576
+            pid_dict[sentry_unit] = {}
577
+            for process in process_list:
578
+                pids = self.get_process_id_list(
579
+                    sentry_unit, process, expect_success=expect_success)
580
+                pid_dict[sentry_unit].update({process: pids})
581
+        return pid_dict
582
+
583
+    def validate_unit_process_ids(self, expected, actual):
584
+        """Validate process id quantities for services on units."""
585
+        self.log.debug('Checking units for running processes...')
586
+        self.log.debug('Expected PIDs: {}'.format(expected))
587
+        self.log.debug('Actual PIDs: {}'.format(actual))
588
+
589
+        if len(actual) != len(expected):
590
+            return ('Unit count mismatch.  expected, actual: {}, '
591
+                    '{} '.format(len(expected), len(actual)))
592
+
593
+        for (e_sentry, e_proc_names) in six.iteritems(expected):
594
+            e_sentry_name = e_sentry.info['unit_name']
595
+            if e_sentry in actual.keys():
596
+                a_proc_names = actual[e_sentry]
597
+            else:
598
+                return ('Expected sentry ({}) not found in actual dict data.'
599
+                        '{}'.format(e_sentry_name, e_sentry))
600
+
601
+            if len(e_proc_names.keys()) != len(a_proc_names.keys()):
602
+                return ('Process name count mismatch.  expected, actual: {}, '
603
+                        '{}'.format(len(e_proc_names), len(a_proc_names)))
604
+
605
+            for (e_proc_name, e_pids), (a_proc_name, a_pids) in \
606
+                    zip(e_proc_names.items(), a_proc_names.items()):
607
+                if e_proc_name != a_proc_name:
608
+                    return ('Process name mismatch.  expected, actual: {}, '
609
+                            '{}'.format(e_proc_name, a_proc_name))
610
+
611
+                a_pids_length = len(a_pids)
612
+                fail_msg = ('PID count mismatch. {} ({}) expected, actual: '
613
+                            '{}, {} ({})'.format(e_sentry_name, e_proc_name,
614
+                                                 e_pids, a_pids_length,
615
+                                                 a_pids))
616
+
617
+                # If expected is a list, ensure at least one PID quantity match
618
+                if isinstance(e_pids, list) and \
619
+                        a_pids_length not in e_pids:
620
+                    return fail_msg
621
+                # If expected is not bool and not list,
622
+                # ensure PID quantities match
623
+                elif not isinstance(e_pids, bool) and \
624
+                        not isinstance(e_pids, list) and \
625
+                        a_pids_length != e_pids:
626
+                    return fail_msg
627
+                # If expected is bool True, ensure 1 or more PIDs exist
628
+                elif isinstance(e_pids, bool) and \
629
+                        e_pids is True and a_pids_length < 1:
630
+                    return fail_msg
631
+                # If expected is bool False, ensure 0 PIDs exist
632
+                elif isinstance(e_pids, bool) and \
633
+                        e_pids is False and a_pids_length != 0:
634
+                    return fail_msg
635
+                else:
636
+                    self.log.debug('PID check OK: {} {} {}: '
637
+                                   '{}'.format(e_sentry_name, e_proc_name,
638
+                                               e_pids, a_pids))
639
+        return None
640
+
641
+    def validate_list_of_identical_dicts(self, list_of_dicts):
642
+        """Check that all dicts within a list are identical."""
643
+        hashes = []
644
+        for _dict in list_of_dicts:
645
+            hashes.append(hash(frozenset(_dict.items())))
646
+
647
+        self.log.debug('Hashes: {}'.format(hashes))
648
+        if len(set(hashes)) == 1:
649
+            self.log.debug('Dicts within list are identical')
650
+        else:
651
+            return 'Dicts within list are not identical'
652
+
653
+        return None
654
+
655
+    def validate_sectionless_conf(self, file_contents, expected):
656
+        """A crude conf parser.  Useful to inspect configuration files which
657
+        do not have section headers (as would be necessary in order to use
658
+        the configparser), such as openstack-dashboard or rabbitmq confs."""
659
+        for line in file_contents.split('\n'):
660
+            if '=' in line:
661
+                args = line.split('=')
662
+                if len(args) <= 1:
663
+                    continue
664
+                key = args[0].strip()
665
+                value = args[1].strip()
666
+                if key in expected.keys():
667
+                    if expected[key] != value:
668
+                        msg = ('Config mismatch.  Expected, actual:  {}, '
669
+                               '{}'.format(expected[key], value))
670
+                        amulet.raise_status(amulet.FAIL, msg=msg)
671
+
672
+    def get_unit_hostnames(self, units):
673
+        """Return a dict of juju unit names to hostnames."""
674
+        host_names = {}
675
+        for unit in units:
676
+            host_names[unit.info['unit_name']] = \
677
+                str(unit.file_contents('/etc/hostname').strip())
678
+        self.log.debug('Unit host names: {}'.format(host_names))
679
+        return host_names
680
+
681
+    def run_cmd_unit(self, sentry_unit, cmd):
682
+        """Run a command on a unit, return the output and exit code."""
683
+        output, code = sentry_unit.run(cmd)
684
+        if code == 0:
685
+            self.log.debug('{} `{}` command returned {} '
686
+                           '(OK)'.format(sentry_unit.info['unit_name'],
687
+                                         cmd, code))
688
+        else:
689
+            msg = ('{} `{}` command returned {} '
690
+                   '{}'.format(sentry_unit.info['unit_name'],
691
+                               cmd, code, output))
692
+            amulet.raise_status(amulet.FAIL, msg=msg)
693
+        return str(output), code
694
+
695
+    def file_exists_on_unit(self, sentry_unit, file_name):
696
+        """Check if a file exists on a unit."""
697
+        try:
698
+            sentry_unit.file_stat(file_name)
699
+            return True
700
+        except IOError:
701
+            return False
702
+        except Exception as e:
703
+            msg = 'Error checking file {}: {}'.format(file_name, e)
704
+            amulet.raise_status(amulet.FAIL, msg=msg)
705
+
706
+    def file_contents_safe(self, sentry_unit, file_name,
707
+                           max_wait=60, fatal=False):
708
+        """Get file contents from a sentry unit.  Wrap amulet file_contents
709
+        with retry logic to address races where a file checks as existing,
710
+        but no longer exists by the time file_contents is called.
711
+        Return None if file not found. Optionally raise if fatal is True."""
712
+        unit_name = sentry_unit.info['unit_name']
713
+        file_contents = False
714
+        tries = 0
715
+        while not file_contents and tries < (max_wait / 4):
716
+            try:
717
+                file_contents = sentry_unit.file_contents(file_name)
718
+            except IOError:
719
+                self.log.debug('Attempt {} to open file {} from {} '
720
+                               'failed'.format(tries, file_name,
721
+                                               unit_name))
722
+                time.sleep(4)
723
+                tries += 1
724
+
725
+        if file_contents:
726
+            return file_contents
727
+        elif not fatal:
728
+            return None
729
+        elif fatal:
730
+            msg = 'Failed to get file contents from unit.'
731
+            amulet.raise_status(amulet.FAIL, msg)
732
+
733
+    def port_knock_tcp(self, host="localhost", port=22, timeout=15):
734
+        """Open a TCP socket to check for a listening sevice on a host.
735
+
736
+        :param host: host name or IP address, default to localhost
737
+        :param port: TCP port number, default to 22
738
+        :param timeout: Connect timeout, default to 15 seconds
739
+        :returns: True if successful, False if connect failed
740
+        """
741
+
742
+        # Resolve host name if possible
743
+        try:
744
+            connect_host = socket.gethostbyname(host)
745
+            host_human = "{} ({})".format(connect_host, host)
746
+        except socket.error as e:
747
+            self.log.warn('Unable to resolve address: '
748
+                          '{} ({}) Trying anyway!'.format(host, e))
749
+            connect_host = host
750
+            host_human = connect_host
751
+
752
+        # Attempt socket connection
753
+        try:
754
+            knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
755
+            knock.settimeout(timeout)
756
+            knock.connect((connect_host, port))
757
+            knock.close()
758
+            self.log.debug('Socket connect OK for host '
759
+                           '{} on port {}.'.format(host_human, port))
760
+            return True
761
+        except socket.error as e:
762
+            self.log.debug('Socket connect FAIL for'
763
+                           ' {} port {} ({})'.format(host_human, port, e))
764
+            return False
765
+
766
+    def port_knock_units(self, sentry_units, port=22,
767
+                         timeout=15, expect_success=True):
768
+        """Open a TCP socket to check for a listening sevice on each
769
+        listed juju unit.
770
+
771
+        :param sentry_units: list of sentry unit pointers
772
+        :param port: TCP port number, default to 22
773
+        :param timeout: Connect timeout, default to 15 seconds
774
+        :expect_success: True by default, set False to invert logic
775
+        :returns: None if successful, Failure message otherwise
776
+        """
777
+        for unit in sentry_units:
778
+            host = unit.info['public-address']
779
+            connected = self.port_knock_tcp(host, port, timeout)
780
+            if not connected and expect_success:
781
+                return 'Socket connect failed.'
782
+            elif connected and not expect_success:
783
+                return 'Socket connected unexpectedly.'
784
+
785
+    def get_uuid_epoch_stamp(self):
786
+        """Returns a stamp string based on uuid4 and epoch time.  Useful in
787
+        generating test messages which need to be unique-ish."""
788
+        return '[{}-{}]'.format(uuid.uuid4(), time.time())
789
+
790
+# amulet juju action helpers:
791
+    def run_action(self, unit_sentry, action,
792
+                   _check_output=subprocess.check_output,
793
+                   params=None):
794
+        """Run the named action on a given unit sentry.
795
+
796
+        params is a dict of parameters to use.
797
+        _check_output parameter is used for dependency injection.
798
+
799
+        @return action_id.
800
+        """
801
+        unit_id = unit_sentry.info["unit_name"]
802
+        command = ["juju", "action", "do", "--format=json", unit_id, action]
803
+        if params is not None:
804
+            for key, value in six.iteritems(params):
805
+                command.append("{}={}".format(key, value))
806
+        self.log.info("Running command: %s\n" % " ".join(command))
807
+        output = _check_output(command, universal_newlines=True)
808
+        data = json.loads(output)
809
+        action_id = data[u'Action queued with id']
810
+        return action_id
811
+
812
+    def wait_on_action(self, action_id, _check_output=subprocess.check_output):
813
+        """Wait for a given action, returning if it completed or not.
814
+
815
+        _check_output parameter is used for dependency injection.
816
+        """
817
+        command = ["juju", "action", "fetch", "--format=json", "--wait=0",
818
+                   action_id]
819
+        output = _check_output(command, universal_newlines=True)
820
+        data = json.loads(output)
821
+        return data.get(u"status") == "completed"
822
+
823
+    def status_get(self, unit):
824
+        """Return the current service status of this unit."""
825
+        raw_status, return_code = unit.run(
826
+            "status-get --format=json --include-data")
827
+        if return_code != 0:
828
+            return ("unknown", "")
829
+        status = json.loads(raw_status)
830
+        return (status["status"], status["message"])
Back to file index
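
Reviewer's note: the helpers above are driven from the charm's amulet tests rather than used standalone. The following is a minimal, hypothetical sketch of how AmuletUtils might be exercised against a deployed cinder unit; the service name 'cinder-volume', the deployment/sentry objects passed in, and the config key toggled are illustrative assumptions, not code taken from this charm.

import logging

import amulet

from charmhelpers.contrib.amulet.utils import AmuletUtils

u = AmuletUtils(logging.DEBUG)


def check_cinder_config_reload(deployment, cinder_sentry):
    """Illustrative check: toggle config and confirm the service noticed.

    `deployment` is assumed to expose .configure() (the amulet deployment
    wrapper used throughout these helpers); `cinder_sentry` is the sentry
    for the cinder unit.
    """
    # Confirm the expected system service is running (init-system aware).
    ret = u.validate_services_by_name({cinder_sentry: ['cinder-volume']})
    if ret:
        amulet.raise_status(amulet.FAIL, msg=ret)

    # Record the unit's clock, change a config option, then verify both
    # the config file and the service were touched after that timestamp.
    mtime = u.get_sentry_time(cinder_sentry)
    deployment.configure('cinder', {'debug': 'True'})
    if not u.validate_service_config_changed(
            cinder_sentry, mtime, 'cinder-volume',
            '/etc/cinder/cinder.conf'):
        amulet.raise_status(amulet.FAIL, msg='config change not applied')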

tests/charmhelpers/contrib/openstack/__init__.py

 1
--- 
 2
+++ tests/charmhelpers/contrib/openstack/__init__.py
 3
@@ -0,0 +1,13 @@
 4
+# Copyright 2014-2015 Canonical Limited.
 5
+#
 6
+# Licensed under the Apache License, Version 2.0 (the "License");
 7
+# you may not use this file except in compliance with the License.
 8
+# You may obtain a copy of the License at
 9
+#
10
+#  http://www.apache.org/licenses/LICENSE-2.0
11
+#
12
+# Unless required by applicable law or agreed to in writing, software
13
+# distributed under the License is distributed on an "AS IS" BASIS,
14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+# See the License for the specific language governing permissions and
16
+# limitations under the License.
Back to file index

tests/charmhelpers/contrib/openstack/amulet/__init__.py

 1
--- 
 2
+++ tests/charmhelpers/contrib/openstack/amulet/__init__.py
 3
@@ -0,0 +1,13 @@
 4
+# Copyright 2014-2015 Canonical Limited.
 5
+#
 6
+# Licensed under the Apache License, Version 2.0 (the "License");
 7
+# you may not use this file except in compliance with the License.
 8
+# You may obtain a copy of the License at
 9
+#
10
+#  http://www.apache.org/licenses/LICENSE-2.0
11
+#
12
+# Unless required by applicable law or agreed to in writing, software
13
+# distributed under the License is distributed on an "AS IS" BASIS,
14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+# See the License for the specific language governing permissions and
16
+# limitations under the License.
Back to file index

tests/charmhelpers/contrib/openstack/amulet/deployment.py

  1
--- 
  2
+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py
  3
@@ -0,0 +1,345 @@
  4
+# Copyright 2014-2015 Canonical Limited.
  5
+#
  6
+# Licensed under the Apache License, Version 2.0 (the "License");
  7
+# you may not use this file except in compliance with the License.
  8
+# You may obtain a copy of the License at
  9
+#
 10
+#  http://www.apache.org/licenses/LICENSE-2.0
 11
+#
 12
+# Unless required by applicable law or agreed to in writing, software
 13
+# distributed under the License is distributed on an "AS IS" BASIS,
 14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 15
+# See the License for the specific language governing permissions and
 16
+# limitations under the License.
 17
+
 18
+import logging
 19
+import re
 20
+import sys
 21
+import six
 22
+from collections import OrderedDict
 23
+from charmhelpers.contrib.amulet.deployment import (
 24
+    AmuletDeployment
 25
+)
 26
+
 27
+DEBUG = logging.DEBUG
 28
+ERROR = logging.ERROR
 29
+
 30
+
 31
+class OpenStackAmuletDeployment(AmuletDeployment):
 32
+    """OpenStack amulet deployment.
 33
+
 34
+       This class inherits from AmuletDeployment and has additional support
 35
+       that is specifically for use by OpenStack charms.
 36
+       """
 37
+
 38
+    def __init__(self, series=None, openstack=None, source=None,
 39
+                 stable=True, log_level=DEBUG):
 40
+        """Initialize the deployment environment."""
 41
+        super(OpenStackAmuletDeployment, self).__init__(series)
 42
+        self.log = self.get_logger(level=log_level)
 43
+        self.log.info('OpenStackAmuletDeployment:  init')
 44
+        self.openstack = openstack
 45
+        self.source = source
 46
+        self.stable = stable
 47
+
 48
+    def get_logger(self, name="deployment-logger", level=logging.DEBUG):
 49
+        """Get a logger object that will log to stdout."""
 50
+        log = logging
 51
+        logger = log.getLogger(name)
 52
+        fmt = log.Formatter("%(asctime)s %(funcName)s "
 53
+                            "%(levelname)s: %(message)s")
 54
+
 55
+        handler = log.StreamHandler(stream=sys.stdout)
 56
+        handler.setLevel(level)
 57
+        handler.setFormatter(fmt)
 58
+
 59
+        logger.addHandler(handler)
 60
+        logger.setLevel(level)
 61
+
 62
+        return logger
 63
+
 64
+    def _determine_branch_locations(self, other_services):
 65
+        """Determine the branch locations for the other services.
 66
+
 67
+           Determine if the local branch being tested is derived from its
 68
+           stable or next (dev) branch, and based on this, use the corresponding
 69
+           stable or next branches for the other_services."""
 70
+
 71
+        self.log.info('OpenStackAmuletDeployment:  determine branch locations')
 72
+
 73
+        # Charms outside the ~openstack-charmers
 74
+        base_charms = {
 75
+            'mysql': ['precise', 'trusty'],
 76
+            'mongodb': ['precise', 'trusty'],
 77
+            'nrpe': ['precise', 'trusty', 'wily', 'xenial'],
 78
+        }
 79
+
 80
+        for svc in other_services:
 81
+            # If a location has been explicitly set, use it
 82
+            if svc.get('location'):
 83
+                continue
 84
+            if svc['name'] in base_charms:
 85
+                # NOTE: not all charms have support for all series we
 86
+                #       want/need to test against, so fix to most recent
 87
+                #       that each base charm supports
 88
+                target_series = self.series
 89
+                if self.series not in base_charms[svc['name']]:
 90
+                    target_series = base_charms[svc['name']][-1]
 91
+                svc['location'] = 'cs:{}/{}'.format(target_series,
 92
+                                                    svc['name'])
 93
+            elif self.stable:
 94
+                svc['location'] = 'cs:{}/{}'.format(self.series,
 95
+                                                    svc['name'])
 96
+            else:
 97
+                svc['location'] = 'cs:~openstack-charmers-next/{}/{}'.format(
 98
+                    self.series,
 99
+                    svc['name']
100
+                )
101
+
102
+        return other_services
103
+
104
+    def _add_services(self, this_service, other_services, use_source=None,
105
+                      no_origin=None):
106
+        """Add services to the deployment and optionally set
107
+        openstack-origin/source.
108
+
109
+        :param this_service dict: Service dictionary describing the service
110
+                                  whose amulet tests are being run
111
+        :param other_services dict: List of service dictionaries describing
112
+                                    the services needed to support the target
113
+                                    service
114
+        :param use_source list: List of services which use the 'source' config
115
+                                option rather than 'openstack-origin'
116
+        :param no_origin list: List of services which do not support setting
117
+                               the Cloud Archive.
118
+        Service Dict:
119
+            {
120
+                'name': str charm-name,
121
+                'units': int number of units,
122
+                'constraints': dict of juju constraints,
123
+                'location': str location of charm,
124
+            }
125
+        eg
126
+        this_service = {
127
+            'name': 'openvswitch-odl',
128
+            'constraints': {'mem': '8G'},
129
+        }
130
+        other_services = [
131
+            {
132
+                'name': 'nova-compute',
133
+                'units': 2,
134
+                'constraints': {'mem': '4G'},
135
+                'location': 'cs:~bob/xenial/nova-compute',
136
+            },
137
+            {
138
+                'name': 'mysql',
139
+                'constraints': {'mem': '2G'},
140
+            },
141
+            {'name': 'neutron-api-odl'}]
142
+        use_source = ['mysql']
143
+        no_origin = ['neutron-api-odl']
144
+        """
145
+        self.log.info('OpenStackAmuletDeployment:  adding services')
146
+
147
+        other_services = self._determine_branch_locations(other_services)
148
+
149
+        super(OpenStackAmuletDeployment, self)._add_services(this_service,
150
+                                                             other_services)
151
+
152
+        services = other_services
153
+        services.append(this_service)
154
+
155
+        use_source = use_source or []
156
+        no_origin = no_origin or []
157
+
158
+        # Charms which should use the source config option
159
+        use_source = list(set(
160
+            use_source + ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
161
+                          'ceph-osd', 'ceph-radosgw', 'ceph-mon',
162
+                          'ceph-proxy', 'percona-cluster', 'lxd']))
163
+
164
+        # Charms which cannot use openstack-origin, i.e. many subordinates
165
+        no_origin = list(set(
166
+            no_origin + ['cinder-ceph', 'hacluster', 'neutron-openvswitch',
167
+                         'nrpe', 'openvswitch-odl', 'neutron-api-odl',
168
+                         'odl-controller', 'cinder-backup', 'nexentaedge-data',
169
+                         'nexentaedge-iscsi-gw', 'nexentaedge-swift-gw',
170
+                         'cinder-nexentaedge', 'nexentaedge-mgmt']))
171
+
172
+        if self.openstack:
173
+            for svc in services:
174
+                if svc['name'] not in use_source + no_origin:
175
+                    config = {'openstack-origin': self.openstack}
176
+                    self.d.configure(svc['name'], config)
177
+
178
+        if self.source:
179
+            for svc in services:
180
+                if svc['name'] in use_source and svc['name'] not in no_origin:
181
+                    config = {'source': self.source}
182
+                    self.d.configure(svc['name'], config)
183
+
184
+    def _configure_services(self, configs):
185
+        """Configure all of the services."""
186
+        self.log.info('OpenStackAmuletDeployment:  configure services')
187
+        for service, config in six.iteritems(configs):
188
+            self.d.configure(service, config)
189
+
190
+    def _auto_wait_for_status(self, message=None, exclude_services=None,
191
+                              include_only=None, timeout=1800):
192
+        """Wait for all units to have a specific extended status, except
193
+        for any defined as excluded.  Unless specified via message, any
194
+        status containing any case of 'ready' will be considered a match.
195
+
196
+        Examples of message usage:
197
+
198
+          Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
199
+              message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
200
+
201
+          Wait for all units to reach this status (exact match):
202
+              message = re.compile('^Unit is ready and clustered$')
203
+
204
+          Wait for all units to reach any one of these (exact match):
205
+              message = re.compile('Unit is ready|OK|Ready')
206
+
207
+          Wait for at least one unit to reach this status (exact match):
208
+              message = {'ready'}
209
+
210
+        See Amulet's sentry.wait_for_messages() for message usage detail.
211
+        https://github.com/juju/amulet/blob/master/amulet/sentry.py
212
+
213
+        :param message: Expected status match
214
+        :param exclude_services: List of juju service names to ignore,
215
+            not to be used in conjunction with include_only.
216
+        :param include_only: List of juju service names to exclusively check,
217
+            not to be used in conjunction with exclude_services.
218
+        :param timeout: Maximum time in seconds to wait for status match
219
+        :returns: None.  Raises if timeout is hit.
220
+        """
221
+        self.log.info('Waiting for extended status on units...')
222
+
223
+        all_services = self.d.services.keys()
224
+
225
+        if exclude_services and include_only:
226
+            raise ValueError('exclude_services can not be used '
227
+                             'with include_only')
228
+
229
+        if message:
230
+            if isinstance(message, re._pattern_type):
231
+                match = message.pattern
232
+            else:
233
+                match = message
234
+
235
+            self.log.debug('Custom extended status wait match: '
236
+                           '{}'.format(match))
237
+        else:
238
+            self.log.debug('Default extended status wait match:  contains '
239
+                           'READY (case-insensitive)')
240
+            message = re.compile('.*ready.*', re.IGNORECASE)
241
+
242
+        if exclude_services:
243
+            self.log.debug('Excluding services from extended status match: '
244
+                           '{}'.format(exclude_services))
245
+        else:
246
+            exclude_services = []
247
+
248
+        if include_only:
249
+            services = include_only
250
+        else:
251
+            services = list(set(all_services) - set(exclude_services))
252
+
253
+        self.log.debug('Waiting up to {}s for extended status on services: '
254
+                       '{}'.format(timeout, services))
255
+        service_messages = {service: message for service in services}
256
+        self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
257
+        self.log.info('OK')
258
+
259
+    def _get_openstack_release(self):
260
+        """Get openstack release.
261
+
262
+           Return an integer representing the enum value of the openstack
263
+           release.
264
+           """
265
+        # Must be ordered by OpenStack release (not by Ubuntu release):
266
+        (self.precise_essex, self.precise_folsom, self.precise_grizzly,
267
+         self.precise_havana, self.precise_icehouse,
268
+         self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
269
+         self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
270
+         self.wily_liberty, self.trusty_mitaka,
271
+         self.xenial_mitaka, self.xenial_newton,
272
+         self.yakkety_newton) = range(16)
273
+
274
+        releases = {
275
+            ('precise', None): self.precise_essex,
276
+            ('precise', 'cloud:precise-folsom'): self.precise_folsom,
277
+            ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
278
+            ('precise', 'cloud:precise-havana'): self.precise_havana,
279
+            ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
280
+            ('trusty', None): self.trusty_icehouse,
281
+            ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
282
+            ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
283
+            ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
284
+            ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
285
+            ('utopic', None): self.utopic_juno,
286
+            ('vivid', None): self.vivid_kilo,
287
+            ('wily', None): self.wily_liberty,
288
+            ('xenial', None): self.xenial_mitaka,
289
+            ('xenial', 'cloud:xenial-newton'): self.xenial_newton,
290
+            ('yakkety', None): self.yakkety_newton,
291
+        }
292
+        return releases[(self.series, self.openstack)]
293
+
294
+    def _get_openstack_release_string(self):
295
+        """Get openstack release string.
296
+
297
+           Return a string representing the openstack release.
298
+           """
299
+        releases = OrderedDict([
300
+            ('precise', 'essex'),
301
+            ('quantal', 'folsom'),
302
+            ('raring', 'grizzly'),
303
+            ('saucy', 'havana'),
304
+            ('trusty', 'icehouse'),
305
+            ('utopic', 'juno'),
306
+            ('vivid', 'kilo'),
307
+            ('wily', 'liberty'),
308
+            ('xenial', 'mitaka'),
309
+            ('yakkety', 'newton'),
310
+        ])
311
+        if self.openstack:
312
+            os_origin = self.openstack.split(':')[1]
313
+            return os_origin.split('%s-' % self.series)[1].split('/')[0]
314
+        else:
315
+            return releases[self.series]
316
+
317
+    def get_ceph_expected_pools(self, radosgw=False):
318
+        """Return a list of expected ceph pools in a ceph + cinder + glance
319
+        test scenario, based on OpenStack release and whether ceph radosgw
320
+        is flagged as present or not."""
321
+
322
+        if self._get_openstack_release() >= self.trusty_kilo:
323
+            # Kilo or later
324
+            pools = [
325
+                'rbd',
326
+                'cinder',
327
+                'glance'
328
+            ]
329
+        else:
330
+            # Juno or earlier
331
+            pools = [
332
+                'data',
333
+                'metadata',
334
+                'rbd',
335
+                'cinder',
336
+                'glance'
337
+            ]
338
+
339
+        if radosgw:
340
+            pools.extend([
341
+                '.rgw.root',
342
+                '.rgw.control',
343
+                '.rgw',
344
+                '.rgw.gc',
345
+                '.users.uid'
346
+            ])
347
+
348
+        return pools
Back to file index
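
Reviewer's note: a charm's amulet tests normally subclass OpenStackAmuletDeployment instead of instantiating it directly. The sketch below is illustrative only: the class name, the service list, the config values, and the setup timeout are assumptions made for this review, and the charm's real test fixture may differ.

import logging

from charmhelpers.contrib.openstack.amulet.deployment import (
    OpenStackAmuletDeployment,
)


class CinderDS8KBasicDeployment(OpenStackAmuletDeployment):
    """Hypothetical deployment fixture for the charm's amulet tests."""

    def __init__(self, series='xenial', openstack=None, source=None,
                 stable=False):
        super(CinderDS8KBasicDeployment, self).__init__(
            series, openstack, source, stable, log_level=logging.DEBUG)
        # Services follow the Service Dict format documented in
        # _add_services() above; the names here are illustrative.
        this_service = {'name': 'ibm-cinder-ds8k'}
        other_services = [{'name': 'cinder'}, {'name': 'rabbitmq-server'}]
        self._add_services(this_service, other_services)
        self._configure_services({'cinder': {'debug': 'True'}})
        # self.d is the underlying amulet deployment; deploy, then wait for
        # every unit to report a 'ready'-style extended status.
        self.d.setup(timeout=900)
        self._auto_wait_for_status()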

tests/charmhelpers/contrib/openstack/amulet/utils.py

   1
--- 
   2
+++ tests/charmhelpers/contrib/openstack/amulet/utils.py
   3
@@ -0,0 +1,1124 @@
   4
+# Copyright 2014-2015 Canonical Limited.
   5
+#
   6
+# Licensed under the Apache License, Version 2.0 (the "License");
   7
+# you may not use this file except in compliance with the License.
   8
+# You may obtain a copy of the License at
   9
+#
  10
+#  http://www.apache.org/licenses/LICENSE-2.0
  11
+#
  12
+# Unless required by applicable law or agreed to in writing, software
  13
+# distributed under the License is distributed on an "AS IS" BASIS,
  14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  15
+# See the License for the specific language governing permissions and
  16
+# limitations under the License.
  17
+
  18
+import amulet
  19
+import json
  20
+import logging
  21
+import os
  22
+import re
  23
+import six
  24
+import time
  25
+import urllib
  26
+
  27
+import cinderclient.v1.client as cinder_client
  28
+import glanceclient.v1.client as glance_client
  29
+import heatclient.v1.client as heat_client
  30
+import keystoneclient.v2_0 as keystone_client
  31
+from keystoneclient.auth.identity import v3 as keystone_id_v3
  32
+from keystoneclient import session as keystone_session
  33
+from keystoneclient.v3 import client as keystone_client_v3
  34
+
  35
+import novaclient.client as nova_client
  36
+import pika
  37
+import swiftclient
  38
+
  39
+from charmhelpers.contrib.amulet.utils import (
  40
+    AmuletUtils
  41
+)
  42
+
  43
+DEBUG = logging.DEBUG
  44
+ERROR = logging.ERROR
  45
+
  46
+NOVA_CLIENT_VERSION = "2"
  47
+
  48
+
  49
+class OpenStackAmuletUtils(AmuletUtils):
  50
+    """OpenStack amulet utilities.
  51
+
  52
+       This class inherits from AmuletUtils and has additional support
  53
+       that is specifically for use by OpenStack charm tests.
  54
+       """
  55
+
  56
+    def __init__(self, log_level=ERROR):
  57
+        """Initialize the deployment environment."""
  58
+        super(OpenStackAmuletUtils, self).__init__(log_level)
  59
+
  60
+    def validate_endpoint_data(self, endpoints, admin_port, internal_port,
  61
+                               public_port, expected):
  62
+        """Validate endpoint data.
  63
+
  64
+           Validate actual endpoint data vs expected endpoint data. The ports
  65
+           are used to find the matching endpoint.
  66
+           """
  67
+        self.log.debug('Validating endpoint data...')
  68
+        self.log.debug('actual: {}'.format(repr(endpoints)))
  69
+        found = False
  70
+        for ep in endpoints:
  71
+            self.log.debug('endpoint: {}'.format(repr(ep)))
  72
+            if (admin_port in ep.adminurl and
  73
+                    internal_port in ep.internalurl and
  74
+                    public_port in ep.publicurl):
  75
+                found = True
  76
+                actual = {'id': ep.id,
  77
+                          'region': ep.region,
  78
+                          'adminurl': ep.adminurl,
  79
+                          'internalurl': ep.internalurl,
  80
+                          'publicurl': ep.publicurl,
  81
+                          'service_id': ep.service_id}
  82
+                ret = self._validate_dict_data(expected, actual)
  83
+                if ret:
  84
+                    return 'unexpected endpoint data - {}'.format(ret)
  85
+
  86
+        if not found:
  87
+            return 'endpoint not found'
  88
+
  89
+    def validate_v3_endpoint_data(self, endpoints, admin_port, internal_port,
  90
+                                  public_port, expected):
  91
+        """Validate keystone v3 endpoint data.
  92
+
  93
+        Validate the v3 endpoint data which has changed from v2.  The
  94
+        ports are used to find the matching endpoint.
  95
+
  96
+        The new v3 endpoint data looks like:
  97
+
  98
+        [<Endpoint enabled=True,
  99
+                   id=0432655fc2f74d1e9fa17bdaa6f6e60b,
 100
+                   interface=admin,
 101
+                   links={u'self': u'<RESTful URL of this endpoint>'},
 102
+                   region=RegionOne,
 103
+                   region_id=RegionOne,
 104
+                   service_id=17f842a0dc084b928e476fafe67e4095,
 105
+                   url=http://10.5.6.5:9312>,
 106
+         <Endpoint enabled=True,
 107
+                   id=6536cb6cb92f4f41bf22b079935c7707,
 108
+                   interface=admin,
 109
+                   links={u'self': u'<RESTful url of this endpoint>'},
 110
+                   region=RegionOne,
 111
+                   region_id=RegionOne,
 112
+                   service_id=72fc8736fb41435e8b3584205bb2cfa3,
 113
+                   url=http://10.5.6.6:35357/v3>,
 114
+                   ... ]
 115
+        """
 116
+        self.log.debug('Validating v3 endpoint data...')
 117
+        self.log.debug('actual: {}'.format(repr(endpoints)))
 118
+        found = []
 119
+        for ep in endpoints:
 120
+            self.log.debug('endpoint: {}'.format(repr(ep)))
 121
+            if ((admin_port in ep.url and ep.interface == 'admin') or
 122
+                    (internal_port in ep.url and ep.interface == 'internal') or
 123
+                    (public_port in ep.url and ep.interface == 'public')):
 124
+                found.append(ep.interface)
 125
+                # note we ignore the links member.
 126
+                actual = {'id': ep.id,
 127
+                          'region': ep.region,
 128
+                          'region_id': ep.region_id,
 129
+                          'interface': self.not_null,
 130
+                          'url': ep.url,
 131
+                          'service_id': ep.service_id, }
 132
+                ret = self._validate_dict_data(expected, actual)
 133
+                if ret:
 134
+                    return 'unexpected endpoint data - {}'.format(ret)
 135
+
 136
+        if len(found) != 3:
 137
+            return 'Unexpected number of endpoints found'
 138
+
 139
+    def validate_svc_catalog_endpoint_data(self, expected, actual):
 140
+        """Validate service catalog endpoint data.
 141
+
 142
+           Validate a list of actual service catalog endpoints vs a list of
 143
+           expected service catalog endpoints.
 144
+           """
 145
+        self.log.debug('Validating service catalog endpoint data...')
 146
+        self.log.debug('actual: {}'.format(repr(actual)))
 147
+        for k, v in six.iteritems(expected):
 148
+            if k in actual:
 149
+                ret = self._validate_dict_data(expected[k][0], actual[k][0])
 150
+                if ret:
 151
+                    return self.endpoint_error(k, ret)
 152
+            else:
 153
+                return "endpoint {} does not exist".format(k)
 154
+        return ret
 155
+
 156
+    def validate_v3_svc_catalog_endpoint_data(self, expected, actual):
 157
+        """Validate the keystone v3 catalog endpoint data.
 158
+
 159
+        Validate a list of dictionaries that make up the keystone v3 service
 160
+        catalogue.
 161
+
 162
+        It is in the form of:
 163
+
 164
+
 165
+        {u'identity': [{u'id': u'48346b01c6804b298cdd7349aadb732e',
 166
+                        u'interface': u'admin',
 167
+                        u'region': u'RegionOne',
 168
+                        u'region_id': u'RegionOne',
 169
+                        u'url': u'http://10.5.5.224:35357/v3'},
 170
+                       {u'id': u'8414f7352a4b47a69fddd9dbd2aef5cf',
 171
+                        u'interface': u'public',
 172
+                        u'region': u'RegionOne',
 173
+                        u'region_id': u'RegionOne',
 174
+                        u'url': u'http://10.5.5.224:5000/v3'},
 175
+                       {u'id': u'd5ca31440cc24ee1bf625e2996fb6a5b',
 176
+                        u'interface': u'internal',
 177
+                        u'region': u'RegionOne',
 178
+                        u'region_id': u'RegionOne',
 179
+                        u'url': u'http://10.5.5.224:5000/v3'}],
 180
+         u'key-manager': [{u'id': u'68ebc17df0b045fcb8a8a433ebea9e62',
 181
+                           u'interface': u'public',
 182
+                           u'region': u'RegionOne',
 183
+                           u'region_id': u'RegionOne',
 184
+                           u'url': u'http://10.5.5.223:9311'},
 185
+                          {u'id': u'9cdfe2a893c34afd8f504eb218cd2f9d',
 186
+                           u'interface': u'internal',
 187
+                           u'region': u'RegionOne',
 188
+                           u'region_id': u'RegionOne',
 189
+                           u'url': u'http://10.5.5.223:9311'},
 190
+                          {u'id': u'f629388955bc407f8b11d8b7ca168086',
 191
+                           u'interface': u'admin',
 192
+                           u'region': u'RegionOne',
 193
+                           u'region_id': u'RegionOne',
 194
+                           u'url': u'http://10.5.5.223:9312'}]}
 195
+
 196
+        Note that an added complication is that the order of the admin,
 197
+        public and internal interfaces is not guaranteed in each region.
 198
+
 199
+        Thus, the function sorts the expected and actual lists using the
 200
+        interface key as a sort key, prior to the comparison.
 201
+        """
 202
+        self.log.debug('Validating v3 service catalog endpoint data...')
 203
+        self.log.debug('actual: {}'.format(repr(actual)))
 204
+        for k, v in six.iteritems(expected):
 205
+            if k in actual:
 206
+                l_expected = sorted(v, key=lambda x: x['interface'])
 207
+                l_actual = sorted(actual[k], key=lambda x: x['interface'])
 208
+                if len(l_actual) != len(l_expected):
 209
+                    return ("endpoint {} has differing number of interfaces "
 210
+                            " - expected({}), actual({})"
 211
+                            .format(k, len(l_expected), len(l_actual)))
 212
+                for i_expected, i_actual in zip(l_expected, l_actual):
 213
+                    self.log.debug("checking interface {}"
 214
+                                   .format(i_expected['interface']))
 215
+                    ret = self._validate_dict_data(i_expected, i_actual)
 216
+                    if ret:
 217
+                        return self.endpoint_error(k, ret)
 218
+            else:
 219
+                return "endpoint {} does not exist".format(k)
 220
+        return ret
 221
+
 222
+    def validate_tenant_data(self, expected, actual):
 223
+        """Validate tenant data.
 224
+
 225
+           Validate a list of actual tenant data vs list of expected tenant
 226
+           data.
 227
+           """
 228
+        self.log.debug('Validating tenant data...')
 229
+        self.log.debug('actual: {}'.format(repr(actual)))
 230
+        for e in expected:
 231
+            found = False
 232
+            for act in actual:
 233
+                a = {'enabled': act.enabled, 'description': act.description,
 234
+                     'name': act.name, 'id': act.id}
 235
+                if e['name'] == a['name']:
 236
+                    found = True
 237
+                    ret = self._validate_dict_data(e, a)
 238
+                    if ret:
 239
+                        return "unexpected tenant data - {}".format(ret)
 240
+            if not found:
 241
+                return "tenant {} does not exist".format(e['name'])
 242
+        return ret
 243
+
 244
+    def validate_role_data(self, expected, actual):
 245
+        """Validate role data.
 246
+
 247
+           Validate a list of actual role data vs a list of expected role
 248
+           data.
 249
+           """
 250
+        self.log.debug('Validating role data...')
 251
+        self.log.debug('actual: {}'.format(repr(actual)))
 252
+        for e in expected:
 253
+            found = False
 254
+            for act in actual:
 255
+                a = {'name': act.name, 'id': act.id}
 256
+                if e['name'] == a['name']:
 257
+                    found = True
 258
+                    ret = self._validate_dict_data(e, a)
 259
+                    if ret:
 260
+                        return "unexpected role data - {}".format(ret)
 261
+            if not found:
 262
+                return "role {} does not exist".format(e['name'])
 263
+        return ret
 264
+
 265
+    def validate_user_data(self, expected, actual, api_version=None):
 266
+        """Validate user data.
 267
+
 268
+           Validate a list of actual user data vs a list of expected user
 269
+           data.
 270
+           """
 271
+        self.log.debug('Validating user data...')
 272
+        self.log.debug('actual: {}'.format(repr(actual)))
 273
+        for e in expected:
 274
+            found = False
 275
+            for act in actual:
 276
+                if e['name'] == act.name:
 277
+                    a = {'enabled': act.enabled, 'name': act.name,
 278
+                         'email': act.email, 'id': act.id}
 279
+                    if api_version == 3:
 280
+                        a['default_project_id'] = getattr(act,
 281
+                                                          'default_project_id',
 282
+                                                          'none')
 283
+                    else:
 284
+                        a['tenantId'] = act.tenantId
 285
+                    found = True
 286
+                    ret = self._validate_dict_data(e, a)
 287
+                    if ret:
 288
+                        return "unexpected user data - {}".format(ret)
 289
+            if not found:
 290
+                return "user {} does not exist".format(e['name'])
 291
+        return ret
 292
+
 293
+    def validate_flavor_data(self, expected, actual):
 294
+        """Validate flavor data.
 295
+
 296
+           Validate a list of actual flavors vs a list of expected flavors.
 297
+           """
 298
+        self.log.debug('Validating flavor data...')
 299
+        self.log.debug('actual: {}'.format(repr(actual)))
 300
+        act = [a.name for a in actual]
 301
+        return self._validate_list_data(expected, act)
 302
+
 303
+    def tenant_exists(self, keystone, tenant):
 304
+        """Return True if tenant exists."""
 305
+        self.log.debug('Checking if tenant exists ({})...'.format(tenant))
 306
+        return tenant in [t.name for t in keystone.tenants.list()]
 307
+
 308
+    def authenticate_cinder_admin(self, keystone_sentry, username,
 309
+                                  password, tenant):
 310
+        """Authenticates admin user with cinder."""
 311
+        # NOTE(beisner): cinder python client doesn't accept tokens.
 312
+        keystone_ip = keystone_sentry.info['public-address']
 313
+        ept = "http://{}:5000/v2.0".format(keystone_ip.strip().decode('utf-8'))
 314
+        return cinder_client.Client(username, password, tenant, ept)
 315
+
 316
+    def authenticate_keystone_admin(self, keystone_sentry, user, password,
 317
+                                    tenant=None, api_version=None,
 318
+                                    keystone_ip=None):
 319
+        """Authenticates admin user with the keystone admin endpoint."""
 320
+        self.log.debug('Authenticating keystone admin...')
 321
+        if not keystone_ip:
 322
+            keystone_ip = keystone_sentry.info['public-address']
 323
+
 324
+        base_ep = "http://{}:35357".format(keystone_ip.strip().decode('utf-8'))
 325
+        if not api_version or api_version == 2:
 326
+            ep = base_ep + "/v2.0"
 327
+            return keystone_client.Client(username=user, password=password,
 328
+                                          tenant_name=tenant, auth_url=ep)
 329
+        else:
 330
+            ep = base_ep + "/v3"
 331
+            auth = keystone_id_v3.Password(
 332
+                user_domain_name='admin_domain',
 333
+                username=user,
 334
+                password=password,
 335
+                domain_name='admin_domain',
 336
+                auth_url=ep,
 337
+            )
 338
+            sess = keystone_session.Session(auth=auth)
 339
+            return keystone_client_v3.Client(session=sess)
 340
+
 341
+    def authenticate_keystone_user(self, keystone, user, password, tenant):
 342
+        """Authenticates a regular user with the keystone public endpoint."""
 343
+        self.log.debug('Authenticating keystone user ({})...'.format(user))
 344
+        ep = keystone.service_catalog.url_for(service_type='identity',
 345
+                                              endpoint_type='publicURL')
 346
+        return keystone_client.Client(username=user, password=password,
 347
+                                      tenant_name=tenant, auth_url=ep)
 348
+
 349
+    def authenticate_glance_admin(self, keystone):
 350
+        """Authenticates admin user with glance."""
 351
+        self.log.debug('Authenticating glance admin...')
 352
+        ep = keystone.service_catalog.url_for(service_type='image',
 353
+                                              endpoint_type='adminURL')
 354
+        return glance_client.Client(ep, token=keystone.auth_token)
 355
+
 356
+    def authenticate_heat_admin(self, keystone):
 357
+        """Authenticates the admin user with heat."""
 358
+        self.log.debug('Authenticating heat admin...')
 359
+        ep = keystone.service_catalog.url_for(service_type='orchestration',
 360
+                                              endpoint_type='publicURL')
 361
+        return heat_client.Client(endpoint=ep, token=keystone.auth_token)
 362
+
 363
+    def authenticate_nova_user(self, keystone, user, password, tenant):
 364
+        """Authenticates a regular user with nova-api."""
 365
+        self.log.debug('Authenticating nova user ({})...'.format(user))
 366
+        ep = keystone.service_catalog.url_for(service_type='identity',
 367
+                                              endpoint_type='publicURL')
 368
+        return nova_client.Client(NOVA_CLIENT_VERSION,
 369
+                                  username=user, api_key=password,
 370
+                                  project_id=tenant, auth_url=ep)
 371
+
 372
+    def authenticate_swift_user(self, keystone, user, password, tenant):
 373
+        """Authenticates a regular user with swift api."""
 374
+        self.log.debug('Authenticating swift user ({})...'.format(user))
 375
+        ep = keystone.service_catalog.url_for(service_type='identity',
 376
+                                              endpoint_type='publicURL')
 377
+        return swiftclient.Connection(authurl=ep,
 378
+                                      user=user,
 379
+                                      key=password,
 380
+                                      tenant_name=tenant,
 381
+                                      auth_version='2.0')
 382
+
 383
+    def create_cirros_image(self, glance, image_name):
 384
+        """Download the latest cirros image and upload it to glance,
 385
+        validate and return a resource pointer.
 386
+
 387
+        :param glance: pointer to authenticated glance connection
 388
+        :param image_name: display name for new image
 389
+        :returns: glance image pointer
 390
+        """
 391
+        self.log.debug('Creating glance cirros image '
 392
+                       '({})...'.format(image_name))
 393
+
 394
+        # Download cirros image
 395
+        http_proxy = os.getenv('AMULET_HTTP_PROXY')
 396
+        self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
 397
+        if http_proxy:
 398
+            proxies = {'http': http_proxy}
 399
+            opener = urllib.FancyURLopener(proxies)
 400
+        else:
 401
+            opener = urllib.FancyURLopener()
 402
+
 403
+        f = opener.open('http://download.cirros-cloud.net/version/released')
 404
+        version = f.read().strip()
 405
+        cirros_img = 'cirros-{}-x86_64-disk.img'.format(version)
 406
+        local_path = os.path.join('tests', cirros_img)
 407
+
 408
+        if not os.path.exists(local_path):
 409
+            cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net',
 410
+                                                  version, cirros_img)
 411
+            opener.retrieve(cirros_url, local_path)
 412
+        f.close()
 413
+
 414
+        # Create glance image
 415
+        with open(local_path) as f:
 416
+            image = glance.images.create(name=image_name, is_public=True,
 417
+                                         disk_format='qcow2',
 418
+                                         container_format='bare', data=f)
 419
+
 420
+        # Wait for image to reach active status
 421
+        img_id = image.id
 422
+        ret = self.resource_reaches_status(glance.images, img_id,
 423
+                                           expected_stat='active',
 424
+                                           msg='Image status wait')
 425
+        if not ret:
 426
+            msg = 'Glance image failed to reach expected state.'
 427
+            amulet.raise_status(amulet.FAIL, msg=msg)
 428
+
 429
+        # Re-validate new image
 430
+        self.log.debug('Validating image attributes...')
 431
+        val_img_name = glance.images.get(img_id).name
 432
+        val_img_stat = glance.images.get(img_id).status
 433
+        val_img_pub = glance.images.get(img_id).is_public
 434
+        val_img_cfmt = glance.images.get(img_id).container_format
 435
+        val_img_dfmt = glance.images.get(img_id).disk_format
 436
+        msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} '
 437
+                    'container fmt:{} disk fmt:{}'.format(
 438
+                        val_img_name, val_img_pub, img_id,
 439
+                        val_img_stat, val_img_cfmt, val_img_dfmt))
 440
+
 441
+        if val_img_name == image_name and val_img_stat == 'active' \
 442
+                and val_img_pub is True and val_img_cfmt == 'bare' \
 443
+                and val_img_dfmt == 'qcow2':
 444
+            self.log.debug(msg_attr)
 445
+        else:
 446
+            msg = ('Image validation failed, {}'.format(msg_attr))
 447
+            amulet.raise_status(amulet.FAIL, msg=msg)
 448
+
 449
+        return image
 450
+
 451
+    def delete_image(self, glance, image):
 452
+        """Delete the specified image."""
 453
+
 454
+        # /!\ DEPRECATION WARNING
 455
+        self.log.warn('/!\\ DEPRECATION WARNING:  use '
 456
+                      'delete_resource instead of delete_image.')
 457
+        self.log.debug('Deleting glance image ({})...'.format(image))
 458
+        return self.delete_resource(glance.images, image, msg='glance image')
 459
+
 460
+    def create_instance(self, nova, image_name, instance_name, flavor):
 461
+        """Create the specified instance."""
 462
+        self.log.debug('Creating instance '
 463
+                       '({}|{}|{})'.format(instance_name, image_name, flavor))
 464
+        image = nova.images.find(name=image_name)
 465
+        flavor = nova.flavors.find(name=flavor)
 466
+        instance = nova.servers.create(name=instance_name, image=image,
 467
+                                       flavor=flavor)
 468
+
 469
+        count = 1
 470
+        status = instance.status
 471
+        while status != 'ACTIVE' and count < 60:
 472
+            time.sleep(3)
 473
+            instance = nova.servers.get(instance.id)
 474
+            status = instance.status
 475
+            self.log.debug('instance status: {}'.format(status))
 476
+            count += 1
 477
+
 478
+        if status != 'ACTIVE':
 479
+            self.log.error('instance creation timed out')
 480
+            return None
 481
+
 482
+        return instance
 483
+
 484
+    def delete_instance(self, nova, instance):
 485
+        """Delete the specified instance."""
 486
+
 487
+        # /!\ DEPRECATION WARNING
 488
+        self.log.warn('/!\\ DEPRECATION WARNING:  use '
 489
+                      'delete_resource instead of delete_instance.')
 490
+        self.log.debug('Deleting instance ({})...'.format(instance))
 491
+        return self.delete_resource(nova.servers, instance,
 492
+                                    msg='nova instance')
 493
+
 494
+    def create_or_get_keypair(self, nova, keypair_name="testkey"):
 495
+        """Create a new keypair, or return pointer if it already exists."""
 496
+        try:
 497
+            _keypair = nova.keypairs.get(keypair_name)
 498
+            self.log.debug('Keypair ({}) already exists, '
 499
+                           'using it.'.format(keypair_name))
 500
+            return _keypair
 501
+        except Exception:
 502
+            self.log.debug('Keypair ({}) does not exist, '
 503
+                           'creating it.'.format(keypair_name))
 504
+
 505
+        _keypair = nova.keypairs.create(name=keypair_name)
 506
+        return _keypair
 507
+
 508
+    def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
 509
+                             img_id=None, src_vol_id=None, snap_id=None):
 510
+        """Create cinder volume, optionally from a glance image, OR
 511
+        optionally as a clone of an existing volume, OR optionally
 512
+        from a snapshot.  Wait for the new volume status to reach
 513
+        the expected status, validate and return a resource pointer.
 514
+
 515
+        :param vol_name: cinder volume display name
 516
+        :param vol_size: size in gigabytes
 517
+        :param img_id: optional glance image id
 518
+        :param src_vol_id: optional source volume id to clone
 519
+        :param snap_id: optional snapshot id to use
 520
+        :returns: cinder volume pointer
 521
+        """
 522
+        # Handle parameter input and avoid impossible combinations
 523
+        if img_id and not src_vol_id and not snap_id:
 524
+            # Create volume from image
 525
+            self.log.debug('Creating cinder volume from glance image...')
 526
+            bootable = 'true'
 527
+        elif src_vol_id and not img_id and not snap_id:
 528
+            # Clone an existing volume
 529
+            self.log.debug('Cloning cinder volume...')
 530
+            bootable = cinder.volumes.get(src_vol_id).bootable
 531
+        elif snap_id and not src_vol_id and not img_id:
 532
+            # Create volume from snapshot
 533
+            self.log.debug('Creating cinder volume from snapshot...')
 534
+            snap = cinder.volume_snapshots.find(id=snap_id)
 535
+            vol_size = snap.size
 536
+            snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
 537
+            bootable = cinder.volumes.get(snap_vol_id).bootable
 538
+        elif not img_id and not src_vol_id and not snap_id:
 539
+            # Create volume
 540
+            self.log.debug('Creating cinder volume...')
 541
+            bootable = 'false'
 542
+        else:
 543
+            # Impossible combination of parameters
 544
+            msg = ('Invalid method use - name:{} size:{} img_id:{} '
 545
+                   'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
 546
+                                                     img_id, src_vol_id,
 547
+                                                     snap_id))
 548
+            amulet.raise_status(amulet.FAIL, msg=msg)
 549
+
 550
+        # Create new volume
 551
+        try:
 552
+            vol_new = cinder.volumes.create(display_name=vol_name,
 553
+                                            imageRef=img_id,
 554
+                                            size=vol_size,
 555
+                                            source_volid=src_vol_id,
 556
+                                            snapshot_id=snap_id)
 557
+            vol_id = vol_new.id
 558
+        except Exception as e:
 559
+            msg = 'Failed to create volume: {}'.format(e)
 560
+            amulet.raise_status(amulet.FAIL, msg=msg)
 561
+
 562
+        # Wait for volume to reach available status
 563
+        ret = self.resource_reaches_status(cinder.volumes, vol_id,
 564
+                                           expected_stat="available",
 565
+                                           msg="Volume status wait")
 566
+        if not ret:
 567
+            msg = 'Cinder volume failed to reach expected state.'
 568
+            amulet.raise_status(amulet.FAIL, msg=msg)
 569
+
 570
+        # Re-validate new volume
 571
+        self.log.debug('Validating volume attributes...')
 572
+        val_vol_name = cinder.volumes.get(vol_id).display_name
 573
+        val_vol_boot = cinder.volumes.get(vol_id).bootable
 574
+        val_vol_stat = cinder.volumes.get(vol_id).status
 575
+        val_vol_size = cinder.volumes.get(vol_id).size
 576
+        msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
 577
+                    '{} size:{}'.format(val_vol_name, vol_id,
 578
+                                        val_vol_stat, val_vol_boot,
 579
+                                        val_vol_size))
 580
+
 581
+        if val_vol_boot == bootable and val_vol_stat == 'available' \
 582
+                and val_vol_name == vol_name and val_vol_size == vol_size:
 583
+            self.log.debug(msg_attr)
 584
+        else:
 585
+            msg = ('Volume validation failed, {}'.format(msg_attr))
 586
+            amulet.raise_status(amulet.FAIL, msg=msg)
 587
+
 588
+        return vol_new
 589
+
 590
+    def delete_resource(self, resource, resource_id,
 591
+                        msg="resource", max_wait=120):
 592
+        """Delete one openstack resource, such as one instance, keypair,
 593
+        image, volume, stack, etc., and confirm deletion within max wait time.
 594
+
 595
+        :param resource: pointer to os resource type, ex:glance_client.images
 596
+        :param resource_id: unique name or id for the openstack resource
 597
+        :param msg: text to identify purpose in logging
 598
+        :param max_wait: maximum wait time in seconds
 599
+        :returns: True if successful, otherwise False
 600
+        """
 601
+        self.log.debug('Deleting OpenStack resource '
 602
+                       '{} ({})'.format(resource_id, msg))
 603
+        num_before = len(list(resource.list()))
 604
+        resource.delete(resource_id)
 605
+
 606
+        tries = 0
 607
+        num_after = len(list(resource.list()))
 608
+        while num_after != (num_before - 1) and tries < (max_wait / 4):
 609
+            self.log.debug('{} delete check: '
 610
+                           '{} [{}:{}] {}'.format(msg, tries,
 611
+                                                  num_before,
 612
+                                                  num_after,
 613
+                                                  resource_id))
 614
+            time.sleep(4)
 615
+            num_after = len(list(resource.list()))
 616
+            tries += 1
 617
+
 618
+        self.log.debug('{}:  expected, actual count = {}, '
 619
+                       '{}'.format(msg, num_before - 1, num_after))
 620
+
 621
+        if num_after == (num_before - 1):
 622
+            return True
 623
+        else:
 624
+            self.log.error('{} delete timed out'.format(msg))
 625
+            return False
 626
+
 627
+    def resource_reaches_status(self, resource, resource_id,
 628
+                                expected_stat='available',
 629
+                                msg='resource', max_wait=120):
 630
+        """Wait for an openstack resources status to reach an
 631
+           expected status within a specified time.  Useful to confirm that
 632
+           nova instances, cinder vols, snapshots, glance images, heat stacks
 633
+           and other resources eventually reach the expected status.
 634
+
 635
+        :param resource: pointer to os resource type, ex: heat_client.stacks
 636
+        :param resource_id: unique id for the openstack resource
 637
+        :param expected_stat: status to expect resource to reach
 638
+        :param msg: text to identify purpose in logging
 639
+        :param max_wait: maximum wait time in seconds
 640
+        :returns: True if successful, False if status is not reached
 641
+        """
 642
+
 643
+        tries = 0
 644
+        resource_stat = resource.get(resource_id).status
 645
+        while resource_stat != expected_stat and tries < (max_wait / 4):
 646
+            self.log.debug('{} status check: '
 647
+                           '{} [{}:{}] {}'.format(msg, tries,
 648
+                                                  resource_stat,
 649
+                                                  expected_stat,
 650
+                                                  resource_id))
 651
+            time.sleep(4)
 652
+            resource_stat = resource.get(resource_id).status
 653
+            tries += 1
 654
+
 655
+        self.log.debug('{}:  expected, actual status = {}, '
 656
+                       '{}'.format(msg, expected_stat, resource_stat))
 657
+
 658
+        if resource_stat == expected_stat:
 659
+            return True
 660
+        else:
 661
+            self.log.debug('{} never reached expected status: '
 662
+                           '{}'.format(resource_id, expected_stat))
 663
+            return False
 664
+
 665
+    def get_ceph_osd_id_cmd(self, index):
 666
+        """Produce a shell command that will return a ceph-osd id."""
 667
+        return ("`initctl list | grep 'ceph-osd ' | "
 668
+                "awk 'NR=={} {{ print $2 }}' | "
 669
+                "grep -o '[0-9]*'`".format(index + 1))
 670
+
 671
+    def get_ceph_pools(self, sentry_unit):
 672
+        """Return a dict of ceph pools from a single ceph unit, with
 673
+        pool name as keys, pool id as vals."""
 674
+        pools = {}
 675
+        cmd = 'sudo ceph osd lspools'
 676
+        output, code = sentry_unit.run(cmd)
 677
+        if code != 0:
 678
+            msg = ('{} `{}` returned {} '
 679
+                   '{}'.format(sentry_unit.info['unit_name'],
 680
+                               cmd, code, output))
 681
+            amulet.raise_status(amulet.FAIL, msg=msg)
 682
+
 683
+        # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
 684
+        for pool in str(output).split(','):
 685
+            pool_id_name = pool.split(' ')
 686
+            if len(pool_id_name) == 2:
 687
+                pool_id = pool_id_name[0]
 688
+                pool_name = pool_id_name[1]
 689
+                pools[pool_name] = int(pool_id)
 690
+
 691
+        self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
 692
+                                                pools))
 693
+        return pools
 694
+
 695
+    def get_ceph_df(self, sentry_unit):
 696
+        """Return dict of ceph df json output, including ceph pool state.
 697
+
 698
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
 699
+        :returns: Dict of ceph df output
 700
+        """
 701
+        cmd = 'sudo ceph df --format=json'
 702
+        output, code = sentry_unit.run(cmd)
 703
+        if code != 0:
 704
+            msg = ('{} `{}` returned {} '
 705
+                   '{}'.format(sentry_unit.info['unit_name'],
 706
+                               cmd, code, output))
 707
+            amulet.raise_status(amulet.FAIL, msg=msg)
 708
+        return json.loads(output)
 709
+
 710
+    def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
 711
+        """Take a sample of attributes of a ceph pool, returning ceph
 712
+        pool name, object count and disk space used for the specified
 713
+        pool ID number.
 714
+
 715
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
 716
+        :param pool_id: Ceph pool ID
 717
+        :returns: List of pool name, object count, kb disk space used
 718
+        """
 719
+        df = self.get_ceph_df(sentry_unit)
 720
+        pool_name = df['pools'][pool_id]['name']
 721
+        obj_count = df['pools'][pool_id]['stats']['objects']
 722
+        kb_used = df['pools'][pool_id]['stats']['kb_used']
 723
+        self.log.debug('Ceph {} pool (ID {}): {} objects, '
 724
+                       '{} kb used'.format(pool_name, pool_id,
 725
+                                           obj_count, kb_used))
 726
+        return pool_name, obj_count, kb_used
 727
+
 728
+    def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
 729
+        """Validate ceph pool samples taken over time, such as pool
 730
+        object counts or pool kb used, before adding, after adding, and
 731
+        after deleting items which affect those pool attributes.  The
 732
+        2nd element is expected to be greater than the 1st; 3rd is expected
 733
+        to be less than the 2nd.
 734
+
 735
+        :param samples: List containing 3 data samples
 736
+        :param sample_type: String for logging and usage context
 737
+        :returns: None if successful, Failure message otherwise
 738
+        """
 739
+        original, created, deleted = range(3)
 740
+        if samples[created] <= samples[original] or \
 741
+                samples[deleted] >= samples[created]:
 742
+            return ('Ceph {} samples ({}) '
 743
+                    'unexpected.'.format(sample_type, samples))
 744
+        else:
 745
+            self.log.debug('Ceph {} samples (OK): '
 746
+                           '{}'.format(sample_type, samples))
 747
+            return None
 748
+
 749
+    # rabbitmq/amqp specific helpers:
 750
+
 751
+    def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
 752
+        """Wait for rmq units extended status to show cluster readiness,
 753
+        after an optional initial sleep period.  Initial sleep is likely
 754
+        necessary to be effective following a config change, as status
 755
+        message may not instantly update to non-ready."""
 756
+
 757
+        if init_sleep:
 758
+            time.sleep(init_sleep)
 759
+
 760
+        message = re.compile('^Unit is ready and clustered$')
 761
+        deployment._auto_wait_for_status(message=message,
 762
+                                         timeout=timeout,
 763
+                                         include_only=['rabbitmq-server'])
 764
+
 765
+    def add_rmq_test_user(self, sentry_units,
 766
+                          username="testuser1", password="changeme"):
 767
+        """Add a test user via the first rmq juju unit, check connection as
 768
+        the new user against all sentry units.
 769
+
 770
+        :param sentry_units: list of sentry unit pointers
 771
+        :param username: amqp user name, default to testuser1
 772
+        :param password: amqp user password
 773
+        :returns: None if successful.  Raise on error.
 774
+        """
 775
+        self.log.debug('Adding rmq user ({})...'.format(username))
 776
+
 777
+        # Check that user does not already exist
 778
+        cmd_user_list = 'rabbitmqctl list_users'
 779
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
 780
+        if username in output:
 781
+            self.log.warning('User ({}) already exists, returning '
 782
+                             'gracefully.'.format(username))
 783
+            return
 784
+
 785
+        perms = '".*" ".*" ".*"'
 786
+        cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
 787
+                'rabbitmqctl set_permissions {} {}'.format(username, perms)]
 788
+
 789
+        # Add user via first unit
 790
+        for cmd in cmds:
 791
+            output, _ = self.run_cmd_unit(sentry_units[0], cmd)
 792
+
 793
+        # Check connection against the other sentry_units
 794
+        self.log.debug('Checking user connect against units...')
 795
+        for sentry_unit in sentry_units:
 796
+            connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
 797
+                                                   username=username,
 798
+                                                   password=password)
 799
+            connection.close()
 800
+
 801
+    def delete_rmq_test_user(self, sentry_units, username="testuser1"):
 802
+        """Delete a rabbitmq user via the first rmq juju unit.
 803
+
 804
+        :param sentry_units: list of sentry unit pointers
 805
+        :param username: amqp user name, default to testuser1
 806
+        :param password: amqp user password
 807
+        :returns: None if successful or no such user.
 808
+        """
 809
+        self.log.debug('Deleting rmq user ({})...'.format(username))
 810
+
 811
+        # Check that the user exists
 812
+        cmd_user_list = 'rabbitmqctl list_users'
 813
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
 814
+
 815
+        if username not in output:
 816
+            self.log.warning('User ({}) does not exist, returning '
 817
+                             'gracefully.'.format(username))
 818
+            return
 819
+
 820
+        # Delete the user
 821
+        cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
 822
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
 823
+
 824
+    def get_rmq_cluster_status(self, sentry_unit):
 825
+        """Execute rabbitmq cluster status command on a unit and return
 826
+        the full output.
 827
+
 828
+        :param sentry_unit: sentry unit
 829
+        :returns: String containing console output of cluster status command
 830
+        """
 831
+        cmd = 'rabbitmqctl cluster_status'
 832
+        output, _ = self.run_cmd_unit(sentry_unit, cmd)
 833
+        self.log.debug('{} cluster_status:\n{}'.format(
 834
+            sentry_unit.info['unit_name'], output))
 835
+        return str(output)
 836
+
 837
+    def get_rmq_cluster_running_nodes(self, sentry_unit):
 838
+        """Parse rabbitmqctl cluster_status output string, return list of
 839
+        running rabbitmq cluster nodes.
 840
+
 841
+        :param sentry_unit: sentry unit
 842
+        :returns: List containing node names of running nodes
 843
+        """
 844
+        # NOTE(beisner): rabbitmqctl cluster_status output is not
 845
+        # json-parsable, do string chop foo, then json.loads that.
 846
+        str_stat = self.get_rmq_cluster_status(sentry_unit)
 847
+        if 'running_nodes' in str_stat:
 848
+            pos_start = str_stat.find("{running_nodes,") + 15
 849
+            pos_end = str_stat.find("]},", pos_start) + 1
 850
+            str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
 851
+            run_nodes = json.loads(str_run_nodes)
 852
+            return run_nodes
 853
+        else:
 854
+            return []
 855
+
 856
+    def validate_rmq_cluster_running_nodes(self, sentry_units):
 857
+        """Check that all rmq unit hostnames are represented in the
 858
+        cluster_status output of all units.
 859
+
 860
+        :param sentry_units: list of sentry unit pointers
 861
+            (all rabbitmq-server units in the deployment)
 862
+        :returns: None if successful, otherwise return error message
 863
+        """
 864
+        host_names = self.get_unit_hostnames(sentry_units)
 865
+        errors = []
 866
+
 867
+        # Query every unit for cluster_status running nodes
 868
+        for query_unit in sentry_units:
 869
+            query_unit_name = query_unit.info['unit_name']
 870
+            running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
 871
+
 872
+            # Confirm that every unit is represented in the queried unit's
 873
+            # cluster_status running nodes output.
 874
+            for validate_unit in sentry_units:
 875
+                val_host_name = host_names[validate_unit.info['unit_name']]
 876
+                val_node_name = 'rabbit@{}'.format(val_host_name)
 877
+
 878
+                if val_node_name not in running_nodes:
 879
+                    errors.append('Cluster member check failed on {}: {} not '
 880
+                                  'in {}\n'.format(query_unit_name,
 881
+                                                   val_node_name,
 882
+                                                   running_nodes))
 883
+        if errors:
 884
+            return ''.join(errors)
 885
+
 886
+    def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
 887
+        """Check a single juju rmq unit for ssl and port in the config file."""
 888
+        host = sentry_unit.info['public-address']
 889
+        unit_name = sentry_unit.info['unit_name']
 890
+
 891
+        conf_file = '/etc/rabbitmq/rabbitmq.config'
 892
+        conf_contents = str(self.file_contents_safe(sentry_unit,
 893
+                                                    conf_file, max_wait=16))
 894
+        # Checks
 895
+        conf_ssl = 'ssl' in conf_contents
 896
+        conf_port = str(port) in conf_contents
 897
+
 898
+        # Port explicitly checked in config
 899
+        if port and conf_port and conf_ssl:
 900
+            self.log.debug('SSL is enabled  @{}:{} '
 901
+                           '({})'.format(host, port, unit_name))
 902
+            return True
 903
+        elif port and not conf_port and conf_ssl:
 904
+            self.log.debug('SSL is enabled @{} but not on port {} '
 905
+                           '({})'.format(host, port, unit_name))
 906
+            return False
 907
+        # Port not checked (useful when checking that ssl is disabled)
 908
+        elif not port and conf_ssl:
 909
+            self.log.debug('SSL is enabled  @{}:{} '
 910
+                           '({})'.format(host, port, unit_name))
 911
+            return True
 912
+        elif not conf_ssl:
 913
+            self.log.debug('SSL not enabled @{}:{} '
 914
+                           '({})'.format(host, port, unit_name))
 915
+            return False
 916
+        else:
 917
+            msg = ('Unknown condition when checking SSL status @{}:{} '
 918
+                   '({})'.format(host, port, unit_name))
 919
+            amulet.raise_status(amulet.FAIL, msg)
 920
+
 921
+    def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
 922
+        """Check that ssl is enabled on rmq juju sentry units.
 923
+
 924
+        :param sentry_units: list of all rmq sentry units
 925
+        :param port: optional ssl port override to validate
 926
+        :returns: None if successful, otherwise return error message
 927
+        """
 928
+        for sentry_unit in sentry_units:
 929
+            if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
 930
+                return ('Unexpected condition:  ssl is disabled on unit '
 931
+                        '({})'.format(sentry_unit.info['unit_name']))
 932
+        return None
 933
+
 934
+    def validate_rmq_ssl_disabled_units(self, sentry_units):
 935
+        """Check that ssl is enabled on listed rmq juju sentry units.
 936
+
 937
+        :param sentry_units: list of all rmq sentry units
 938
+        :returns: True if successful.  Raise on error.
 939
+        """
 940
+        for sentry_unit in sentry_units:
 941
+            if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
 942
+                return ('Unexpected condition:  ssl is enabled on unit '
 943
+                        '({})'.format(sentry_unit.info['unit_name']))
 944
+        return None
 945
+
 946
+    def configure_rmq_ssl_on(self, sentry_units, deployment,
 947
+                             port=None, max_wait=60):
 948
+        """Turn ssl charm config option on, with optional non-default
 949
+        ssl port specification.  Confirm that it is enabled on every
 950
+        unit.
 951
+
 952
+        :param sentry_units: list of sentry units
 953
+        :param deployment: amulet deployment object pointer
 954
+        :param port: amqp port, use defaults if None
 955
+        :param max_wait: maximum time to wait in seconds to confirm
 956
+        :returns: None if successful.  Raise on error.
 957
+        """
 958
+        self.log.debug('Setting ssl charm config option:  on')
 959
+
 960
+        # Enable RMQ SSL
 961
+        config = {'ssl': 'on'}
 962
+        if port:
 963
+            config['ssl_port'] = port
 964
+
 965
+        deployment.d.configure('rabbitmq-server', config)
 966
+
 967
+        # Wait for unit status
 968
+        self.rmq_wait_for_cluster(deployment)
 969
+
 970
+        # Confirm
 971
+        tries = 0
 972
+        ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
 973
+        while ret and tries < (max_wait / 4):
 974
+            time.sleep(4)
 975
+            self.log.debug('Attempt {}: {}'.format(tries, ret))
 976
+            ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
 977
+            tries += 1
 978
+
 979
+        if ret:
 980
+            amulet.raise_status(amulet.FAIL, ret)
 981
+
 982
+    def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
 983
+        """Turn ssl charm config option off, confirm that it is disabled
 984
+        on every unit.
 985
+
 986
+        :param sentry_units: list of sentry units
 987
+        :param deployment: amulet deployment object pointer
 988
+        :param max_wait: maximum time to wait in seconds to confirm
 989
+        :returns: None if successful.  Raise on error.
 990
+        """
 991
+        self.log.debug('Setting ssl charm config option:  off')
 992
+
 993
+        # Disable RMQ SSL
 994
+        config = {'ssl': 'off'}
 995
+        deployment.d.configure('rabbitmq-server', config)
 996
+
 997
+        # Wait for unit status
 998
+        self.rmq_wait_for_cluster(deployment)
 999
+
1000
+        # Confirm
1001
+        tries = 0
1002
+        ret = self.validate_rmq_ssl_disabled_units(sentry_units)
1003
+        while ret and tries < (max_wait / 4):
1004
+            time.sleep(4)
1005
+            self.log.debug('Attempt {}: {}'.format(tries, ret))
1006
+            ret = self.validate_rmq_ssl_disabled_units(sentry_units)
1007
+            tries += 1
1008
+
1009
+        if ret:
1010
+            amulet.raise_status(amulet.FAIL, ret)
1011
+
1012
+    def connect_amqp_by_unit(self, sentry_unit, ssl=False,
1013
+                             port=None, fatal=True,
1014
+                             username="testuser1", password="changeme"):
1015
+        """Establish and return a pika amqp connection to the rabbitmq service
1016
+        running on a rmq juju unit.
1017
+
1018
+        :param sentry_unit: sentry unit pointer
1019
+        :param ssl: boolean, default to False
1020
+        :param port: amqp port, use defaults if None
1021
+        :param fatal: boolean, default to True (raises on connect error)
1022
+        :param username: amqp user name, default to testuser1
1023
+        :param password: amqp user password
1024
+        :returns: pika amqp connection pointer or None if failed and non-fatal
1025
+        """
1026
+        host = sentry_unit.info['public-address']
1027
+        unit_name = sentry_unit.info['unit_name']
1028
+
1029
+        # Default port logic if port is not specified
1030
+        if ssl and not port:
1031
+            port = 5671
1032
+        elif not ssl and not port:
1033
+            port = 5672
1034
+
1035
+        self.log.debug('Connecting to amqp on {}:{} ({}) as '
1036
+                       '{}...'.format(host, port, unit_name, username))
1037
+
1038
+        try:
1039
+            credentials = pika.PlainCredentials(username, password)
1040
+            parameters = pika.ConnectionParameters(host=host, port=port,
1041
+                                                   credentials=credentials,
1042
+                                                   ssl=ssl,
1043
+                                                   connection_attempts=3,
1044
+                                                   retry_delay=5,
1045
+                                                   socket_timeout=1)
1046
+            connection = pika.BlockingConnection(parameters)
1047
+            assert connection.is_open is True
1048
+            assert connection.is_closing is False
1049
+            self.log.debug('Connect OK')
1050
+            return connection
1051
+        except Exception as e:
1052
+            msg = ('amqp connection failed to {}:{} as '
1053
+                   '{} ({})'.format(host, port, username, str(e)))
1054
+            if fatal:
1055
+                amulet.raise_status(amulet.FAIL, msg)
1056
+            else:
1057
+                self.log.warn(msg)
1058
+                return None
1059
+
1060
+    def publish_amqp_message_by_unit(self, sentry_unit, message,
1061
+                                     queue="test", ssl=False,
1062
+                                     username="testuser1",
1063
+                                     password="changeme",
1064
+                                     port=None):
1065
+        """Publish an amqp message to a rmq juju unit.
1066
+
1067
+        :param sentry_unit: sentry unit pointer
1068
+        :param message: amqp message string
1069
+        :param queue: message queue, default to test
1070
+        :param username: amqp user name, default to testuser1
1071
+        :param password: amqp user password
1072
+        :param ssl: boolean, default to False
1073
+        :param port: amqp port, use defaults if None
1074
+        :returns: None.  Raises exception if publish failed.
1075
+        """
1076
+        self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
1077
+                                                                    message))
1078
+        connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
1079
+                                               port=port,
1080
+                                               username=username,
1081
+                                               password=password)
1082
+
1083
+        # NOTE(beisner): extra debug here re: pika hang potential:
1084
+        #   https://github.com/pika/pika/issues/297
1085
+        #   https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
1086
+        self.log.debug('Defining channel...')
1087
+        channel = connection.channel()
1088
+        self.log.debug('Declaring queue...')
1089
+        channel.queue_declare(queue=queue, auto_delete=False, durable=True)
1090
+        self.log.debug('Publishing message...')
1091
+        channel.basic_publish(exchange='', routing_key=queue, body=message)
1092
+        self.log.debug('Closing channel...')
1093
+        channel.close()
1094
+        self.log.debug('Closing connection...')
1095
+        connection.close()
1096
+
1097
+    def get_amqp_message_by_unit(self, sentry_unit, queue="test",
1098
+                                 username="testuser1",
1099
+                                 password="changeme",
1100
+                                 ssl=False, port=None):
1101
+        """Get an amqp message from a rmq juju unit.
1102
+
1103
+        :param sentry_unit: sentry unit pointer
1104
+        :param queue: message queue, default to test
1105
+        :param username: amqp user name, default to testuser1
1106
+        :param password: amqp user password
1107
+        :param ssl: boolean, default to False
1108
+        :param port: amqp port, use defaults if None
1109
+        :returns: amqp message body as string.  Raise if get fails.
1110
+        """
1111
+        connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
1112
+                                               port=port,
1113
+                                               username=username,
1114
+                                               password=password)
1115
+        channel = connection.channel()
1116
+        method_frame, _, body = channel.basic_get(queue)
1117
+
1118
+        if method_frame:
1119
+            self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
1120
+                                                                         body))
1121
+            channel.basic_ack(method_frame.delivery_tag)
1122
+            channel.close()
1123
+            connection.close()
1124
+            return body
1125
+        else:
1126
+            msg = 'No message retrieved.'
1127
+            amulet.raise_status(amulet.FAIL, msg)
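
For orientation, the sketch below shows how a charm test might drive the helpers defined in this file once an amulet deployment is stood up. It is a minimal, hypothetical example only: the import path, deployment topology, sentry lookup and credentials are assumptions for illustration and are not taken from this charm.

# Hypothetical usage sketch of OpenStackAmuletUtils; the topology, import
# path and credentials below are illustrative assumptions, not charm code.
import amulet

from charmhelpers.contrib.openstack.amulet.utils import (
    OpenStackAmuletUtils,
    DEBUG,
)

u = OpenStackAmuletUtils(DEBUG)

d = amulet.Deployment(series='xenial')
d.add('keystone')
d.add('cinder')
d.setup(timeout=900)
keystone_sentry = d.sentry['keystone'][0]

# Authenticate with keystone and cinder using the helpers above.
keystone = u.authenticate_keystone_admin(keystone_sentry, user='admin',
                                         password='openstack', tenant='admin')
cinder = u.authenticate_cinder_admin(keystone_sentry, username='admin',
                                     password='openstack', tenant='admin')

# Create a 1 GB volume, wait for it to reach 'available', then delete it.
vol = u.create_cinder_volume(cinder, vol_name='demo-vol', vol_size=1)
assert u.delete_resource(cinder.volumes, vol.id, msg='cinder volume')

In this charm's tests, a call sequence of this kind would normally live in basic_deployment.py, which the gate script below imports.
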
Back to file index

tests/gate-basic-xenial-mitaka

 1
--- 
 2
+++ tests/gate-basic-xenial-mitaka
 3
@@ -0,0 +1,23 @@
 4
+#!/usr/bin/env python
 5
+#
 6
+# Copyright 2016 Canonical Ltd
 7
+#
 8
+# Licensed under the Apache License, Version 2.0 (the "License");
 9
+# you may not use this file except in compliance with the License.
10
+# You may obtain a copy of the License at
11
+#
12
+#  http://www.apache.org/licenses/LICENSE-2.0
13
+#
14
+# Unless required by applicable law or agreed to in writing, software
15
+# distributed under the License is distributed on an "AS IS" BASIS,
16
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
17
+# See the License for the specific language governing permissions and
18
+# limitations under the License.
19
+
20
+"""Amulet tests on a basic cinder-ceph deployment on xenial-mitaka."""
21
+
22
+from basic_deployment import Cinderds8kBasicDeployment
23
+
24
+if __name__ == '__main__':
25
+    deployment = Cinderds8kBasicDeployment(series='xenial')
26
+    deployment.run_tests()
Back to file index

tests/local.yaml

 1
--- 
 2
+++ tests/local.yaml
 3
@@ -0,0 +1,7 @@
 4
+ibm-repo:
 5
+    ds8k_config_sanip: "1.1.1.1"
 6
+    ds8k_config_sanlogin: "root"
 7
+    ds8k_config_sanpassword: "root123"
 8
+    volume_backend_name: "sar.ds8k.xib"
 9
+    volume-driver: "cinder.volume.drivers.ibm.xiv_ds8k.XIVDS8KDriver"
10
+    san_clustername: "P5"
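
The options above read like charm config overrides for the DS8K backend. As a rough sketch of how a test could feed them into the deployment (the application name, file location and configure() call here are assumptions for illustration, not something this charm defines):

# Hypothetical sketch: load tests/local.yaml and apply the 'ibm-repo' block
# as charm config.  The application name and call pattern are assumptions.
import os

import amulet
import yaml

d = amulet.Deployment(series='xenial')
d.add('ibm-cinder-ds8k')

local_yaml = os.path.join(os.path.dirname(__file__), 'local.yaml')
with open(local_yaml) as f:
    overrides = yaml.safe_load(f) or {}

# The values in this file are grouped under the top-level 'ibm-repo' key.
d.configure('ibm-cinder-ds8k', overrides.get('ibm-repo', {}))
d.setup(timeout=900)
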
Back to file index

tox.ini

 1
--- 
 2
+++ tox.ini
 3
@@ -0,0 +1,12 @@
 4
+[tox]
 5
+skipsdist=True
 6
+envlist = py34, py35
 7
+skip_missing_interpreters = True
 8
+
 9
+[testenv]
10
+commands = py.test -v
11
+deps =
12
+    -r{toxinidir}/requirements.txt
13
+
14
+[flake8]
15
+exclude=docs