~ibmcharmers/ibm-spectrum-scale-client-6

Owner: shilkaul
Status: Needs Fixing
Vote: -1 (+2 needed for approval)

CPP?: No
OIL?: No

This charm is for the IBM Spectrum Scale Client. To test this charm, you also need the IBM Spectrum Scale Manager charm in order to create a Spectrum Scale cluster.

Its source code can be found in the repository below:
Repo: https://code.launchpad.net/~ibmcharmers/ibmlayers/layer-ibm-spectrum-scale-client


Tests

Substrate Status Results Last Updated
gce RETRY 19 days ago
lxc RETRY 19 days ago
aws RETRY 19 days ago

Voted: +0
shilkaul wrote 2 months ago
The duplicate entry for the subordinate key has been removed from the metadata file.
The download-zip link is not working because this charm contains terms; there is an existing issue with downloading charms that have terms from the charm store.
Voted: +0
petevg wrote 2 months ago
shilkaul: thank you for your work on this charm. The subordinate key issue is addressed.

I attempted to manually deploy this to local lxd containers, but the charms wound up in a broken state. Here are the commands I ran to install:

cd /home/petevg/Code/charms/builds/ibm-spectrum-scale-client
juju deploy cs:ubuntu-8
juju deploy . --resource ibm_spectrum_scale_installer_client=~/Downloads/SpectrumScaleStd422LXx86.tar.gz --series=xenial
juju add-relation ubuntu ibm-spectrum-scale-client
cd /home/petevg/Code/charms/builds/ibm-spectrum-scale-manager/
juju deploy . --resource ibm_spectrum_scale_installer_manager=~/Downloads/SpectrumScaleStd422LXx86.tar.gz --series=xenial
juju add-relation ibm-spectrum-scale-client ibm-spectrum-scale-manager
cd ../ibm-spectrum-scale-client/

And here are my juju logs: http://paste.ubuntu.com/24328202/

Is it possible that the SpectrumScaleStd...tar.gz file in brickftp is broken?
Voted: +0
shilkaul wrote 2 months ago
Hi,

There was an issue with the Spectrum Scale package uploaded to the brickftp site. I have uploaded the package again, tested with the same package from the brickftp site, and it is working fine.
Due to the GPFS kernel module, this charm will not work in an LXC/LXD container environment. We might run into issues during installation itself if we test this in an LXC/LXD container.

Thanks and Regards,
Shilpa
Voted: +0
petevg wrote 2 months ago
Hi,

Thank you for updating the files on the brickftp site! I was able to get ibm-spectrum-scale-client to deploy on AWS machines with the updated resource.

Unfortunately, when I try to validate that the installation worked by following the "Installation Verification" instructions in the README, I run into an error. The mmlscluster command gives the following output when I juju ssh into ubuntu/0:

```
root@ip-172-31-17-178:/usr/lpp/mmfs/bin# ./mmlscluster
mmlscluster: This node does not belong to a GPFS cluster.
mmlscluster: Command failed. Examine previous error messages to determine cause.
```

Note that the command works when run on the manager machines. Here is the output on one of the managers:

```
root@ip-172-31-50-150:/usr/lpp/mmfs/bin# ./mmlscluster

GPFS cluster information
========================
GPFS cluster name: spectrum_scale_cluster.ip-172-31-50-150
GPFS cluster id: 14880054459231082868
GPFS UID domain: spectrum_scale_cluster.ip-172-31-50-150
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
Repository type: CCR

Node Daemon node name IP address Admin node name Designation
----------------------------------------------------------------------
1 ip-172-31-50-150 172.31.50.150 ip-172-31-50-150 quorum-manager
2 ip-172-31-26-193 172.31.26.193 ip-172-31-26-193 quorum-manager
```

For reference, here is the relevant part of my bash history when deploying the bundle:

```
2018 juju deploy cs:trusty/ubuntu
2020 cd ../ibm-spectrum-scale-manager/
2021 juju deploy . --resource ibm_spectrum_scale_installer_manager=~/Downloads/SpectrumScaleStd422LXx86.tar.gz --storage disks=ebs,1G
2022 cd ../ibm-spectrum-scale-client/
2023 juju deploy . --resource ibm_spectrum_scale_installer_client=~/Downloads/SpectrumScaleStd422LXx86.tar.gz
2024 juju add-relation ubuntu ibm-spectrum-scale-client
2025 juju add-relation ibm-spectrum-scale-client ibm-spectrum-scale-manager
```

(I then waited for the model to settle before attempting to ssh in and run the commands suggested in the README.)
Voted: -1
petevg wrote 2 months ago
Just a quick follow-up: it appears that there are some cases where the ibm-spectrum-scale-client code swallows exceptions, just like the manager code does. Sometimes this is okay, as the functions return "None" to a caller that is expecting a truthy or falsy value. See the "check_node" function, for example.

In the case of functions like "build_modules", however, the Exceptions should be re-raised, because we really want to know whether the call to build_modules succeeded or not.

It's possible that one of these functions failed silently during my test run above, leading me to see a green juju status, even though there was an error on the client machine.
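(To illustrate the pattern being discussed, here is a minimal, hypothetical sketch — the actual check_node/build_modules bodies are not part of this diff — showing the difference between a predicate that may legitimately return a falsy value and an action whose failure should be re-raised so the unit ends up in an error state rather than silently showing green.)

```
import subprocess


def check_node(hostname):
    # Predicate: swallowing the error and returning False is acceptable here,
    # because callers only ever test the result as truthy/falsy.
    try:
        out = subprocess.check_output(['/usr/lpp/mmfs/bin/mmlscluster'])
        return hostname.encode() in out
    except subprocess.CalledProcessError:
        return False


def build_modules():
    # Action: if building the GPFS portability layer fails, the charm must not
    # carry on as if it succeeded, so log and re-raise instead of swallowing.
    # (mmbuildgpl as the build command is an assumption for illustration.)
    try:
        subprocess.check_call(['/usr/lpp/mmfs/bin/mmbuildgpl'])
    except subprocess.CalledProcessError:
        raise
```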
Voted: +0
shilkaul wrote 2 months ago
Hi Pete,

Can you please provide me with the logs for the client charm and the manager charm?
Voted: +0
shilkaul wrote 2 months ago
I deployed the charms again from the charm store in an AWS environment and was able to verify the mmlscluster command on both the manager and client machines.
If you can share the logs, it would help me debug why the mmlscluster command did not work on the client machine.

Thanks and Regards,
Shilpa
Voted: +0
petevg wrote 2 months ago
Hi shilkaul,

Here are logs from another run, exhibiting the same problem: http://paste.ubuntu.com/24375609/

Regards,
~ PeteVG



Policy Checklist

Description Unreviewed Pass Fail

General

Must verify that any software installed or utilized is verified as coming from the intended source.
  • Any software installed from the Ubuntu or CentOS default archives satisfies this due to the apt and yum sources including cryptographic signing information.
  • Third party repositories must be listed as a configuration option that can be overridden by the user and not hard coded in the charm itself.
  • Launchpad PPAs are acceptable as the add-apt-repository command retrieves the keys securely.
  • Other third party repositories are acceptable if the signing key is embedded in the charm.
Must provide a means to protect users from known security vulnerabilities in a way consistent with best practices as defined by either operating system policies or upstream documentation. ajkavanagh
Basically, this means there must be instructions on how to apply updates if you use software not from distribution channels.
Must have hooks that are idempotent.
Should be built using charm layers. ajkavanagh
Should use Juju Resources to deliver required payloads. ajkavanagh

Testing and Quality

charm proof must pass without errors or warnings.
Must include passing unit, functional, or integration tests. ajkavanagh
Tests must exercise all relations. ajkavanagh
Tests must exercise config. ajkavanagh
set-config, unset-config, and re-set must be tested as a minimum
Must not use anything infrastructure-provider specific (i.e. querying EC2 metadata service).
Must be self contained unless the charm is a proxy for an existing cloud service, e.g. ec2-elb charm.
Must not use symlinks. ajkavanagh
Bundles must only use promulgated charms, they cannot reference charms in personal namespaces.
Must call Juju hook tools (relation-*, unit-*, config-*, etc) without a hard coded path. ajkavanagh
Should include a tests.yaml for all integration tests. ajkavanagh

Metadata

Must include a full description of what the software does. ajkavanagh
Must include a maintainer email address for a team or individual who will be responsive to contact. ajkavanagh
Must include a license. Call the file 'copyright' and make sure all files' licenses are specified clearly. ajkavanagh
Must be under a Free license. ajkavanagh
Must have a well documented and valid README.md. ajkavanagh
Must describe the service. ajkavanagh
Must describe how it interacts with other services, if applicable. ajkavanagh
Must document the interfaces. ajkavanagh
Must show how to deploy the charm. ajkavanagh
Must define external dependencies, if applicable. ajkavanagh
Should link to a recommend production usage bundle and recommended configuration if this differs from the default. ajkavanagh
Should reference and link to upstream documentation and best practices. ajkavanagh

Security

Must not run any network services using default passwords. ajkavanagh
Must verify and validate any external payload
  • Known and understood packaging systems that verify packages like apt, pip, and yum are ok.
  • wget | sh style is not ok.
Should make use of whatever Mandatory Access Control system is provided by the distribution.
Should avoid running services as root.


Source Diff

Files changed 74

Inline diff comments 0


Back to file index

Makefile

---
+++ Makefile
@@ -0,0 +1,24 @@
+#!/usr/bin/make
+
+all: lint unit_test
+
+
+.PHONY: clean
+clean:
+	@rm -rf .tox
+
+.PHONY: apt_prereqs
+apt_prereqs:
+	@# Need tox, but don't install the apt version unless we have to (don't want to conflict with pip)
+	@which tox >/dev/null || (sudo apt-get install -y python-pip && sudo pip install tox)
+
+.PHONY: lint
+lint: apt_prereqs
+	@tox --notest
+	@PATH=.tox/py34/bin:.tox/py35/bin flake8 $(wildcard hooks reactive lib unit_tests tests)
+	@charm proof
+
+.PHONY: unit_test
+unit_test: apt_prereqs
+	@echo Starting tests...
+	tox
Back to file index

README.md

---
+++ README.md
@@ -0,0 +1,158 @@
+Charm for IBM Spectrum Scale (GPFS) Client V 4.2.2
+
+
+Overview
+-----
+
+IBM Spectrum Scale Client
+
+IBM Spectrum Scale (GPFS) provides simplified data management and integrated information lifecycle tools capable of managing petabytes of data and billions
+of files, in order to arrest the growing cost of managing ever-growing amounts of data.
+
+A `client node` is any server that has the Spectrum Scale product installed but does not support directly attached disks. A client node is also not part of the node pool from which file system managers and token managers can be selected.
+
+For details on Spectrum Scale, as well as information on purchasing, please visit the
+[Product Page] [product-page] and the [Passport Advantage Site] [passport-spectrum-scale]
+
+***Note that due to the GPFS kernel module, this charm will not work in a LXC/LXD container environment.***
+
+
+Prerequisites
+-------------
+
+This charm makes use of resources, a feature only available in Juju 2.0. During deploy, you will need to specify the installable package(s)
+required by this charm. Download your licensed `IBM Spectrum Scale Standard 4.2.2` version for Ubuntu. To acquire and download IBM Spectrum Scale, follow the instructions available at the [Product Page] [product-page].
+
+This charm will deploy only the Standard edition of IBM Spectrum Scale.
+
+For `x86_64 Ubuntu`, the package and part number is:
+
+    	IBM Spectrum Scale Standard 4.2.2 Linux for x86Series English (CNEP7EN)
+
+For `Power Ubuntu`, the package and part number is:
+
+        IBM Spectrum Scale Standard 4.2.2 Linux PWR8 LE English (CNEP8EN)
+
+
+
+Usage
+------
+To use this charm, you must agree to the Terms of Use. You can view the full license for IBM Spectrum Scale by visiting
+the [Software license agreements search website][license-info]. Search for `"IBM Spectrum Scale, V4.2.2"` and choose the license that applies to the version you are using.
+
+
+
+Deploy
+------
+
+Run the following commands to deploy this charm:
+
+***As the `Spectrum Scale Client` is a subordinate charm, you need to deploy the principal charm first, where you want your client nodes to be present. For a simple deployment, you can deploy on top of the Ubuntu charm as shown below:***
+
+    juju deploy cs:ubuntu-10
+    juju deploy ibm-spectrum-scale-client --resource     
+    ibm_spectrum_scale_installer_client=</path/to/installer.tar.gz> 
+    juju add-relation ubuntu ibm-spectrum-scale-client
+    juju add-relation ibm-spectrum-scale-client ibm-spectrum-scale-manager
+
+
+**Note**: This charm requires acceptance of Terms of Use. When deploying from the Charm Store, these terms will be presented to you for your consideration.
+To accept the terms:
+
+    juju agree ibm-spectrum-scale/1
+
+Only after you have agreed to the Terms will the IBM Spectrum Scale Client charm be deployed.
+Once IBM Spectrum Scale Client is deployed successfully, a node will be added to the Spectrum Scale cluster with the designated license `client` and node designation `non-quorum`.
+
+
+Installation Verification
+-------------------------
+To verify that the client node was added successfully, run the commands below:
+
+1) Go to the machine where the Spectrum Scale client is installed.
+
+2) Go to the Spectrum Scale bin folder path: `/usr/lpp/mmfs/bin`
+
+3) The commands can be executed as the root user only, so run `sudo su` to run the commands as root.
+
+4) Run the `mmlscluster` command, which will display cluster information, or the `mmgetstate` command to see the status of the client node.
+
+5) If you have created the filesystem on the manager nodes, you can issue `df -h` to see whether the GPFS filesystem is mounted on the client nodes.
+
+
+
+### Upgrade
+
+Once deployed, users can install fixpacks by upgrading the charm:
+
+    juju attach ibm-spectrum-scale-client ibm_spectrum_scale_client_fixpack=</path/to/fixpack.tar.gz>
+Provide the fixpack in `*.tar.gz` format.
+If the Spectrum Scale manager units are updated, please update the Spectrum Scale client as well. Both `Manager` and `Client` nodes should be at the same Spectrum Scale version.
+
+
+
+### Removing Relation
+The IBM Spectrum Scale Client charm is related to IBM Spectrum Scale Manager; to remove the relation between them, run:
+
+    juju remove-relation ibm-spectrum-scale-client ibm-spectrum-scale-manager
+
+This will remove the client node from the Spectrum Scale cluster. The GPFS file system will be unmounted before deleting the client node.
+
+
+### Relation with IBM Cinder/Glance SpectrumScale charm
+In OpenStack, you can have Spectrum Scale as one of the storage backends for Cinder or Glance. The Spectrum Scale Client charm can be deployed on
+nova-compute nodes and related to the IBM Cinder/Glance SpectrumScale charm (which is the GPFS driver). This allows a single Spectrum Scale storage cluster to
+be associated with Cinder or Glance or both, potentially alongside other storage backends from other vendors.
+For more details, please refer to the `IBM Cinder SpectrumScale` charm:
+[IBM Cinder SpectrumScale Charm] [cinder-spectrum]
+[IBM Glance SpectrumScale Charm] [glance-spectrum]
+
+
+
+
+IBM Spectrum Scale Information
+----------------
+(1) General Information
+
+Information on IBM Spectrum Scale is available at the [Product Page] [product-page]
+
+(2) Download Information
+
+Information on procuring the IBM Spectrum Scale product is available at the
+[Passport Advantage Site][passport-spectrum-scale]
+
+(3) Spectrum Scale Infocenter
+
+To learn more about how Spectrum Scale works, refer to the
+[IBM Spectrum Scale Knowledge Center][spectrum-scale-knowledgecenter]
+
+(4) License
+
+License information for IBM Spectrum Scale can be viewed at the
+[Software license agreements search website][license-info]
+
+(5) Contact Information
+
+For issues with this charm, please contact IBM Juju Support Team <jujusupp@us.ibm.com>
+
+(6) Known Limitations
+
+This charm makes use of Juju features that are only available in version `2.0` or
+greater.
+
+
+<!-- Links -->
+
+[product-page]: http://www-03.ibm.com/software/products/en/software
+
+[passport-spectrum-scale]: http://www-01.ibm.com/software/passportadvantage/
+
+[gpfs-info]: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General+Parallel+File+System+%28GPFS%29/page/Linux
+
+[license-info]: http://www-03.ibm.com/software/sla/sladb.nsf/search
+
+[spectrum-scale-knowledgecenter]: https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/ibmspectrumscale42_welcome.html
+
+[cinder-spectrum]: https://jujucharms.com/u/ibmcharmers/ibm-cinder-spectrumscale/4
+
+[glance-spectrum]: https://jujucharms.com/u/ibmcharmers/ibm-glance-spectrumscale/0
Back to file index

bin/layer_option

---
+++ bin/layer_option
@@ -0,0 +1,24 @@
+#!/usr/bin/env python3
+
+import sys
+sys.path.append('lib')
+
+import argparse
+from charms.layer import options
+
+
+parser = argparse.ArgumentParser(description='Access layer options.')
+parser.add_argument('section',
+                    help='the section, or layer, the option is from')
+parser.add_argument('option',
+                    help='the option to access')
+
+args = parser.parse_args()
+value = options(args.section).get(args.option, '')
+if isinstance(value, bool):
+    sys.exit(0 if value else 1)
+elif isinstance(value, list):
+    for val in value:
+        print(val)
+else:
+    print(value)
Back to file index

copyright

---
+++ copyright
@@ -0,0 +1,13 @@
+Copyright 2016 IBM Corporation
+
+This Charm is licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
Back to file index

hooks/client-relation-broken

---
+++ hooks/client-relation-broken
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/client-relation-changed

---
+++ hooks/client-relation-changed
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/client-relation-departed

---
+++ hooks/client-relation-departed
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/client-relation-joined

---
+++ hooks/client-relation-joined
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/config-changed

---
+++ hooks/config-changed
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/gpfsmanager-relation-broken

---
+++ hooks/gpfsmanager-relation-broken
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/gpfsmanager-relation-changed

---
+++ hooks/gpfsmanager-relation-changed
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/gpfsmanager-relation-departed

---
+++ hooks/gpfsmanager-relation-departed
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/gpfsmanager-relation-joined

---
+++ hooks/gpfsmanager-relation-joined
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/hook.template

---
+++ hooks/hook.template
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/install

---
+++ hooks/install
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/leader-elected

---
+++ hooks/leader-elected
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/leader-settings-changed

---
+++ hooks/leader-settings-changed
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/quorum-relation-broken

---
+++ hooks/quorum-relation-broken
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/quorum-relation-changed

---
+++ hooks/quorum-relation-changed
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/quorum-relation-departed

---
+++ hooks/quorum-relation-departed
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/quorum-relation-joined

---
+++ hooks/quorum-relation-joined
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/relations/gpfs/README.md

---
+++ hooks/relations/gpfs/README.md
@@ -0,0 +1,61 @@
+Overview
+--------
+
+This interface layer handles the communication between IBM Spectrum Scale Manager and IBM Spectrum Scale Client. The provider end of this interface provides the Spectrum Scale Manager service (Spectrum Scale cluster). The consumer part requires the existence of a provider to function.
+This interface also handles peer communication among Spectrum Scale Manager and Client units.
+
+
+Usage
+------
+##### Provides
+
+This interface layer will set the following states, as appropriate:
+
+  - `{relation_name}.joined` : The relation is established between the Spectrum Scale manager and clients. At this point, the provider should broadcast configuration details using:
+      * `set_hostname(manager_hostname)`
+      * `set_ssh_key(privkey, pubkey)`
+      * `set_notify_client(notify_client)`
+
+  - `{relation_name}.ready` : The manager has provided its connection information and is ready to accept requests from the clients. The connection information from the client can be accessed via the methods below:
+      - `get_hostnames() and get_ips()` - These two methods provide the Hostname and Private IP Address of the Client.
+      - `get_privclient_keys() and get_pubclient_keys()` - These two methods provide the Private and Public Keys of the Client.
+
+
+##### Requires
+
+This interface layer will set the following states, as appropriate:
+  - `{relation_name}.joined` : The relation is established between the Spectrum Scale manager and clients. At this point, the charm waits for the Manager configuration details.
+
+  - `{relation_name}.ready` : The Spectrum Scale manager is ready for the clients. The client charm can access the configuration details using the methods below:
+
+      - `get_hostnames() and get_ips()` - Hostname and Private IP Address of the Spectrum Scale Manager.
+      - `get_priv_keys() and get_pub_keys()` - Private and Public Keys of the Spectrum Scale Manager.
+
+      It also provides hostname and public key information to the Provider, i.e. the Spectrum Scale Manager, using the following methods:
+     - `set_hostname(hostname_client)` - Provides the hostname of the client to the Manager.
+     - `set_ssh_key(pubkey)` - Provides the public key of the client to the Manager.
+
+  - `{relation_name}.client-ready` : Notifies the client that it has been added to the cluster.
+
+
+
+##### Peers
+This interface allows the peers of the Spectrum Scale Manager/Client deployment to be aware of each other. This interface layer will set the following states, as appropriate:
+
+  - `{relation_name}.joined` - A new peer in the Spectrum Scale manager/client service has joined.
+
+  - `{relation_name}.available` - Returns a list of units containing the Hostname/IP Address and SSH key information for cluster members.
+This information can be accessed via the methods below:
+
+      - `get_unitips` and `get_hostname_peers` - Provide the Private IP Address and Hostname of the peer units.
+      - `get_pub_keys` - Public key of peer units.
+      - `get_storagedisks_peers` - List of storage locations for peer units.
+      - `gpfsclient_managerpeer_services` - List of peer unit names.
+
+  - `{relation_name}.cluster.ready` - Notifies the manager peers that the cluster is ready.
+
+
+  - `{relation_name}.departed` - A peer in the Spectrum Scale Manager/Client service has departed.
+
+
+
Back to file index
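(Editor's note: as a hedged illustration only — not part of this diff — a requires-side reactive handler might consume the interface documented above roughly as follows. The relation name `gpfsmanager` and the key-file location are assumptions based on the hook filenames in this charm.)

```
# reactive handler sketch (illustrative, not from the charm source)
import socket

from charms.reactive import when
from charmhelpers.core import hookenv


@when('gpfsmanager.ready')
def configure_client(manager):
    # 'manager' is the gpfsRequires instance defined in
    # hooks/relations/gpfs/requires.py; these getters return one entry
    # per related manager unit.
    hosts = manager.get_hostnames()
    ips = manager.get_ips()
    hookenv.log('Spectrum Scale managers: {} {}'.format(hosts, ips))

    # Hand our own hostname and public key back to the manager so it can
    # add this node to the cluster.
    manager.set_hostname(socket.gethostname())
    with open('/root/.ssh/id_rsa.pub') as f:  # assumed key location
        manager.set_ssh_key(f.read())
```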

hooks/relations/gpfs/interface.yaml

---
+++ hooks/relations/gpfs/interface.yaml
@@ -0,0 +1,7 @@
+name: gpfs
+summary: |
+   Basic gpfs interface required for adding gpfs clients to the existing
+   Spectrum Scale cluster and peer units of gpfs manager/clients.
+version: 1
+maintainer: IBM Juju Support Team <jujusupp@us.ibm.com>
+
Back to file index

hooks/relations/gpfs/peers.py

---
+++ hooks/relations/gpfs/peers.py
@@ -0,0 +1,135 @@
+from charms.reactive import RelationBase, hook, scopes
+
+
+class DistPeers(RelationBase):
+    scope = scopes.UNIT
+    peer_gpfs_ready = "Nil"
+
+    @hook('{peers:gpfs}-relation-joined')
+    def joined(self):
+        conv = self.conversation()
+        conv.remove_state('{relation_name}.departing')
+        conv.set_state('{relation_name}.connected')
+
+    @hook('{peers:gpfs}-relation-changed')
+    def changed(self):
+        conv = self.conversation()
+        conv.remove_state('{relation_name}.departing')
+        if ((str(conv.get_remote('manager_hostname')) != "None") and
+           (str(conv.get_remote('pubkey')) != "None")):
+            conv.set_state('{relation_name}.available')
+
+        if (str(conv.get_remote('peer_gpfs_ready')) == 'ClusterReady'):
+            conv.set_state('{relation_name}.cluster.ready')
+
+    @hook('{peers:gpfs}-relation-departed')
+    def departed(self):
+        conv = self.conversation()
+        conv.remove_state('{relation_name}.cluster.ready')
+        conv.remove_state('{relation_name}.connected')
+        conv.remove_state('{relation_name}.available')
+        conv.set_state('{relation_name}.departing')
+
+    def dismiss_departed(self):
+        """
+        Remove the 'departing' state so we don't fall in here again
+        (until another peer leaves).
+        """
+
+        for conv in self.conversations():
+            conv.remove_state('{relation_name}.departing')
+
+    def get_unitips(self):
+        """
+        Returns peer units' private IP address info
+        :returns: List of peer units private ip addresses
+        """
+
+        ips = []
+        for conv in self.conversations():
+            ips.append(conv.get_remote('private-address'))
+        return ips
+
+    def set_hostname_peer(self, manager_hostname):
+        """
+        Forward 'hostname' info to its peers.
+        :param manager_hostname: string - Hostname of the spectrum
+                                          scale peer units.
+        :returns: None
+        """
+
+        for conv in self.conversations():
+            conv.set_remote('manager_hostname', manager_hostname)
+
+    def set_ssh_key(self, pubkey):
+        """
+        Forward a dict of values containing Public SSH keys.
+        :param pubkey: string - Public SSH key
+        :returns: None
+        """
+
+        for conv in self.conversations():
+            conv.set_remote(data={
+                            'pubkey':  pubkey,
+                            })
+
+    def set_storagedisk_peer(self, devices_list):
+        """
+        Forward a list of Spectrum Scale Manager device/disk locations to
+        its peers.
+        :param devices_list: list - List of device location of the
+                                    mgr peer units.
+        :returns: None
+        """
+
+        for conv in self.conversations():
+            conv.set_remote('devices_list', devices_list)
+
+    def notify_peerready(self, peer_gpfs_ready):
+        """
+        Forward readiness flag status to its mgr peers, that cluster
+        is created successfully
+        :param peer_gpfs_ready: string - Readiness flag status value
+        :returns: None
+        """
+
+        for conv in self.conversations():
+            conv.set_remote('peer_gpfs_ready', peer_gpfs_ready)
+
+    def get_hostname_peers(self):
+        """
+        Returns a list of peer units hostname info
+        :returns: List of hostnames
+        """
+
+        hosts = []
+        for conv in self.conversations():
+            hosts.append(conv.get_remote('manager_hostname'))
+        return hosts
+
+    def get_pub_keys(self):
+        """
+        Returns a list of peer units public ssh keys info
+        :returns: List of public ssh keys
+        """
+
+        pub_ssh_keys = []
+        for conv in self.conversations():
+            pub_ssh_keys.append(conv.get_remote('pubkey'))
+        return pub_ssh_keys
+
+    def get_storagedisks_peers(self):
+        devices_list_peers = []
+        for conv in self.conversations():
+            devices_list_peers.append(conv.get_remote('devices_list'))
+        return list(devices_list_peers)
+
+    def gpfsclient_managerpeer_services(self):
+        """
+        Return a list of unit names.
+
+        """
+        units = []
+        for conv in self.conversations():
+            units.append(conv.scope)
+        return units
Back to file index
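(Editor's note: a minimal, hypothetical sketch of how the peer class above might be driven from a reactive handler; the relation name `quorum` and the key-file location are assumptions taken from the quorum-relation-* hook filenames.)

```
# peer-relation handler sketch (illustrative, not from the charm source)
import socket

from charms.reactive import when
from charmhelpers.core import hookenv


@when('quorum.available')
def share_with_peers(peers):
    # 'peers' is the DistPeers instance from hooks/relations/gpfs/peers.py.
    # Broadcast this unit's hostname and public key to every peer unit...
    peers.set_hostname_peer(socket.gethostname())
    with open('/root/.ssh/id_rsa.pub') as f:  # assumed key location
        peers.set_ssh_key(f.read())

    # ...and read back what the peers have published so far.
    hookenv.log('peer hostnames: {}'.format(peers.get_hostname_peers()))
    hookenv.log('peer addresses: {}'.format(peers.get_unitips()))
```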

hooks/relations/gpfs/provides.py

---
+++ hooks/relations/gpfs/provides.py
@@ -0,0 +1,119 @@
+from charms.reactive import hook
+from charms.reactive import RelationBase
+from charms.reactive import scopes
+
+
+class gpfsProvides(RelationBase):
+    # Every unit connecting will get the same information
+    scope = scopes.UNIT
+
+    # Use some template magic to declare our relation(s)
+    @hook('{provides:gpfs}-relation-joined')
+    def joined(self):
+        conversation = self.conversation()
+        conversation.remove_state('{relation_name}.departing')
+        conversation.set_state('{relation_name}.connected')
+
+    @hook('{provides:gpfs}-relation-changed')
+    def changed(self):
+        conversation = self.conversation()
+        conversation.remove_state('{relation_name}.departing')
+        if (str(conversation.get_remote('hostname_client')) != "None"):
+            conversation.set_state('{relation_name}.ready')
+
+    @hook('{provides:gpfs}-relation-departed')
+    def departed(self):
+        conversation = self.conversation()
+        conversation.remove_state('{relation_name}.ready')
+        conversation.remove_state('{relation_name}.connected')
+        conversation.set_state('{relation_name}.departing')
+
+    def set_hostname(self, manager_hostname):
+        """
+        Forward Spectrum Scale Manager Hostname to client.
+        :param manager_hostname: string - Hostname of the spectrum
+                                          scale manager node
+        :returns: None
+        """
+
+        for conv in self.conversations():
+            conv.set_remote('manager_hostname', manager_hostname)
+
+    def set_ssh_key(self, privkey, pubkey):
+        """
+        Forward a dict of values containing Private and Public SSH keys
+        to client.
+        :param privkey: string - Private SSH key
+        :param pubkey: string - Public SSH key
+        :returns: None
+        """
+
+        for conv in self.conversations():
+            conv.set_remote(data={
+                            'privkey': privkey,
+                            'pubkey':  pubkey,
+                            })
+
+    def set_notify_client(self, notify_client):
+        """
+        Forward readiness flag status to client, that client is added
+        successfully to the cluster
+        :param notify_client: string - Readiness flag status value
+        :returns: None
+        """
+
+        for conv in self.conversations():
+            conv.set_remote('notify_client', notify_client)
+
+    def get_hostnames(self):
+        """
+        Returns client hostname info
+        :returns: List of client hostnames
+        """
+
+        hosts = []
+        for conv in self.conversations():
+            hosts.append(conv.get_remote('hostname_client'))
+        return hosts
+
+    def get_ips(self):
+        """
+        Returns client Private IP address info
+        :returns: List of client Private IP Addresses
+        """
+
+        ips = []
+        for conv in self.conversations():
+            ips.append(conv.get_remote('private-address'))
+        return ips
+
+    def get_privclient_keys(self):
+        """
+        Returns client Private ssh key info
+        :returns: List of client private ssh keys
+        """
+
+        priv_ssh_keys = []
+        for conv in self.conversations():
+            priv_ssh_keys.append(conv.get_remote('privkey'))
+        return priv_ssh_keys
+
+    def get_pubclient_keys(self):
+        """
+        Returns client public ssh key info
+        :returns: List of client public ssh keys
+        """
+
+        pub_ssh_keys = []
+        for conv in self.conversations():
+            pub_ssh_keys.append(conv.get_remote('pubkey'))
+        return pub_ssh_keys
+
+    def dismiss(self):
+        """
+        Remove the 'departing' state so we don't fall in here again
+        (until another client unit leaves).
+        """
+
+        for conv in self.conversations():
+            conv.remove_state('{relation_name}.departing')
Back to file index

hooks/relations/gpfs/requires.py

---
+++ hooks/relations/gpfs/requires.py
@@ -0,0 +1,95 @@
+from charms.reactive import hook
+from charms.reactive import RelationBase
+from charms.reactive import scopes
+
+
+class gpfsRequires(RelationBase):
+    scope = scopes.UNIT
+    notify_client = "No"
+
+    @hook('{requires:gpfs}-relation-joined')
+    def joined(self):
+        conversation = self.conversation()
+        conversation.set_state('{relation_name}.connected')
+
+    @hook('{requires:gpfs}-relation-changed')
+    def changed(self):
+        conversation = self.conversation()
+        if (str(conversation.get_remote('manager_hostname')) != "None"):
+            conversation.set_state('{relation_name}.ready')
+        if (str(conversation.get_remote('notify_client')) == "Yes"):
+            conversation.set_state('{relation_name}.client-ready')
+
+    @hook('{requires:gpfs}-relation-departed')
+    def departed(self):
+        conversation = self.conversation()
+        conversation.remove_state('{relation_name}.client-ready')
+        conversation.remove_state('{relation_name}.ready')
+        conversation.remove_state('{relation_name}.connected')
+
+    def set_hostname(self, hostname_client):
+        """
+        Forward Spectrum Scale Client Hostname to manager.
+        :param hostname_client: string - Hostname of the spectrum
+                                         scale client node
+        :returns: None
+        """
+
+        for conv in self.conversations():
+            conv.set_remote('hostname_client', hostname_client)
+
+    def set_ssh_key(self, pubkey):
+        """
+        Forward a dict of values containing Public SSH keys to manager.
+        :param pubkey: string - Public SSH key
+        :returns: None
+        """
+
+        for conv in self.conversations():
+            conv.set_remote(data={
+                            'pubkey':  pubkey,
+                            })
+
+    def get_hostnames(self):
+        """
+        Returns manager hostname info
+        :returns: List of manager hostnames
+        """
+
+        hosts = []
+        for conv in self.conversations():
+            hosts.append(conv.get_remote('manager_hostname'))
+        return hosts
+
+    def get_ips(self):
+        """
+        Returns manager private ip address info
+        :returns: List of manager private ip addresses
+        """
+
+        ips = []
+        for conv in self.conversations():
+            ips.append(conv.get_remote('private-address'))
+        return ips
+
+    def get_priv_keys(self):
+        """
+        Returns manager private ssh key info
+        :returns: List of manager private ssh keys
+        """
+
+        priv_ssh_keys = []
+        for conv in self.conversations():
+            priv_ssh_keys.append(conv.get_remote('privkey'))
+        return priv_ssh_keys
+
+    def get_pub_keys(self):
+        """
+        Returns manager public ssh key info
+        :returns: List of manager public ssh keys
+        """
+
+        pub_ssh_keys = []
+        for conv in self.conversations():
+            pub_ssh_keys.append(conv.get_remote('pubkey'))
+        return pub_ssh_keys
Back to file index

hooks/relations/spectrum-scale-client/README.md

---
+++ hooks/relations/spectrum-scale-client/README.md
@@ -0,0 +1,28 @@
+Overview
+--------
+
+This interface layer handles the communication between IBM Spectrum Scale Client and IBM Cinder SpectrumScale (the driver for GPFS).
+The provider end of this interface provides the Spectrum Scale client service. The consumer part requires the existence of a provider to function.
+
+
+Usage
+------
+##### Provides
+
+This interface layer will set the following states, as appropriate:
+
+  - `{relation_name}.joined` : The relation is established between the Spectrum Scale client and the consumer. At this point, the provider should broadcast configuration details using:
+      - `set_publicip()` - provides the Spectrum Scale Client Public IP Address.
+      - `set_hostname()` - provides the Spectrum Scale Client Hostname.
+      - `set_ip()` - provides the Spectrum Scale Client Private IP Address.
+
+  - `{relation_name}.ready` : The Spectrum Scale Client has provided its connection string information, and is ready to accept requests from the consumer.
+
+
+
+##### Requires
+Consumers like IBM Cinder SpectrumScale require this interface to connect to the Spectrum Scale Client. This interface layer will set the following states, as appropriate:
+
+  - `{relation_name}.joined` : The relation is established between the Spectrum Scale client and the consumer.
+
+  - `{relation_name}.ready` : The Spectrum Scale client is ready for its consumers.
Back to file index

hooks/relations/spectrum-scale-client/interface.yaml

---
+++ hooks/relations/spectrum-scale-client/interface.yaml
@@ -0,0 +1,6 @@
+name: spectrum-scale-client
+summary: |
+  Basic interface required for communication between Spectrum Scale
+  client and IBM Cinder SpectrumScale (driver for gpfs).
+version: 1
+maintainer: IBM Juju Support Team <jujusupp@us.ibm.com>
Back to file index

hooks/relations/spectrum-scale-client/provides.py

---
+++ hooks/relations/spectrum-scale-client/provides.py
@@ -0,0 +1,67 @@
+from charms.reactive import hook
+from charms.reactive import RelationBase
+from charms.reactive import scopes
+
+
+class spectrumClientProvides(RelationBase):
+    # Every unit connecting will get the same information
+    scope = scopes.UNIT
+
+    # Use some template magic to declare our relation(s)
+    @hook('{provides:spectrum-scale-client}-relation-joined')
+    def joined(self):
+        conversation = self.conversation()
+        conversation.remove_state('{relation_name}.departing')
+        conversation.set_state('{relation_name}.connected')
+
+    @hook('{provides:spectrum-scale-client}-relation-changed')
+    def changed(self):
+        conversation = self.conversation()
+        conversation.remove_state('{relation_name}.departing')
+        conversation.set_state('{relation_name}.ready')
+
+    @hook('{provides:spectrum-scale-client}-relation-departed')
+    def departed(self):
+        conversation = self.conversation()
+        conversation.remove_state('{relation_name}.ready')
+        conversation.remove_state('{relation_name}.connected')
+        conversation.set_state('{relation_name}.departing')
+
+    def dismiss(self):
+        for conv in self.conversations():
+            conv.remove_state('{relation_name}.departing')
+
+    def set_publicip(self, client_public_ip):
+        """
+        Forward Spectrum Scale Client Public IP Address to any charm
+        connecting to it.
+        :param client_public_ip: string - Public IP Address of the spectrum
+                                          scale client node
+        :returns: None
+        """
+        for conv in self.conversations():
+            conv.set_remote('client_public_ip', client_public_ip)
+
+    def set_hostname(self, client_hostname):
+        """
+        Forward Spectrum Scale Client Hostname to any charm
+        connecting to it.
+        :param client_hostname: string - Hostname of the spectrum
+                                         scale client node
+        :returns: None
+        """
+
+        for conv in self.conversations():
+            conv.set_remote('client_hostname', client_hostname)
+
+    def set_ip(self, client_private_ip):
+        """
+        Forward Spectrum Scale Client Private IP Address to any charm
+        connecting to it.
+        :param client_private_ip: string - Private IP Address of the spectrum
+                                           scale client node
+        :returns: None
+        """
+
+        for conv in self.conversations():
+            conv.set_remote('client_private_ip', client_private_ip)
Back to file index
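(Editor's note: a hedged sketch of how the client charm might broadcast its addresses over this interface once a consumer such as ibm-cinder-spectrumscale relates to it; the relation name `client` is an assumption based on the client-relation-* hook filenames.)

```
# provider-side handler sketch (illustrative, not from the charm source)
import socket

from charms.reactive import when
from charmhelpers.core.hookenv import unit_get


@when('client.connected')
def publish_client_details(consumer):
    # 'consumer' is the spectrumClientProvides instance defined above.
    consumer.set_hostname(socket.gethostname())
    consumer.set_ip(unit_get('private-address'))
    consumer.set_publicip(unit_get('public-address'))
```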

hooks/relations/spectrum-scale-client/requires.py

---
+++ hooks/relations/spectrum-scale-client/requires.py
@@ -0,0 +1,26 @@
+from charms.reactive import hook
+from charms.reactive import RelationBase
+from charms.reactive import scopes
+
+
+class spectrumclientRequires(RelationBase):
+    scope = scopes.UNIT
+
+    @hook('{requires:spectrum-scale-client}-relation-joined')
+    def joined(self):
+        conversation = self.conversation()
+        conversation.remove_state('{relation_name}.departing')
+        conversation.set_state('{relation_name}.connected')
+
+    @hook('{requires:spectrum-scale-client}-relation-changed')
+    def changed(self):
+        conversation = self.conversation()
+        conversation.remove_state('{relation_name}.departing')
+        conversation.set_state('{relation_name}.ready')
+
+    @hook('{requires:spectrum-scale-client}-relation-departed')
+    def departed(self):
+        conversation = self.conversation()
+        conversation.remove_state('{relation_name}.ready')
+        conversation.remove_state('{relation_name}.connected')
+        conversation.set_state('{relation_name}.departing')
Back to file index

hooks/start

---
+++ hooks/start
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/stop

---
+++ hooks/stop
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/update-status

---
+++ hooks/update-status
@@ -0,0 +1,19 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import sys
+sys.path.append('lib')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

hooks/upgrade-charm

---
+++ hooks/upgrade-charm
@@ -0,0 +1,28 @@
+#!/usr/bin/env python3
+
+# Load modules from $CHARM_DIR/lib
+import os
+import sys
+sys.path.append('lib')
+
+# This is an upgrade-charm context, make sure we install latest deps
+if not os.path.exists('wheelhouse/.upgrade'):
+    open('wheelhouse/.upgrade', 'w').close()
+    if os.path.exists('wheelhouse/.bootstrapped'):
+        os.unlink('wheelhouse/.bootstrapped')
+else:
+    os.unlink('wheelhouse/.upgrade')
+
+from charms.layer import basic
+basic.bootstrap_charm_deps()
+basic.init_config_states()
+
+
+# This will load and run the appropriate @hook and other decorated
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
+# and $CHARM_DIR/hooks/relations.
+#
+# See https://jujucharms.com/docs/stable/authors-charm-building
+# for more information on this pattern.
+from charms.reactive import main
+main()
Back to file index

icon.svg

 1
--- 
 2
+++ icon.svg
 3
@@ -0,0 +1,29 @@
 4
+<?xml version="1.0" encoding="UTF-8"?>
 5
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
 6
+<!-- Creator: CorelDRAW X6 -->
 7
+<svg xmlns="http://www.w3.org/2000/svg" xml:space="preserve" width="1in" height="0.999996in" version="1.1" shape-rendering="geometricPrecision" text-rendering="geometricPrecision" image-rendering="optimizeQuality" fill-rule="evenodd" clip-rule="evenodd"
 8
+viewBox="0 0 1000 1000"
 9
+ xmlns:xlink="http://www.w3.org/1999/xlink">
10
+ <defs>
11
+    <linearGradient id="id0" gradientUnits="userSpaceOnUse" x1="500.002" y1="999.996" x2="500.002" y2="0">
12
+     <stop offset="0" stop-color="#A1CD3D"/>
13
+     <stop offset="1" stop-color="#DBF799"/>
14
+    </linearGradient>
15
+    <mask id="id1">
16
+      <linearGradient id="id2" gradientUnits="userSpaceOnUse" x1="500.002" y1="58.4805" x2="500.002" y2="307.017">
17
+       <stop offset="0" stop-opacity="1" stop-color="white"/>
18
+       <stop offset="0.141176" stop-opacity="89.8471" stop-color="white"/>
19
+       <stop offset="1" stop-opacity="0" stop-color="white"/>
20
+      </linearGradient>
21
+     <rect fill="url(#id2)" width="1000" height="365"/>
22
+    </mask>
23
+ </defs>
24
+ <g id="Layer_x0020_1">
25
+  <metadata id="CorelCorpID_0Corel-Layer"/>
26
+  <g id="_188173616">
27
+   <path id="Background" fill="url(#id0)" d="M0 676l0 -352c0,-283 41,-324 324,-324l352 0c284,0 324,41 324,324l0 352c0,283 -40,324 -324,324l-352 0c-283,0 -324,-41 -324,-324z"/>
28
+   <path fill="#999999" mask="url(#id1)" d="M0 365l0 -41c0,-283 41,-324 324,-324l352 0c284,0 324,41 324,324l0 41c0,-283 -40,-324 -324,-324l-352 0c-283,0 -324,41 -324,324z"/>
29
+   <path fill="white" fill-rule="nonzero" d="M438 407c12,-8 26,-14 40,-17l0 -87c-13,-4 -25,-11 -35,-20l0 0c-14,-15 -23,-35 -23,-57 0,-22 9,-42 23,-57 15,-14 35,-23 57,-23 22,0 42,9 57,23 14,15 23,35 23,57 0,22 -9,42 -23,57l0 0c-10,9 -22,16 -35,20l0 87c14,3 28,9 40,17l62 -61c-7,-13 -10,-26 -10,-40l0 0c0,-20 7,-41 23,-57 16,-15 36,-23 57,-23 20,0 41,8 57,23 15,16 23,37 23,57 0,21 -8,41 -23,57 -16,16 -37,23 -57,23l0 0c-14,0 -27,-3 -40,-10l-61 62c8,12 14,26 17,40l87 0c4,-13 11,-25 20,-35l0 0c15,-14 35,-23 57,-23 22,0 42,9 57,23 14,15 23,35 23,57 0,22 -9,42 -23,57 -15,14 -35,23 -57,23 -22,0 -42,-9 -57,-23l0 0c-9,-10 -16,-22 -20,-35l-87 0c-3,14 -9,28 -17,40l61 62c13,-7 26,-10 40,-10l0 0c20,0 41,7 57,23 15,16 23,36 23,57 0,20 -8,41 -23,57 -16,15 -37,23 -57,23 -21,0 -41,-8 -57,-23 -16,-16 -23,-37 -23,-57l0 0c0,-14 3,-27 10,-40l-62 -61c-12,8 -26,14 -40,17l0 87c13,4 25,11 35,20l0 0c14,15 23,35 23,57 0,22 -9,42 -23,57 -15,14 -35,23 -57,23 -22,0 -42,-9 -57,-23 -14,-15 -23,-35 -23,-57 0,-22 9,-42 23,-57l0 0c10,-9 22,-16 35,-20l0 -87c-14,-3 -28,-9 -40,-17l-62 61c7,13 10,26 10,40l0 0c0,20 -7,41 -23,57 -16,15 -36,23 -57,23 -20,0 -41,-8 -57,-23 -15,-16 -23,-37 -23,-57 0,-21 8,-41 23,-57 16,-16 37,-23 57,-23l0 0c14,0 27,3 40,10l61 -62c-8,-12 -14,-26 -17,-40l-87 0c-4,13 -11,25 -20,35l0 0c-15,14 -35,23 -57,23 -22,0 -42,-9 -57,-23 -14,-15 -23,-35 -23,-57 0,-22 9,-42 23,-57 15,-14 35,-23 57,-23 22,0 42,9 57,23l0 0c9,10 16,22 20,35l87 0c3,-14 9,-28 17,-40l-61 -62c-13,7 -26,10 -40,10l0 0c-20,0 -41,-7 -57,-23 -15,-16 -23,-36 -23,-57 0,-20 8,-41 23,-57 16,-15 37,-23 57,-23 21,0 41,8 57,23 16,16 23,37 23,57l0 0c0,14 -3,27 -10,40l62 61zm62 49c24,0 44,20 44,44 0,24 -20,44 -44,44 -24,0 -44,-20 -44,-44 0,-24 20,-44 44,-44zm35 -265c-9,-9 -21,-15 -35,-15 -14,0 -26,6 -35,15 -9,9 -15,21 -15,35 0,14 6,26 15,35l0 0c9,9 21,15 35,15 14,0 26,-6 35,-15l0 0c9,-9 15,-21 15,-35 0,-14 -6,-26 -15,-35zm-229 65c-13,0 -25,5 -35,15 -10,10 -15,22 -15,35 0,13 5,26 15,35 10,10 22,15 35,15l0 0c13,0 26,-5 35,-15 10,-9 15,-22 15,-35l0 0c0,-13 -5,-25 -15,-35 -9,-10 -22,-15 -35,-15zm-115 209c-9,9 -15,21 -15,35 0,14 6,26 15,35 9,9 21,15 35,15 14,0 26,-6 35,-15l0 0c9,-9 15,-21 15,-35 0,-14 -6,-26 -15,-35l0 0c-9,-9 -21,-15 -35,-15 -14,0 -26,6 -35,15zm65 229c0,13 5,25 15,35 10,10 22,15 35,15 13,0 26,-5 35,-15 10,-10 15,-22 15,-35l0 0c0,-13 -5,-26 -15,-35 -9,-10 -22,-15 -35,-15l0 0c-13,0 -25,5 -35,15 -10,9 -15,22 -15,35zm209 115c9,9 21,15 35,15 14,0 26,-6 35,-15 9,-9 15,-21 15,-35 0,-14 -6,-26 -15,-35l0 0c-9,-9 -21,-15 -35,-15 -14,0 -26,6 -35,15l0 0c-9,9 -15,21 -15,35 0,14 6,26 15,35zm229 -65c13,0 25,-5 35,-15 10,-10 15,-22 15,-35 0,-13 -5,-26 -15,-35 -10,-10 -22,-15 -35,-15l0 0c-13,0 -26,5 -35,15 -10,9 -15,22 -15,35l0 0c0,13 5,25 15,35 9,10 22,15 35,15zm115 -209c9,-9 15,-21 15,-35 0,-14 -6,-26 -15,-35 -9,-9 -21,-15 -35,-15 -14,0 -26,6 -35,15l0 0c-9,9 -15,21 -15,35 0,14 6,26 15,35l0 0c9,9 21,15 35,15 14,0 26,-6 35,-15zm-65 -229c0,-13 -5,-25 -15,-35 -10,-10 -22,-15 -35,-15 -13,0 -26,5 -35,15 -10,10 -15,22 -15,35l0 0c0,13 5,26 15,35 9,10 22,15 35,15l0 0c13,0 25,-5 35,-15 10,-9 15,-22 15,-35zm-175 195l0 -4c0,-17 -7,-33 -20,-46 -14,-14 -31,-20 -49,-20 -18,0 -35,6 -49,20 -14,14 -20,31 -20,49 0,18 6,35 20,49 13,13 29,20 46,20l4 0c17,0 35,-7 48,-20 13,-13 20,-31 20,-48z"/>
30
+  </g>
31
+ </g>
32
+</svg>
Back to file index

layer.yaml

 1
--- 
 2
+++ layer.yaml
 3
@@ -0,0 +1,16 @@
 4
+"options":
 5
+  "basic":
 6
+    "packages":
 7
+    - "tar"
 8
+    - "unzip"
 9
+
10
+
11
+    "use_venv": !!bool "false"
12
+    "include_system_packages": !!bool "false"
13
+  "ibm-spectrum-scale-client": {}
14
+"repo": "bzr+ssh://bazaar.launchpad.net/~ibmcharmers/ibmlayers/layer-ibm-spectrum-scale-client/"
15
+"includes":
16
+- "layer:basic"
17
+- "interface:gpfs"
18
+- "interface:spectrum-scale-client"
19
+"is": "ibm-spectrum-scale-client"
Back to file index

lib/charms/layer/__init__.py

 1
--- 
 2
+++ lib/charms/layer/__init__.py
 3
@@ -0,0 +1,21 @@
 4
+import os
 5
+
 6
+
 7
+class LayerOptions(dict):
 8
+    def __init__(self, layer_file, section=None):
 9
+        import yaml  # defer, might not be available until bootstrap
10
+        with open(layer_file) as f:
11
+            layer = yaml.safe_load(f.read())
12
+        opts = layer.get('options', {})
13
+        if section and section in opts:
14
+            super(LayerOptions, self).__init__(opts.get(section))
15
+        else:
16
+            super(LayerOptions, self).__init__(opts)
17
+
18
+
19
+def options(section=None, layer_file=None):
20
+    if not layer_file:
21
+        base_dir = os.environ.get('CHARM_DIR', os.getcwd())
22
+        layer_file = os.path.join(base_dir, 'layer.yaml')
23
+
24
+    return LayerOptions(layer_file, section)
Back to file index
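
Note: layer.yaml (above) and this small helper work together — options('basic') reads the options: basic: section of $CHARM_DIR/layer.yaml and returns it as a dict, which is how the bootstrap code below decides which extra apt packages to install and whether to build a venv. A short usage sketch, assuming it is run from the built charm directory:

```
import os
import sys
sys.path.append('lib')                           # as the hook stubs above do
from charms import layer

os.environ.setdefault('CHARM_DIR', os.getcwd())  # normally set by Juju

basic_opts = layer.options('basic')
print(basic_opts.get('packages'))   # ['tar', 'unzip'] for this charm
print(basic_opts.get('use_venv'))   # False, so the system pip3 is used
```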

lib/charms/layer/basic.py

  1
--- 
  2
+++ lib/charms/layer/basic.py
  3
@@ -0,0 +1,205 @@
  4
+import os
  5
+import sys
  6
+import shutil
  7
+from glob import glob
  8
+from subprocess import check_call, CalledProcessError
  9
+from time import sleep
 10
+
 11
+from charms.layer.execd import execd_preinstall
 12
+
 13
+
 14
+def lsb_release():
 15
+    """Return /etc/lsb-release in a dict"""
 16
+    d = {}
 17
+    with open('/etc/lsb-release', 'r') as lsb:
 18
+        for l in lsb:
 19
+            k, v = l.split('=')
 20
+            d[k.strip()] = v.strip()
 21
+    return d
 22
+
 23
+
 24
+def bootstrap_charm_deps():
 25
+    """
 26
+    Set up the base charm dependencies so that the reactive system can run.
 27
+    """
 28
+    # execd must happen first, before any attempt to install packages or
 29
+    # access the network, because sites use this hook to do bespoke
 30
+    # configuration and install secrets so the rest of this bootstrap
 31
+    # and the charm itself can actually succeed. This call does nothing
 32
+    # unless the operator has created and populated $CHARM_DIR/exec.d.
 33
+    execd_preinstall()
 34
+    # ensure that $CHARM_DIR/bin is on the path, for helper scripts
 35
+    os.environ['PATH'] += ':%s' % os.path.join(os.environ['CHARM_DIR'], 'bin')
 36
+    venv = os.path.abspath('../.venv')
 37
+    vbin = os.path.join(venv, 'bin')
 38
+    vpip = os.path.join(vbin, 'pip')
 39
+    vpy = os.path.join(vbin, 'python')
 40
+    if os.path.exists('wheelhouse/.bootstrapped'):
 41
+        activate_venv()
 42
+        return
 43
+    # bootstrap wheelhouse
 44
+    if os.path.exists('wheelhouse'):
 45
+        with open('/root/.pydistutils.cfg', 'w') as fp:
 46
+            # make sure that easy_install also only uses the wheelhouse
 47
+            # (see https://github.com/pypa/pip/issues/410)
 48
+            charm_dir = os.environ['CHARM_DIR']
 49
+            fp.writelines([
 50
+                "[easy_install]\n",
 51
+                "allow_hosts = ''\n",
 52
+                "find_links = file://{}/wheelhouse/\n".format(charm_dir),
 53
+            ])
 54
+        apt_install([
 55
+            'python3-pip',
 56
+            'python3-setuptools',
 57
+            'python3-yaml',
 58
+            'python3-dev',
 59
+        ])
 60
+        from charms import layer
 61
+        cfg = layer.options('basic')
 62
+        # include packages defined in layer.yaml
 63
+        apt_install(cfg.get('packages', []))
 64
+        # if we're using a venv, set it up
 65
+        if cfg.get('use_venv'):
 66
+            if not os.path.exists(venv):
 67
+                series = lsb_release()['DISTRIB_CODENAME']
 68
+                if series in ('precise', 'trusty'):
 69
+                    apt_install(['python-virtualenv'])
 70
+                else:
 71
+                    apt_install(['virtualenv'])
 72
+                cmd = ['virtualenv', '-ppython3', '--never-download', venv]
 73
+                if cfg.get('include_system_packages'):
 74
+                    cmd.append('--system-site-packages')
 75
+                check_call(cmd)
 76
+            os.environ['PATH'] = ':'.join([vbin, os.environ['PATH']])
 77
+            pip = vpip
 78
+        else:
 79
+            pip = 'pip3'
 80
+            # save a copy of system pip to prevent `pip3 install -U pip`
 81
+            # from changing it
 82
+            if os.path.exists('/usr/bin/pip'):
 83
+                shutil.copy2('/usr/bin/pip', '/usr/bin/pip.save')
 84
+        # need newer pip, to fix spurious Double Requirement error:
 85
+        # https://github.com/pypa/pip/issues/56
 86
+        check_call([pip, 'install', '-U', '--no-index', '-f', 'wheelhouse',
 87
+                    'pip'])
 88
+        # install the rest of the wheelhouse deps
 89
+        check_call([pip, 'install', '-U', '--no-index', '-f', 'wheelhouse'] +
 90
+                   glob('wheelhouse/*'))
 91
+        if not cfg.get('use_venv'):
 92
+            # restore system pip to prevent `pip3 install -U pip`
 93
+            # from changing it
 94
+            if os.path.exists('/usr/bin/pip.save'):
 95
+                shutil.copy2('/usr/bin/pip.save', '/usr/bin/pip')
 96
+                os.remove('/usr/bin/pip.save')
 97
+        os.remove('/root/.pydistutils.cfg')
 98
+        # flag us as having already bootstrapped so we don't do it again
 99
+        open('wheelhouse/.bootstrapped', 'w').close()
100
+        # Ensure that the newly bootstrapped libs are available.
101
+        # Note: this only seems to be an issue with namespace packages.
102
+        # Non-namespace-package libs (e.g., charmhelpers) are available
103
+        # without having to reload the interpreter. :/
104
+        reload_interpreter(vpy if cfg.get('use_venv') else sys.argv[0])
105
+
106
+
107
+def activate_venv():
108
+    """
109
+    Activate the venv if enabled in ``layer.yaml``.
110
+
111
+    This is handled automatically for normal hooks, but actions might
112
+    need to invoke this manually, using something like:
113
+
114
+        # Load modules from $CHARM_DIR/lib
115
+        import sys
116
+        sys.path.append('lib')
117
+
118
+        from charms.layer.basic import activate_venv
119
+        activate_venv()
120
+
121
+    This will ensure that modules installed in the charm's
122
+    virtual environment are available to the action.
123
+    """
124
+    venv = os.path.abspath('../.venv')
125
+    vbin = os.path.join(venv, 'bin')
126
+    vpy = os.path.join(vbin, 'python')
127
+    from charms import layer
128
+    cfg = layer.options('basic')
129
+    if cfg.get('use_venv') and '.venv' not in sys.executable:
130
+        # activate the venv
131
+        os.environ['PATH'] = ':'.join([vbin, os.environ['PATH']])
132
+        reload_interpreter(vpy)
133
+
134
+
135
+def reload_interpreter(python):
136
+    """
137
+    Reload the python interpreter to ensure that all deps are available.
138
+
139
+    Newly installed modules in namespace packages sometimes seem to
140
+    not be picked up by Python 3.
141
+    """
142
+    os.execve(python, [python] + list(sys.argv), os.environ)
143
+
144
+
145
+def apt_install(packages):
146
+    """
147
+    Install apt packages.
148
+
149
+    This ensures a consistent set of options that are often missed but
150
+    should really be set.
151
+    """
152
+    if isinstance(packages, (str, bytes)):
153
+        packages = [packages]
154
+
155
+    env = os.environ.copy()
156
+
157
+    if 'DEBIAN_FRONTEND' not in env:
158
+        env['DEBIAN_FRONTEND'] = 'noninteractive'
159
+
160
+    cmd = ['apt-get',
161
+           '--option=Dpkg::Options::=--force-confold',
162
+           '--assume-yes',
163
+           'install']
164
+    for attempt in range(3):
165
+        try:
166
+            check_call(cmd + packages, env=env)
167
+        except CalledProcessError:
168
+            if attempt == 2:  # third attempt
169
+                raise
170
+            sleep(5)
171
+        else:
172
+            break
173
+
174
+
175
+def init_config_states():
176
+    import yaml
177
+    from charmhelpers.core import hookenv
178
+    from charms.reactive import set_state
179
+    from charms.reactive import toggle_state
180
+    config = hookenv.config()
181
+    config_defaults = {}
182
+    config_defs = {}
183
+    config_yaml = os.path.join(hookenv.charm_dir(), 'config.yaml')
184
+    if os.path.exists(config_yaml):
185
+        with open(config_yaml) as fp:
186
+            config_defs = yaml.safe_load(fp).get('options', {})
187
+            config_defaults = {key: value.get('default')
188
+                               for key, value in config_defs.items()}
189
+    for opt in config_defs.keys():
190
+        if config.changed(opt):
191
+            set_state('config.changed')
192
+            set_state('config.changed.{}'.format(opt))
193
+        toggle_state('config.set.{}'.format(opt), config.get(opt))
194
+        toggle_state('config.default.{}'.format(opt),
195
+                     config.get(opt) == config_defaults[opt])
196
+    hookenv.atexit(clear_config_states)
197
+
198
+
199
+def clear_config_states():
200
+    from charmhelpers.core import hookenv, unitdata
201
+    from charms.reactive import remove_state
202
+    config = hookenv.config()
203
+    remove_state('config.changed')
204
+    for opt in config.keys():
205
+        remove_state('config.changed.{}'.format(opt))
206
+        remove_state('config.set.{}'.format(opt))
207
+        remove_state('config.default.{}'.format(opt))
208
+    unitdata.kv().flush()
Back to file index
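
Note: init_config_states() above raises config.changed, config.changed.&lt;option&gt;, config.set.&lt;option&gt; and config.default.&lt;option&gt; at the start of every hook, and clear_config_states() removes them again at exit, so a handler can key off a single option change. A minimal sketch (the option name 'fixpack_url' is hypothetical and assumes the charm defines it in a config.yaml, which is not part of this diff):

```
from charms.reactive import when
from charmhelpers.core import hookenv

@when('config.changed.fixpack_url')   # hypothetical option name
def fixpack_url_changed():
    cfg = hookenv.config()
    hookenv.log('fixpack_url changed to %s' % cfg.get('fixpack_url'))
```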

lib/charms/layer/execd.py

  1
--- 
  2
+++ lib/charms/layer/execd.py
  3
@@ -0,0 +1,138 @@
  4
+# Copyright 2014-2016 Canonical Limited.
  5
+#
  6
+# This file is part of layer-basic, the reactive base layer for Juju.
  7
+#
  8
+# charm-helpers is free software: you can redistribute it and/or modify
  9
+# it under the terms of the GNU Lesser General Public License version 3 as
 10
+# published by the Free Software Foundation.
 11
+#
 12
+# charm-helpers is distributed in the hope that it will be useful,
 13
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
 14
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 15
+# GNU Lesser General Public License for more details.
 16
+#
 17
+# You should have received a copy of the GNU Lesser General Public License
 18
+# along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.
 19
+
 20
+# This module may only import from the Python standard library.
 21
+import os
 22
+import sys
 23
+import subprocess
 24
+import time
 25
+
 26
+'''
 27
+execd/preinstall
 28
+
 29
+It is often necessary to configure and reconfigure machines
 30
+after provisioning, but before attempting to run the charm.
 31
+Common examples are specialized network configuration, enabling
 32
+of custom hardware, non-standard disk partitioning and filesystems,
 33
+adding secrets and keys required for using a secured network.
 34
+
 35
+The reactive framework's base layer invokes this mechanism as
 36
+early as possible, before any network access is made or dependencies
 37
+unpacked or non-standard modules imported (including the charms.reactive
 38
+framework itself).
 39
+
 40
+Operators needing to use this functionality may branch a charm and
 41
+create an exec.d directory in it. The exec.d directory in turn contains
 42
+one or more subdirectories, each of which contains an executable called
 43
+charm-pre-install and any other required resources. The charm-pre-install
 44
+executables are run, and if successful, state saved so they will not be
 45
+run again.
 46
+
 47
+    $CHARM_DIR/exec.d/mynamespace/charm-pre-install
 48
+
 49
+An alternative to branching a charm is to compose a new charm that contains
 50
+the exec.d directory, using the original charm as a layer.
 51
+
 52
+A charm author could also abuse this mechanism to modify the charm
 53
+environment in unusual ways, but for most purposes it is saner to use
 54
+charmhelpers.core.hookenv.atstart().
 55
+'''
 56
+
 57
+
 58
+def default_execd_dir():
 59
+    return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
 60
+
 61
+
 62
+def execd_module_paths(execd_dir=None):
 63
+    """Generate a list of full paths to modules within execd_dir."""
 64
+    if not execd_dir:
 65
+        execd_dir = default_execd_dir()
 66
+
 67
+    if not os.path.exists(execd_dir):
 68
+        return
 69
+
 70
+    for subpath in os.listdir(execd_dir):
 71
+        module = os.path.join(execd_dir, subpath)
 72
+        if os.path.isdir(module):
 73
+            yield module
 74
+
 75
+
 76
+def execd_submodule_paths(command, execd_dir=None):
 77
+    """Generate a list of full paths to the specified command within exec_dir.
 78
+    """
 79
+    for module_path in execd_module_paths(execd_dir):
 80
+        path = os.path.join(module_path, command)
 81
+        if os.access(path, os.X_OK) and os.path.isfile(path):
 82
+            yield path
 83
+
 84
+
 85
+def execd_sentinel_path(submodule_path):
 86
+    module_path = os.path.dirname(submodule_path)
 87
+    execd_path = os.path.dirname(module_path)
 88
+    module_name = os.path.basename(module_path)
 89
+    submodule_name = os.path.basename(submodule_path)
 90
+    return os.path.join(execd_path,
 91
+                        '.{}_{}.done'.format(module_name, submodule_name))
 92
+
 93
+
 94
+def execd_run(command, execd_dir=None, stop_on_error=True, stderr=None):
 95
+    """Run command for each module within execd_dir which defines it."""
 96
+    if stderr is None:
 97
+        stderr = sys.stdout
 98
+    for submodule_path in execd_submodule_paths(command, execd_dir):
 99
+        # Only run each execd once. We cannot simply run them in the
100
+        # install hook, as potentially storage hooks are run before that.
101
+        # We cannot rely on them being idempotent.
102
+        sentinel = execd_sentinel_path(submodule_path)
103
+        if os.path.exists(sentinel):
104
+            continue
105
+
106
+        try:
107
+            subprocess.check_call([submodule_path], stderr=stderr,
108
+                                  universal_newlines=True)
109
+            with open(sentinel, 'w') as f:
110
+                f.write('{} ran successfully {}\n'.format(submodule_path,
111
+                                                          time.ctime()))
112
+                f.write('Removing this file will cause it to be run again\n')
113
+        except subprocess.CalledProcessError as e:
114
+            # Logs get the details. We can't use juju-log, as the
115
+            # output may be substantial and exceed command line
116
+            # length limits.
117
+            print("ERROR ({}) running {}".format(e.returncode, e.cmd),
118
+                  file=stderr)
119
+            print("STDOUT<<EOM", file=stderr)
120
+            print(e.output, file=stderr)
121
+            print("EOM", file=stderr)
122
+
123
+            # Unit workload status gets a shorter fail message.
124
+            short_path = os.path.relpath(submodule_path)
125
+            block_msg = "Error ({}) running {}".format(e.returncode,
126
+                                                       short_path)
127
+            try:
128
+                subprocess.check_call(['status-set', 'blocked', block_msg],
129
+                                      universal_newlines=True)
130
+                if stop_on_error:
131
+                    sys.exit(0)  # Leave unit in blocked state.
132
+            except Exception:
133
+                pass  # We care about the exec.d/* failure, not status-set.
134
+
135
+            if stop_on_error:
136
+                sys.exit(e.returncode or 1)  # Error state for pre-1.24 Juju
137
+
138
+
139
+def execd_preinstall(execd_dir=None):
140
+    """Run charm-pre-install for each module within execd_dir."""
141
+    execd_run('charm-pre-install', execd_dir=execd_dir)
Back to file index
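
Note: the module above implements the exec.d contract described in its docstring — each $CHARM_DIR/exec.d/&lt;namespace&gt;/charm-pre-install executable is run once, and a hidden .done sentinel inside exec.d records the successful run so it is skipped afterwards. A small driving sketch, with an illustrative namespace name:

```
import os
import sys
sys.path.append('lib')
from charms.layer.execd import execd_preinstall

# Example layout (the 'sitefix' namespace is illustrative):
#   $CHARM_DIR/exec.d/sitefix/charm-pre-install        <- executable, run once
#   $CHARM_DIR/exec.d/.sitefix_charm-pre-install.done  <- sentinel written on success
os.environ.setdefault('CHARM_DIR', os.getcwd())  # normally set by Juju
execd_preinstall()  # silently does nothing if exec.d/ is absent
```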

metadata.yaml

 1
--- 
 2
+++ metadata.yaml
 3
@@ -0,0 +1,43 @@
 4
+"name": "ibm-spectrum-scale-client"
 5
+"summary": "IBM SPECTRUM SCALE CLIENT"
 6
+"maintainer": "IBM Juju Support Team <jujusupp@us.ibm.com>"
 7
+"description": "IBM Spectrum Scale is a flexible software-defined storage that can\
 8
+  \ be deployed as high performance file storage\nor a cost optimized large-scale\
 9
+  \ content repository. IBM Spectrum Scale, previously known as IBM General Parallel\n\
10
+  File System (GPFS), is built from the ground up to scale performance and capacity\
11
+  \ with no bottlenecks.\nA client node is any server that has the Spectrum Scale\
12
+  \ product installed but do not support direct attached disks.\nA client node will\
13
+  \ not be part of the node pool from where file system managers and token managers\
14
+  \ are selected. \n"
15
+"tags":
16
+- "ibm"
17
+- "gpfs"
18
+- "filesystem"
19
+- "storage"
20
+"series":
21
+- "trusty"
22
+- "xenial"
23
+"requires":
24
+  "gpfsmanager":
25
+    "interface": "gpfs"
26
+  "juju-info":
27
+    "interface": "juju-info"
28
+    "scope": "container"
29
+"provides":
30
+  "client":
31
+    "interface": "spectrum-scale-client"
32
+"peers":
33
+  "quorum":
34
+    "interface": "gpfs"
35
+"resources":
36
+  "ibm_spectrum_scale_installer_client":
37
+    "type": "file"
38
+    "filename": "ibm_spectrum_scale_installer.tar.gz"
39
+    "description": "IBM Spectrum Scale install archive"
40
+  "ibm_spectrum_scale_client_fixpack":
41
+    "type": "file"
42
+    "filename": "Spectrum_Scale_Standard_Fixpack.tar.gz"
43
+    "description": "IBM Spectrum Scale fixpack install archive"
44
+"subordinate": !!bool "true"
45
+"terms":
46
+- "ibm-spectrum-scale/1"
Back to file index
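
Note: metadata.yaml above makes the charm subordinate (it attaches to a principal through the container-scoped juju-info relation) and declares two file resources. At run time those resources are fetched with hookenv.resource_get(), which is what the reactive layer below does; a trimmed sketch of that pattern:

```
from charmhelpers.core import hookenv

installer = hookenv.resource_get('ibm_spectrum_scale_installer_client')
if not installer:
    # the charm below treats a falsy return as "resource not attached"
    hookenv.status_set('blocked', 'SPECTRUM SCALE required packages are missing')
else:
    hookenv.log('installer archive available at %s' % installer)
```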

reactive/ibm-spectrum-scale-client.py

   1
--- 
   2
+++ reactive/ibm-spectrum-scale-client.py
   3
@@ -0,0 +1,1180 @@
   4
+from charms.reactive import when
   5
+from charms.reactive import when_not
   6
+from charms.reactive import set_state
   7
+from charms.reactive import remove_state
   8
+from charms.reactive import is_state
   9
+from charms.reactive import hook
  10
+from charmhelpers.core import hookenv
  11
+from shlex import split
  12
+import tempfile
  13
+from charmhelpers import fetch
  14
+from charmhelpers.payload import (
  15
+    archive,
  16
+)
  17
+import platform
  18
+import os
  19
+from subprocess import (
  20
+    call,
  21
+    check_call,
  22
+    check_output,
  23
+    Popen,
  24
+    CalledProcessError,
  25
+    PIPE,
  26
+    STDOUT
  27
+
  28
+)
  29
+import shutil
  30
+import socket
  31
+import re
  32
+import glob
  33
+import time
  34
+
  35
+charm_dir = os.environ['CHARM_DIR']
  36
+CLIENT_IP_ADDRESS = hookenv.unit_get('private-address')
  37
+CLIENT_PUBLIC_IP_ADDRESS = hookenv.unit_get('public-address')
  38
+CLIENT_HOSTNAME = socket.gethostname()
  39
+SPECTRUM_SCALE_INSTALL_PATH = '/usr/lpp/mmfs'
  40
+CMD_DEB_INSTALL = ('dpkg -i /usr/lpp/mmfs/4.2.*.*/gpfs_rpms/gpfs.base*deb'
  41
+                   ' /usr/lpp/mmfs/4.2.*.*/gpfs_rpms/gpfs.gpl*deb'
  42
+                   ' /usr/lpp/mmfs/4.2.*.*/gpfs_rpms/gpfs.gskit*deb'
  43
+                   ' /usr/lpp/mmfs/4.2.*.*/gpfs_rpms/gpfs.msg*deb'
  44
+                   ' /usr/lpp/mmfs/4.2.*.*/gpfs_rpms/gpfs.ext*deb')
  45
+# Command for uninstalling deb packages
  46
+CMD_DEB_UNINSTALL = ('dpkg -P gpfs.ext gpfs.gpl gpfs.base'
  47
+                     ' gpfs.docs gpfs.gskit gpfs.msg.en-us')
  48
+# development packages needed to build kernel modules for GPFS cluster
  49
+PREREQS = ["ksh", "binutils", "m4", "libaio1", "g++", "cpp", "make",
  50
+           "gcc", "expect"]
  51
+dir_mounts = []
  52
+dfh_output = []
  53
+config = hookenv.config()
  54
+
  55
+
  56
+def add_to_path(p, new):
  57
+    return p if new in p.split(':') else p + ':' + new
  58
+
  59
+
  60
+os.environ['PATH'] = add_to_path(os.environ['PATH'], '/usr/lpp/mmfs/bin')
  61
+
  62
+
  63
+def setadd_hostname(CLIENT_HOSTNAME, CLIENT_IP_ADDRESS):
  64
+    """
  65
+    Function for adding hostname details in /etc/hosts file.
  66
+    :param  CLIENT_HOSTNAME:string - Hostname of the client
  67
+    :param  CLIENT_IP_ADDRESS:string - IP Address of the client
  68
+    """
  69
+
  70
+    ip = CLIENT_IP_ADDRESS
  71
+    hostname = CLIENT_HOSTNAME
  72
+    try:
  73
+        socket.gethostbyname(hostname)
  74
+    except:
  75
+        hookenv.log("IBM SPECTRUM SCALE : Hostname not resolving, adding\
  76
+                    to /etc/hosts")
  77
+    try:
  78
+        with open("/etc/hosts", "a") as hostfile:
  79
+            hostfile.write("%s %s\n" % (ip, hostname))
  80
+    except FileNotFoundError:
  81
+        hookenv.log("IBM SPECTRUM SCALE : File does not exist.")
  82
+
  83
+
  84
+def check_platform_architecture():
  85
+    """
  86
+    Function to check the platform architecture
  87
+    :returns: string
  88
+    """
  89
+
  90
+    return platform.processor()
  91
+
  92
+
  93
+def cluster_exists():
  94
+    """
  95
+    To check whether Spectrum Scale Cluster exists or not
  96
+    Return True if the cluster exists otherwise False.
  97
+    :returns: Boolean
  98
+    """
  99
+    try:
 100
+        return True if call('mmlscluster') == 0 else False
 101
+    except CalledProcessError:
 102
+        hookenv.log("IBM SPECTRUM SCALE : May be cluster is down or it does "
 103
+                    "not exist yet.")
 104
+    except FileNotFoundError:
 105
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed "
 106
+                    "yet.")
 107
+
 108
+
 109
+def node_exists(nodename):
 110
+    """
 111
+    Function to check whether node exists in the Spectrum Scale cluster or not
 112
+    Return true if the node already exists in the Spectrum Scale cluster.
 113
+    :param nodename: string - Nodename of the spectrum scale node
 114
+    :returns: Boolean
 115
+    """
 116
+    try:
 117
+        lscluster = check_output('mmlscluster')
 118
+        lscluster = lscluster.decode('utf-8')
 119
+        node = re.search('^ *\d+.*%s.*$' % nodename, lscluster, re.M)
 120
+        return False if node is None else True
 121
+    except CalledProcessError:
 122
+        hookenv.log("IBM SPECTRUM SCALE : Check cluster is up an running on"
 123
+                    " the node")
 124
+    except FileNotFoundError:
 125
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed "
 126
+                    "yet.")
 127
+
 128
+
 129
+def build_modules():
 130
+    """
 131
+    Function to build binary gpfs modules after Spectrum Scale is installed.
 132
+    :param: None
 133
+    :returns: None
 134
+    """
 135
+
 136
+    try:
 137
+        check_call(["mmbuildgpl"])
 138
+    except CalledProcessError:
 139
+        hookenv.log('IBM SPECTRUM SCALE : '
 140
+                    ' mmbuildgpl was not executed', level=hookenv.WARNING)
 141
+    except OSError:
 142
+        hookenv.log('IBM SPECTRUM SCALE : mmbuildgpl not found/installed')
 143
+
 144
+
 145
+def get_kernel_version():
 146
+    """
 147
+    Function to get the kernel version
 148
+    :param: None
 149
+    :returns: string
 150
+    """
 151
+
 152
+    return check_output(['uname', '-r']).strip()
 153
+
 154
+
 155
+def check_privssh_key(key):
 156
+    """
 157
+    To check the existence of private ssh key, so that duplicate key is
 158
+    not added again.
 159
+    :param key: String - private ssh key
 160
+    :returns: Boolean
 161
+    """
 162
+
 163
+    if key:
 164
+        key_exist = True
 165
+        v1 = []
 166
+        v2 = []
 167
+        with open("/root/.ssh/id_rsa_temp_keys", "w") as idfile:
 168
+            idfile.write(key)
 169
+        with open('/root/.ssh/id_rsa_temp_keys') as f:
 170
+            privkey = f.readlines()
 171
+        for x in privkey:
 172
+            if 'RSA PRIVATE KEY-----\n' not in x:
 173
+                v1.append(x)
 174
+        with open('/root/.ssh/id_rsa') as f1:
 175
+            keys = f1.readlines()
 176
+        for x in keys:
 177
+            if 'RSA PRIVATE KEY-----\n' not in x:
 178
+                v2.append(x)
 179
+        for element in v1:
 180
+            if element in [z for z in v2]:
 181
+                continue
 182
+            else:
 183
+                key_exist = False
 184
+                break
 185
+    return key_exist
 186
+
 187
+
 188
+def create_ssh_keys():
 189
+    """
 190
+    Function for creating the public and private ssh keys.
 191
+    :param: None
 192
+    :returns: None
 193
+    """
 194
+
 195
+    # Generate ssh keys if needed
 196
+    hookenv.log("IBM SPECTRUM SCALE : Creating SSH keys")
 197
+    if not os.path.isfile("/root/.ssh/id_rsa"):
 198
+        call(split('ssh-keygen -q -N "" -f /root/.ssh/id_rsa'))
 199
+        # Ensure permissions are good
 200
+        check_call(['chmod', '0600', '/root/.ssh/id_rsa.pub'])
 201
+        check_call(['chmod', '0600', '/root/.ssh/id_rsa'])
 202
+        with open("/root/.ssh/id_rsa.pub", "r") as idfile:
 203
+            pubkey = idfile.read()
 204
+        with open("/root/.ssh/authorized_keys", "w+") as idfile:
 205
+            idfile.write(pubkey)
 206
+
 207
+
 208
+def get_ssh_keys():
 209
+    """
 210
+    Function to get the public and private ssh keys.
 211
+    :returns: list
 212
+    """
 213
+
 214
+    with open("/root/.ssh/id_rsa.pub", "r") as idfile:
 215
+        pubkey = idfile.read()
 216
+    with open("/root/.ssh/id_rsa", "r") as idfile:
 217
+        privkey = idfile.read()
 218
+    return [privkey, pubkey]
 219
+
 220
+
 221
+def add_ssh_privkey(key):
 222
+    """
 223
+    Adding the private ssh keys of mgr units, so that ssh
 224
+    communication can happen between them.
 225
+    :param key:string - private ssh key.
 226
+    """
 227
+
 228
+    if key:
 229
+        # Check key exists or not in id_rsa file
 230
+        key_exists = check_privssh_key(key)
 231
+        if key_exists is False:
 232
+            with open("/root/.ssh/id_rsa", "a+") as f:
 233
+                f.write(key)
 234
+            check_call(['chmod', '0600', '/root/.ssh/id_rsa'])
 235
+        else:
 236
+            hookenv.log("IBM SPECTRUM SCALE : SSH key Value exists in"
 237
+                        " id_rsa file")
 238
+
 239
+
 240
+# Adding ssh keys
 241
+def add_ssh_pubkey(key):
 242
+    """
 243
+    Adding the public ssh keys of mgr and client peer units, so that ssh
 244
+    communication can happen between them.
 245
+    :param key: string - public ssh key.
 246
+    """
 247
+
 248
+    key_list = [key]
 249
+    if key_list:
 250
+        filepath = "/root/.ssh/authorized_keys"
 251
+        with open(filepath, "r") as myfile:
 252
+            lines = myfile.readlines()
 253
+            myfile.close()
 254
+        if (set(key_list) & set(lines)):
 255
+            hookenv.log("IBM SPECTRUM SCALE : SSH key already exists")
 256
+        else:
 257
+            with open("/root/.ssh/authorized_keys", "a+") as idfile:
 258
+                idfile.write(key)
 259
+                idfile.close()
 260
+
 261
+
 262
+def configure_ssh():
 263
+    """
 264
+    Configuring the ssh settings.
 265
+    :returns: None
 266
+    """
 267
+
 268
+    # Configure sshd_config file to allow root
 269
+    sshconf = open("/etc/ssh/sshd_config", 'r')
 270
+    tf = tempfile.NamedTemporaryFile(mode='w+t', delete=False)
 271
+    tfn = tf.name
 272
+    for line in sshconf:
 273
+        if not line.startswith("#"):
 274
+            if "PermitRootLogin" in line:
 275
+                tf.write("# Updated by GPFS charm: ")
 276
+                tf.write(line)
 277
+            else:
 278
+                tf.write(line)
 279
+        else:
 280
+            tf.write(line)
 281
+    tf.write("# added by GPFS charm:\n")
 282
+    tf.write("PermitRootLogin without-password\n")
 283
+    sshconf.close()
 284
+    tf.close()
 285
+    shutil.copy(tfn, '/etc/ssh/sshd_config')
 286
+    call(split('service ssh reload'))
 287
+    # Avoid the host key confirmation
 288
+    with open("/root/.ssh/config", "w+") as idfile:
 289
+        idfile.write("StrictHostKeyChecking no\n")
 290
+
 291
+
 292
+def gpfs_filesystem_exists():
 293
+    """
 294
+    To check whether the Spectrum Scale file system exists or not.
 295
+    Return True if the FileSystem exists otherwise False.
 296
+    :returns: Boolean
 297
+    """
 298
+
 299
+    try:
 300
+        with open(os.devnull, 'w') as FNULL:
 301
+            return True if check_call(split('mmlsfs all'), stdout=FNULL,
 302
+                                      stderr=STDOUT) == 0 else False
 303
+    except CalledProcessError:
 304
+        hookenv.log("IBM SPECTRUM SCALE : May be cluster is down or the file"
 305
+                    " system is not created yet. Please check the logs")
 306
+    except FileNotFoundError:
 307
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed "
 308
+                    "yet.")
 309
+
 310
+
 311
+def upgrade_spectrumscale(cfg_ibm_spectrum_scale_fixpack):
 312
+    """
 313
+    Block of code for upgrading the Spectrum Scale packages/fix packs
 314
+    :param cfg_ibm_spectrum_scale_fixpack:string - Spectrum Scale fixpack
 315
+                                                   dirname
 316
+    :returns: None
 317
+    """
 318
+    # Before upgrade check that Spectrum Scale cluster exists.
 319
+    Cluster_status_flag = "Started"
 320
+    if cluster_exists():
 321
+        if gpfs_filesystem_exists():
 322
+            hookenv.log(check_output(split('mmumount all -N %s'
 323
+                        % CLIENT_HOSTNAME)))
 324
+            Cluster_status_flag = "Unmounted"
 325
+        hookenv.log(check_output(split('mmshutdown -N %s' % CLIENT_HOSTNAME)))
 326
+        Cluster_status_flag = "Stopped"
 327
+    fixpack_downloadpath = os.path.dirname(cfg_ibm_spectrum_scale_fixpack)
 328
+    os.chdir(fixpack_downloadpath)
 329
+    archivelist = glob.glob("*.tar.gz")
 330
+    if archivelist:
 331
+        archive.extract(str(archivelist[0]), fixpack_downloadpath)
 332
+        hookenv.log("IBM SPECTRUM SCALE : Extraction of Fix pack is "
 333
+                    "successfull")
 334
+        fixpackinstall_filename = glob.glob("Spectrum_*Linux-install")
 335
+        if fixpackinstall_filename:
 336
+            # Give permissions
 337
+            check_call(['chmod', '0755', fixpack_downloadpath + "/" +
 338
+                       str(fixpackinstall_filename[0])])
 339
+            install_cmd = ([fixpack_downloadpath + "/" +
 340
+                           str(fixpackinstall_filename[0]),
 341
+                           '--text-only', '--silent'])
 342
+            check_call(install_cmd, shell=False)
 343
+            check_call('cd /usr/lpp/mmfs/4.2.*', shell=True)
 344
+            try:
 345
+                check_call(CMD_DEB_INSTALL, shell=True)
 346
+                # To build GPFS portability layer.
 347
+                build_modules()
 348
+                os.chdir(SPECTRUM_SCALE_INSTALL_PATH)
 349
+                gpfs_fixpackinstall_folder = glob.glob("4.2.*")
 350
+                for val in gpfs_fixpackinstall_folder:
 351
+                    shutil.rmtree(val)
 352
+                if Cluster_status_flag == "Stopped":
 353
+                    hookenv.log(check_output(split('mmstartup -N %s'
 354
+                                                   % CLIENT_HOSTNAME)))
 355
+                    if Cluster_status_flag == "Unmounted":
 356
+                        time.sleep(50)
 357
+                        mount_cmd = ('mmmount all -N %s' % CLIENT_HOSTNAME)
 358
+                        hookenv.log(check_output(split(mount_cmd)))
 359
+                hookenv.status_set('active', "SPECTRUM SCALE "
 360
+                                   "is updated successfully")
 361
+                set_state('ibm-spectrum-scale-client.updated')
 362
+            except CalledProcessError as e:
 363
+                hookenv.log("IBM SPECTRUM SCALE : There might be issues "
 364
+                            "while applying fix pack, please check logs")
 365
+                hookenv.log(e.output)
 366
+                hookenv.status_set('blocked', "Error while updating")
 367
+                return
 368
+
 369
+
 370
+def check_lxd_name(driver_hostname):
 371
+    """
 372
+    To check whether a given lxd name exists in the lxc list of host machine
 373
+    :param driver_hostname: string - LXD name/hostname of the driver container
 374
+    Return True if the container exists on the host machine of client.
 375
+    :returns: Boolean
 376
+    """
 377
+
 378
+    hookenv.log("IBM SPECTRUM SCALE : LXD/LXC name on which cinder/glance"
 379
+                " is deployed : %s" % driver_hostname)
 380
+    lxd_list_val = []
 381
+    try:
 382
+        lxd_list = check_output(split('lxc list'))
 383
+        lxd_list = lxd_list.decode('utf-8')
 384
+        lxd_list_val.append(lxd_list.split(' '))
 385
+        if any(driver_hostname in s for s in lxd_list_val):
 386
+                return True
 387
+    except CalledProcessError as e:
 388
+        hookenv.log(e.output)
 389
+        hookenv.log("IBM SPECTRUM SCALE : Issue while running lxc "
 390
+                    "list command !!!!.")
 391
+        return False
 392
+    except OSError:
 393
+        hookenv.log("IBM SPECTRUM SCALE : Looks like no lxc "
 394
+                    "containers initialized yet !!!!.")
 395
+        return False
 396
+
 397
+
 398
+def list_filesystems():
 399
+    """
 400
+    To return a list of gpfs filesystems
 401
+    The returned list contains the file system mount points.
 402
+    :returns: list
 403
+    """
 404
+    try:
 405
+            dir_list = check_output(split('mmlsfs all -T'))
 406
+            dir_list = dir_list.decode('utf-8')
 407
+            with open(charm_dir + '/dir_gpfstemp_file', 'w+') as idfile:
 408
+                idfile.write(dir_list)
 409
+            searchfile_node = open(charm_dir + "/dir_gpfstemp_file", "r")
 410
+            for line in searchfile_node:
 411
+                if "-T " in line:
 412
+                    val = line.split()[1]
 413
+                    dir_mounts.append(val)
 414
+            searchfile_node.close()
 415
+    except CalledProcessError as e:
 416
+            hookenv.log(e.output)
 417
+            return
 418
+    except FileNotFoundError:
 419
+            hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not "
 420
+                        "installed yet.")
 421
+            return
 422
+    return dir_mounts
 423
+
 424
+
 425
+def check_lxc_securityparam(driver_hostname):
 426
+    """
 427
+    To check the container property 'security.privileged' is set to 'true'
 428
+    or not
 429
+    :param driver_hostname:string - LXC/LXD name.
 430
+    :returns: Boolean
 431
+    """
 432
+
 433
+    lxc_cmd = 'lxc config get ' + driver_hostname + ' security.privileged'
 434
+    val_security = check_output(split(lxc_cmd))
 435
+    val_security = val_security.decode("utf-8")
 436
+    val_security = val_security.strip('\n')
 437
+    if val_security == 'true':
 438
+        return True
 439
+    else:
 440
+        return False
 441
+
 442
+
 443
+def bind_mount(driver_hostname):
 444
+    """
 445
+    Function for bind mounting gpfs directories to cinder/glance
 446
+    Runs only if the given container exists on this client's host machine.
 447
+    :param driver_hostname: string - LXD name/hostname of the driver container
 448
+    :returns: None
 449
+    """
 450
+
 451
+    hookenv.log("IBM SPECTRUM SCALE : LXD/LXC name on which cinder/glance"
 452
+                " is deployed : %s" % driver_hostname)
 453
+    # If the client node lxc list output  contains the lxc container
 454
+    # on which cinder or glance is installed, then only run bind mount.
 455
+    lxc_flag = check_lxd_name(driver_hostname)
 456
+    if lxc_flag is True:
 457
+        # Check the current value
 458
+        param_flag = check_lxc_securityparam(driver_hostname)
 459
+        if param_flag is False:
 460
+            lxc_cmd = ('lxc config set ' + driver_hostname +
 461
+                       ' security.privileged true')
 462
+            hookenv.log(check_output(split(lxc_cmd)))
 463
+            hookenv.log("IBM SPECTRUM SCALE : Setting security parameter "
 464
+                        "and starting container again")
 465
+            lxc_restart = 'lxc restart ' + driver_hostname
 466
+            hookenv.log(check_output(split(lxc_restart)))
 467
+            time.sleep(50)
 468
+
 469
+        # Call function to return list of gpfs filesystems
 470
+        dir_mounts = list_filesystems()
 471
+        dir_list = check_output(split('df -h'))
 472
+        dir_list = dir_list.decode('utf-8')
 473
+        with open(charm_dir + '/dir_temp_file', 'w+') as idfile:
 474
+            idfile.write(dir_list)
 475
+        searchfile_node = open(charm_dir + "/dir_temp_file", "r")
 476
+        for line in searchfile_node:
 477
+            dfh_output.append(line.split()[5])
 478
+        searchfile_node.close()
 479
+        mount_exist = False
 480
+        for line2 in dir_mounts:
 481
+            for s in dfh_output:
 482
+                if line2 in s:
 483
+                    mount_exist = True
 484
+                    device_name = re.sub(r'/', "", s)
 485
+                    try:
 486
+                        bind_cmd = ('lxc config device add ' +
 487
+                                    driver_hostname + ' ' + device_name +
 488
+                                    ' disk source=' + line2 + ' path=' +
 489
+                                    line2)
 490
+                        hookenv.log(check_output(split(bind_cmd)))
 491
+                        hookenv.log("IBM SPECTRUM SCALE : Directory %s "
 492
+                                    "bind mounted successfully for "
 493
+                                    "cinder/glance driver." % line2)
 494
+                    except CalledProcessError as e:
 495
+                        hookenv.log("IBM SPECTRUM SCALE : Error while running "
 496
+                                    "bind mount command.")
 497
+                        hookenv.log(e.output)
 498
+        if mount_exist is False:
 499
+            hookenv.log("IBM SPECTRUM SCALE : Looks like no gpfs directories "
 500
+                        "are mounted yet, nothing to bind !!!!.")
 501
+
 502
+
 503
+def chk_bind_remove_devices():
 504
+    """
 505
+    Function to check for all gpfs related bind devices on the host machine
 506
+    and unbind/remove them before the client is removed from the cluster.
 507
+    :param: None
 508
+    :returns: None
 509
+    """
 510
+
 511
+    lxd_list_val = []
 512
+    lxd_list_final = []
 513
+    try:
 514
+        dir_list = check_output(split('mmlsfs all -T'))
 515
+        dir_list = dir_list.decode('utf-8')
 516
+        with open(charm_dir + "/dir_gpfstemp_file", 'w+') as idfile:
 517
+            idfile.write(dir_list)
 518
+        searchfile_node = open(charm_dir + "/dir_gpfstemp_file", "r")
 519
+        for line in searchfile_node:
 520
+            if "-T " in line:
 521
+                val = line.split()[1]
 522
+                val = re.sub(r'/', "", val)
 523
+                dir_mounts.append(val)
 524
+        searchfile_node.close()
 525
+    except CalledProcessError as e:
 526
+        hookenv.log(e.output)
 527
+        return
 528
+    except FileNotFoundError:
 529
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not "
 530
+                    "installed yet.")
 531
+        return
 532
+    try:
 533
+        lxd_list = check_output(split('lxc list --format table --columns=n'))
 534
+        lxd_list = lxd_list.decode('utf-8')
 535
+        with open(charm_dir + '/lxd_temp_file', 'w+') as idfile:
 536
+            idfile.write(lxd_list)
 537
+        searchfile_lxd = open(charm_dir + "/lxd_temp_file", "r")
 538
+        for line1 in searchfile_lxd:
 539
+            line = re.sub(r"[|+\n]", "", line1)
 540
+            lxd_list_val.append(line)
 541
+        for x in lxd_list_val:
 542
+            if x and "-----" not in x:
 543
+                if 'NAME' not in x:
 544
+                    lxd_list_final.append(x)
 545
+        for lxdname in lxd_list_final:
 546
+            mount_bind_lxd = check_output(split('lxc config device list' +
 547
+                                                ' ' + lxdname))
 548
+            mount_bind_lxd = mount_bind_lxd.decode('utf-8')
 549
+            with open(charm_dir + '/lxdbind_temp_file', 'w+') as idfile:
 550
+                idfile.write(mount_bind_lxd)
 551
+            searchfile_lxd = open(charm_dir + "/lxdbind_temp_file", "r")
 552
+            for line in searchfile_lxd:
 553
+                binded_device = line.strip("\n")
 554
+                binded_device = binded_device.split(":")[0]
 555
+                if any(binded_device in s for s in dir_mounts):
 556
+                    try:
 557
+                        unbind_cmd = ('lxc config device remove ' +
 558
+                                      lxdname + ' ' + binded_device)
 559
+                        hookenv.log(check_output(split(unbind_cmd)))
 560
+                        # Set the lxc config parameter to False, before that
 561
+                        # check its already true or false
 562
+                        param_flag = check_lxc_securityparam(lxdname)
 563
+                        if param_flag is True:
 564
+                            lxc_cmd = ('lxc config unset ' + lxdname +
 565
+                                       ' security.privileged')
 566
+                            hookenv.log(check_output(split(lxc_cmd)))
 567
+                            hookenv.log("IBM SPECTRUM SCALE : Unsetting the "
 568
+                                        "security parameter and starting "
 569
+                                        "container again")
 570
+                            lxc_restartcmd = 'lxc restart ' + lxdname
 571
+                            hookenv.log(check_output(split(lxc_restartcmd)))
 572
+                            time.sleep(50)
 573
+                    except CalledProcessError as e:
 574
+                        hookenv.log("IBM SPECTRUM SCALE : Error while "
 575
+                                    "running unbind mount command.")
 576
+                        hookenv.log(e.output)
 577
+    except CalledProcessError as e:
 578
+        hookenv.log(e.output)
 579
+        hookenv.log("IBM SPECTRUM SCALE : Issue while running lxc "
 580
+                    "list command !!!!.")
 581
+    except OSError:
 582
+        hookenv.log("IBM SPECTRUM SCALE : Looks like no lxc "
 583
+                    "containers initialized yet !!!!.")
 584
+    except FileNotFoundError:
 585
+        hookenv.log("IBM SPECTRUM SCALE : File does not exist.")
 586
+
 587
+
 588
+@when_not('ibm-spectrum-scale-client.prereqs.installed')
 589
+def install_spectrum_scale_prereqs():
 590
+    """
 591
+    To install the pre-reqs and initial configuration before installation
 592
+    begins. Clearing out the states set and creation of temp files used
 593
+    during installation and configuration.
 594
+    """
 595
+
 596
+    ARCHITECTURE = check_platform_architecture()
 597
+    exists_host = False
 598
+    if (str(ARCHITECTURE) != "x86_64") and (str(ARCHITECTURE) != "ppc64le"):
 599
+        hookenv.log("IBM SPECTRUM SCALE: Unsupported platform. IBM Spectrum"
 600
+                    " Scale installed with this Charm supports only the"
 601
+                    " x86_64 platform and POWER LE (ppc64le) platforms.")
 602
+        hookenv.status_set('blocked', 'Unsupported Platform')
 603
+        return
 604
+    else:
 605
+        hookenv.log("IBM SPECTRUM SCALE : Pre-reqs will be installed")
 606
+        # install kernel prereq and other prereqs
 607
+        linux_headers = get_kernel_version()
 608
+        linux_headers_val = linux_headers.decode('ascii')
 609
+        fetch.apt_install(PREREQS)
 610
+        fetch.apt_install(linux_headers_val)
 611
+        filepath = "/etc/hosts"
 612
+        searchtext = str(CLIENT_IP_ADDRESS)+" "+str(CLIENT_HOSTNAME)
 613
+        searchfile = open(filepath, "r")
 614
+        for line in searchfile:
 615
+            if searchtext in line:
 616
+                exists_host = True
 617
+        searchfile.close()
 618
+        if exists_host is False:
 619
+            setadd_hostname(CLIENT_HOSTNAME, CLIENT_IP_ADDRESS)
 620
+        configure_ssh()
 621
+        create_ssh_keys()
 622
+
 623
+        remove_state('ibm-spectrum-scale-client.notify_master')
 624
+        remove_state('ibm-spectrum-scale-client.node.ready')
 625
+        set_state('ibm-spectrum-scale-client.prereqs.installed')
 626
+        os.chdir(charm_dir)
 627
+        try:
 628
+            open('gpfs-allmanagerunits_info.txt', 'w').close()
 629
+            open('cinder_hosts_file.txt', 'w').close()
 630
+        except OSError:
 631
+            pass
 632
+
 633
+
 634
+@when('ibm-spectrum-scale-client.prereqs.installed')
 635
+@when_not('ibm-spectrum-scale-client.installed')
 636
+def install_spectrum_scale():
 637
+    """
 638
+    Installing Spectrum Scale 4.2.2. Check that valid packages are present and
 639
+    only then proceed with the installation.
 640
+    """
 641
+
 642
+    hookenv.log('IBM SPECTRUM SCALE : Fetching the'
 643
+                ' ibm_spectrum_scale_installer_client resource', 'INFO')
 644
+    hookenv.status_set('active', 'fetching the'
 645
+                       ' ibm_spectrum_scale_installer_client resource')
 646
+    cfg_spectrum_scale_installer = (
 647
+        hookenv.resource_get('ibm_spectrum_scale_installer_client'))
 648
+    hookenv.status_set('active', 'Fetched'
 649
+                       ' ibm_spectrum_scale_installer_client resource')
 650
+
 651
+    # If we don't have a package, report blocked status; we can't proceed.
 652
+    if not cfg_spectrum_scale_installer:
 653
+        hookenv.log('IBM SPECTRUM SCALE : Missing IBM Spectrum Scale required'
 654
+                    ' resources', 'INFO')
 655
+        hookenv.status_set('blocked', 'SPECTRUM SCALE required'
 656
+                           ' packages are missing')
 657
+        return
 658
+
 659
+    chk_empty_pkg = ["file", cfg_spectrum_scale_installer]
 660
+    p = Popen(chk_empty_pkg, stdout=PIPE, stderr=PIPE, shell=False)
 661
+    output, err = p.communicate()
 662
+    spectrumscale_installer_msg = str(output)
 663
+    if ("empty" in spectrumscale_installer_msg):
 664
+        hookenv.log('IBM SPECTRUM SCALE : The required'
 665
+                    ' ibm_spectrum_scale_installer resource is '
 666
+                    'corrupt.', 'INFO')
 667
+        hookenv.status_set(
 668
+            'blocked', 'SPECTRUM SCALE required package is not correct/empty')
 669
+        return
 670
+    else:
 671
+        gpfs_downloadpath = os.path.dirname(cfg_spectrum_scale_installer)
 672
+        # Extract the installer contents if the Spectrum Scale installer
 673
+        # is present
 674
+        os.chdir(gpfs_downloadpath)
 675
+        archivelist = glob.glob("*.tar.gz")
 676
+        if archivelist:
 677
+            archive.extract(str(archivelist[0]), gpfs_downloadpath)
 678
+            hookenv.log("IBM SPECTRUM SCALE : Extraction of IBM Spectrum SCale"
 679
+                        " packages is successfull")
 680
+            gpfs_install_filename = glob.glob("Spectrum_*Linux-install")
 681
+            if gpfs_install_filename:
 682
+                check_call(['chmod', '0755', gpfs_downloadpath + "/" +
 683
+                           str(gpfs_install_filename[0])])
 684
+                install_cmd = ([gpfs_downloadpath + "/" +
 685
+                               str(gpfs_install_filename[0]),
 686
+                               '--text-only', '--silent'])
 687
+                try:
 688
+                    check_call(install_cmd, shell=False)
 689
+                    check_call('cd /usr/lpp/mmfs/4.2.*', shell=True)
 690
+                    check_call(CMD_DEB_INSTALL, shell=True)
 691
+                    # To build GPFS portability layer.
 692
+                    build_modules()
 693
+                    # Delete the install folder after install
 694
+                    os.chdir(SPECTRUM_SCALE_INSTALL_PATH)
 695
+                    gpfs_install_folder = glob.glob("4.2.*")
 696
+                    for val in gpfs_install_folder:
 697
+                        shutil.rmtree(val)
 698
+                    hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is"
 699
+                                " installed successfully")
 700
+                    set_state('ibm-spectrum-scale-client.installed')
 701
+                    hookenv.status_set('active', 'SPECTRUM SCALE is installed')
 702
+                except CalledProcessError as e:
 703
+                    hookenv.log("IBM SPECTRUM SCALE : Error while installing"
 704
+                                " Spectrum Scale, check the logs.")
 705
+                    hookenv.log(e.output)
 706
+                    hookenv.status_set('blocked', "SPECTRUM"
 707
+                                       " SCALE : Error while installing"
 708
+                                       " SPECTRUM SCALE")
 709
+                    return
 710
+            else:
 711
+                hookenv.log("IBM SPECTRUM SCALE: Unable to extract the "
 712
+                            "SPECTRUM SCALE packages. Verify whether "
 713
+                            "the package is corrupt or not")
 714
+                hookenv.status_set('blocked', 'IBM SPECTRUM SCALE package'
 715
+                                   ' is corrupt')
 716
+                return
 717
+
 718
+
 719
+@when('ibm-spectrum-scale-client.installed')
 720
+@when_not('ibm-spectrum-scale-client.updated')
 721
+def install_spectrum_scale_fixpack():
 722
+    """
 723
+    Installing Spectrum Scale 4.2.2 fixpack. Check if valid fixpack is
 724
+    present and only then proceed with installing the fix pack.
 725
+    """
 726
+
 727
+    # Get the fixpack resource
 728
+    hookenv.log('IBM SPECTRUM SCALE : Fetching the '
 729
+                'ibm_spectrum_scale_fixpack resource', 'INFO')
 730
+    hookenv.status_set('active', 'fetching the ibm_spectrum_scale_fixpack'
 731
+                       ' resource')
 732
+    cfg_ibm_spectrum_scale_fixpack = (hookenv.resource_get(
 733
+                                      'ibm_spectrum_scale_client_fixpack'))
 734
+    hookenv.status_set('active', 'fetched ibm_spectrum_scale_fixpack resource')
 735
+    # If we don't have a fixpack, just exit successfully; there's nothing
 736
+    # to do.
 737
+    if cfg_ibm_spectrum_scale_fixpack is False:
 738
+        hookenv.log('IBM SPECTRUM SCALE : No IBM Spectrum Scale fixpack'
 739
+                    ' to install', 'INFO')
 740
+        if not cluster_exists():
 741
+            hookenv.status_set('active', 'SPECTRUM SCALE is installed')
 742
+        elif cluster_exists():
 743
+            hookenv.status_set('active', 'Client node is ready')
 744
+    else:
 745
+        chk_empty_fixpack = ["file", cfg_ibm_spectrum_scale_fixpack]
 746
+        p = Popen(chk_empty_fixpack, stdout=PIPE, stderr=PIPE, shell=False)
 747
+        output, err = p.communicate()
 748
+        spectrumscale_fixpack_msg = str(output)
 749
+        if ("empty" in spectrumscale_fixpack_msg):
 750
+            hookenv.log('IBM SPECTRUM SCALE : The required '
 751
+                        'ibm_spectrum_scale_fixpack resource is'
 752
+                        ' corrupt.', 'INFO')
 753
+            if not cluster_exists():
 754
+                hookenv.status_set('active', 'SPECTRUM SCALE is installed')
 755
+                return
 756
+            elif cluster_exists():
 757
+                hookenv.status_set('active', 'Client node is ready')
 758
+                return
 759
+        else:
 760
+            upgrade_spectrumscale(cfg_ibm_spectrum_scale_fixpack)
 761
+
 762
+
 763
+@hook('upgrade-charm')
 764
+def check_fixpack():
 765
+    """
 766
+    The upgrade-charm hook will fire when a new resource is pushed for this
 767
+    charm. This is a good time to determine if we need to deal with a new
 768
+    fixpack.
 769
+    """
 770
+
 771
+    if not is_state('ibm-spectrum-scale-client.updated'):
 772
+        # If there is no prior fixpack installed, do nothing since
 773
+        # install_spectrum_scale_fixpack will handle that case.
 774
+        hookenv.log("IBM SPECTRUM SCALE : no fixpack has been"
 775
+                    " installed; nothing to upgrade.")
 776
+        return
 777
+    else:
 778
+        hookenv.log("IBM SPECTRUM SCALE : scanning for new fixpacks"
 779
+                    " to install")
 780
+        fixpack_dir = (
 781
+            charm_dir+"/../resources/ibm_spectrum_scale_fixpack/"
 782
+            "Spectrum_Scale_Standard_Fixpack.tar.gz")
 783
+        if os.path.exists(fixpack_dir):
 784
+            mdsum = ["md5sum", fixpack_dir]
 785
+            p = Popen(mdsum, stdout=PIPE, stderr=PIPE, shell=False)
 786
+            output, err = p.communicate()
 787
+            value = output.split()
 788
+            CUR_FP1_MD5 = str(value[0])
 789
+
 790
+            # Calling resource-get here will fetch the fixpack resource.
 791
+            new_fixpack = hookenv.resource_get('ibm_spectrum_scale_fixpack')
 792
+            if new_fixpack is False:
 793
+                hookenv.log("IBM SPECTRUM SCALE : No new fixpack to install")
 794
+            else:
 795
+                mdsum_new = ["md5sum", new_fixpack]
 796
+                p1 = Popen(mdsum_new, stdout=PIPE, stderr=PIPE, shell=False)
 797
+                output1, err = p1.communicate()
 798
+                value1 = output1.split()
 799
+                NEW_FP1_MD5 = str(value1[0])
 800
+                # If the sums don't match, we have a new fixpack. Configure
+                # states so we re-run install_spectrum_scale_fixpack().
 802
+                if CUR_FP1_MD5 != NEW_FP1_MD5:
 803
+                    hookenv.log("IBM SPECTRUM SCALE : new fixpack detected")
 804
+                    remove_state('ibm-spectrum-scale-client.updated')
 805
+                else:
 806
+                    hookenv.log("IBM SPECTRUM SCALE : no new fixpack"
 807
+                                " to install")
 808
+        else:
 809
+            hookenv.log("IBM SPECTRUM SCALE :  no new fixpack to install")
 810
+
 811
+
 812
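
For reference, the new-fixpack check above boils down to comparing two checksums; a minimal sketch of the same comparison using Python's hashlib instead of shelling out to the md5sum binary (the paths and helper name here are illustrative, not part of the charm):

```
import hashlib


def file_md5(path, chunk_size=1 << 20):
    """Return the hex MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, 'rb') as source:
        for chunk in iter(lambda: source.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()


# Illustrative comparison; current_fixpack and new_fixpack are placeholder
# paths, not variables from the charm.
current_fixpack = '/tmp/current_fixpack.tar.gz'
new_fixpack = '/tmp/new_fixpack.tar.gz'
if file_md5(current_fixpack) != file_md5(new_fixpack):
    print('new fixpack detected')
```
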
+@when_not('gpfsmanager.connected')
 813
+@when('ibm-spectrum-scale-client.installed')
 814
+@when_not('ibm-spectrum-scale-client.notify_master')
 815
+def notify_user():
 816
+    """
 817
+    Notify the user that Spectrum Scale is installed successfully and that
+    the client is waiting to be added to a Spectrum Scale cluster. Add a
+    relation between the client and the manager.
 820
+    """
 821
+
 822
+    hookenv.log("IBM SPECTRUM SCALE : Waiting for a relation to IBM"
 823
+                " Spectrum Scale Manager or Cluster is not created yet.")
 824
+    hookenv.status_set('blocked', "Waiting to be joined to manager/manager"
 825
+                       " not ready yet")
 826
+
 827
+
 828
+@when('quorum.connected')
 829
+@when('ibm-spectrum-scale-client.installed')
 830
+def send_details_peer(quorum):
 831
+    """
 832
+    If we have multiple client units, each client should be able to
+    communicate with the others; to enable that, pass along the connection
+    info details.
 835
+    """
 836
+
 837
+    hookenv.log("IBM SPECTRUM SCALE : Sending host, ssh details to peer unit")
 838
+    # send host details
 839
+    quorum.set_hostname_peer(CLIENT_HOSTNAME)
 840
+    # send the ssh details
 841
+    with open("/root/.ssh/id_rsa.pub", "r") as idfile:
 842
+        pubkey = idfile.read()
 843
+    # Send details to the peer node
 844
+    quorum.set_ssh_key(pubkey)
 845
+
 846
+
 847
+@when('quorum.available')
 848
+@when('ibm-spectrum-scale-client.installed')
 849
+def exchange_data_peers(quorum):
 850
+    """
 851
+    Get the connection info details for each peer unit connected.
 852
+    Add the hostname/ip and public ssh key info.
 853
+    """
 854
+
 855
+    peer_hostnames = quorum.get_hostname_peers()
 856
+    peer_ips = quorum.get_unitips()
 857
+    # Get the ssh key details, so that each peer can do ssh to each other
 858
+    pubkeys = quorum.get_pub_keys()
 859
+    for public_key in pubkeys:
 860
+        public_key = str(public_key)
 861
+        add_ssh_pubkey(public_key)
 862
+    for host_peer, ip_address_peer in zip(peer_hostnames, peer_ips):
 863
+        exists = False
 864
+        host_peer = str(host_peer)
 865
+        ip_address_peer = str(ip_address_peer)
 866
+        # Check whether the peer unit's host details already exist;
+        # if not, add them
 868
+        filepath_hostfile = "/etc/hosts"
 869
+        searchtext = str(ip_address_peer)+" "+str(host_peer)
 870
+        searchfile = open(filepath_hostfile, "r")
 871
+        for line in searchfile:
 872
+            if searchtext in line:
 873
+                exists = True
 874
+        searchfile.close()
 875
+        if exists is False:
 876
+            setadd_hostname(host_peer, ip_address_peer)
 877
+
 878
+
 879
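
The /etc/hosts scan above (and the similar scans in the manager and driver handlers further down) all follow the same pattern; a minimal sketch of that pattern as a stand-alone helper (the helper itself is illustrative and not defined in the charm):

```
def host_entry_exists(ip_address, hostname, hosts_file='/etc/hosts'):
    """Return True if an '<ip> <hostname>' entry is already present."""
    entry = '{} {}'.format(ip_address, hostname)
    with open(hosts_file, 'r') as hosts:
        return any(entry in line for line in hosts)


# Illustrative use with made-up values; the charm's setadd_hostname() would
# be called only when the entry is missing.
if not host_entry_exists('172.31.0.10', 'scale-peer-1'):
    pass  # setadd_hostname('scale-peer-1', '172.31.0.10')
```
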
+@when('gpfsmanager.connected')
 880
+@when('ibm-spectrum-scale-client.notify_master')
 881
+def send_details_manager(gpfsmanager):
 882
+    """
 883
+    Forward the connection info details to the manager unit. This info should
 884
+    be sent once the ssh key is added to the client. Once client is ready,
 885
+    send the client connection info to manager, so that manager can now add
 886
+    this client to the gpfs cluster.
 887
+    """
 888
+
 889
+    # Send details to the gpfs manager
 890
+    hookenv.log("IBM SPECTRUM SCALE : Sending the hostname of client to"
 891
+                " Spectrum Scale Manager : %s " % CLIENT_HOSTNAME)
 892
+    hookenv.log("Inside function to pass hostname to the manager")
 893
+    gpfsmanager.set_hostname(CLIENT_HOSTNAME)
 894
+    with open("/root/.ssh/id_rsa.pub", "r") as idfile:
 895
+        pubkey = idfile.read()
 896
+    gpfsmanager.set_ssh_key(pubkey)
 897
+
 898
+
 899
+@when('quorum.departing')
 900
+def get_details_departedmgrpeer(quorum):
 901
+    hookenv.log("IBM SPECTRUM SCALE : Peer client unit is departing")
 902
+    try:
 903
+        quorum.dismiss_departed()
 904
+    except AttributeError:
 905
+        hookenv.log("IBM SPECTRUM SCALE : No peer client units to depart")
 906
+
 907
+
 908
+@when('gpfsmanager.ready')
 909
+@when('ibm-spectrum-scale-client.installed')
 910
+def get_details_manager(gpfsmanager):
 911
+    """
 912
+    Get the connection info (hostname/ip and ssh keys) from the manager
+    units. Set the state 'ibm-spectrum-scale-client.notify_master' once the
+    connection info is successfully added; the client can then send its own
+    connection info to the manager so that it is added to the cluster.
 916
+    """
 917
+
 918
+    hostname_managers = gpfsmanager.get_hostnames()
 919
+    ip_managers = gpfsmanager.get_ips()
 920
+    for host_manager, ip_address_manager in zip(hostname_managers,
 921
+                                                ip_managers):
 922
+        exists = False
 923
+        host_manager = str(host_manager)
 924
+        ip_address_manager = str(ip_address_manager)
 925
+
 926
+        # Check whether the manager host details are already in /etc/hosts;
+        # if not, add them
 928
+        filepath = "/etc/hosts"
 929
+        searchtext = str(ip_address_manager)+" "+str(host_manager)
 930
+        searchfile = open(filepath, "r")
 931
+        for line in searchfile:
 932
+            if searchtext in line:
 933
+                exists = True
 934
+        searchfile.close()
 935
+        if exists is False:
 936
+            setadd_hostname(host_manager, ip_address_manager)
 937
+    privkeys = gpfsmanager.get_priv_keys()
 938
+    pubkeys = gpfsmanager.get_pub_keys()
 939
+    for private_key, public_key in zip(privkeys, pubkeys):
 940
+        private_key = str(private_key)
 941
+        public_key = str(public_key)
 942
+        # Add the ssh keys of the master nodes
 943
+        add_ssh_pubkey(public_key)
 944
+        add_ssh_privkey(private_key)
 945
+    set_state('ibm-spectrum-scale-client.notify_master')
 946
+
 947
+
 948
+@when('ibm-spectrum-scale-client.notify_master')
 949
+@when('gpfsmanager.client-ready')
 950
+@when_not('ibm-spectrum-scale-client.node.ready')
 951
+def notify_status_client_node_ready(gpfsmanager):
 952
+    """
 953
+    Notify the client that it has been added to the cluster once the client
+    node status is active. Inform the user that the client is active and was
+    added to the cluster successfully.
 956
+    """
 957
+
 958
+    try:
 959
+        time.sleep(25)
 960
+        output = check_output(split('mmgetstate'))
 961
+        output = output.decode("utf-8")
 962
+        node_status = "active"
 963
+        s = re.search(CLIENT_HOSTNAME, output)
 964
+        n = re.search(node_status, output)
 965
+        if s and n:
 966
+            if gpfs_filesystem_exists():
 967
+                try:
 968
+                    hookenv.log(check_output(split('mmmount all -a')))
 969
+                except CalledProcessError:
 970
+                    hookenv.log("IBM SPECTRUM SCALE : Issue while mounting")
 971
+            set_state('ibm-spectrum-scale-client.node.ready')
 972
+            hookenv.log("IBM SPECTRUM SCALE : Client node is ready")
 973
+            hookenv.status_set('active', 'Client node is ready')
 974
+    except CalledProcessError:
 975
+        hookenv.log("IBM SPECTRUM SCALE : Client node is not active, some"
 976
+                    " issue might have occured, please check the logs")
 977
+    except FileNotFoundError:
 978
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed "
 979
+                    "yet.")
 980
+
 981
+
 982
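
Note that the two independent re.search() calls above can also succeed when a different node happens to be the active one; a stricter (hypothetical) check would look only at the row for this client's hostname, for example:

```
import re


def node_is_active(mmgetstate_output, hostname):
    """Return True only if the row for `hostname` reports the active state."""
    for line in mmgetstate_output.splitlines():
        if re.search(r'\b{}\b'.format(re.escape(hostname)), line):
            return 'active' in line
    return False
```
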
+@when_not('gpfsmanager.ready')
 983
+@when('ibm-spectrum-scale-client.node.ready')
 984
+def depart_clientnode():
 985
+    """
 986
+    When the relation between the client and the manager is removed, the
+    client node is removed from the cluster. If any containers on the host
+    machine have bound resources (i.e. a relation is established between the
+    client and the cinder/glance driver), those are unbound before the
+    client is removed from the cluster.
 991
+    """
 992
+
 993
+    # Check the node which is getting departed
 994
+    if not node_exists(CLIENT_HOSTNAME):
 995
+        hookenv.log("IBM SPECTRUM SCALE : Node is not part of any cluster")
 996
+        return
 997
+    wait = 0
 998
+    hookenv.log("IBM SPECTRUM SCALE : Client node is getting removed "
 999
+                "from Cluster")
1000
+    # Check whether relation between cinder/glance driver is established.
1001
+    # If yes, then unbind the devices first, before shutdown the client node.
1002
+    if is_state('client.connected'):
1003
+        hookenv.log("IBM SPECTRUM SCALE : Remove any binded devices before "
1004
+                    "client get removed from cluster")
1005
+        chk_bind_remove_devices()
1006
+    try:
1007
+        hookenv.log(check_output(split('mmshutdown -N %s' % CLIENT_HOSTNAME)))
1008
+    except CalledProcessError as e:
1009
+        hookenv.log("IBM SPECTRUM SCALE : Issue while removing the client"
1010
+                    " node from the cluster, check the logs for more details")
1011
+        hookenv.log(e.output)
1012
+    while node_exists(CLIENT_HOSTNAME):
1013
+        wait = wait + 1
1014
+        time.sleep(30)
1015
+        if not node_exists(CLIENT_HOSTNAME):
1016
+            hookenv.log("IBM SPECTRUM SCALE : Client node is removed from "
1017
+                        " the cluster")
1018
+            hookenv.status_set('blocked', 'node removed from the cluster')
1019
+            break
1020
+        elif wait > 100:
+            hookenv.status_set('blocked', 'IBM SPECTRUM SCALE: Client node'
+                               ' could not be removed from the cluster.')
+            hookenv.log("IBM SPECTRUM SCALE : Timed out while removing the"
+                        " client node from the cluster")
1025
+            break
1026
+    remove_state('ibm-spectrum-scale-client.node.ready')
1027
+    remove_state('ibm-spectrum-scale-client.notify_master')
1028
+
1029
+
1030
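
The removal loop above polls in 30-second steps with a counter; the same bounded-wait pattern can be written as a small generic helper, sketched here with illustrative timings (not part of the charm):

```
import time


def wait_until(predicate, timeout=3000, interval=30):
    """Poll predicate() until it returns True or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False


# Illustrative use with the charm's node_exists() helper:
# removed = wait_until(lambda: not node_exists(CLIENT_HOSTNAME))
```
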
+@when('client.connected')
1031
+@when('ibm-spectrum-scale-client.installed')
1032
+def send_client_details(client):
1033
+    """
1034
+    Forwarding the connection info to cinder/any charm connecting to client
1035
+    """
1036
+
1037
+    # Send details to the cinder gpfs driver
1038
+    hookenv.log("IBM SPECTRUM SCALE : Public IP Address of Spectrum Scale"
1039
+                " Client is : %s " % CLIENT_PUBLIC_IP_ADDRESS)
1040
+    client.set_publicip(CLIENT_PUBLIC_IP_ADDRESS)
1041
+    client.set_hostname(CLIENT_HOSTNAME)
1042
+    client.set_ip(CLIENT_IP_ADDRESS)
1043
+
1044
+
1045
+@when('client.ready')
1046
+def spectrumscale_getkey(client):
1047
+    """
1048
+    When the relation between the client and the cinder/glance driver is
+    established, add the cinder/glance host configuration details to the
+    client. If Cinder/Glance is deployed in a LXC/LXD container, the client
+    will bind-mount the GPFS mount point into that container.
1052
+    """
1053
+
1054
+    pubkey_driver = hookenv.relation_get('driver_public_key')
1055
+    pubkey_driver = str(pubkey_driver)
1056
+    driver_hostname = hookenv.relation_get('driver_hostname')
1057
+    driver_hostname = str(driver_hostname)
1058
+    driver_ip = hookenv.relation_get('driver_private_ip_address')
1059
+    driver_ip = str(driver_ip)
1060
+    hook_compare = 'relation-departed'
1061
+    current_hook = hookenv.hook_name()
1062
+    if str(pubkey_driver) != 'None':
1063
+        # Add the ssh keys of the cinder/glance driver machine
1064
+        add_ssh_pubkey(pubkey_driver)
1065
+    if str(driver_hostname) != 'None' and str(driver_ip) != 'None':
1066
+        # Check whether the host details are added in /etc/hosts or not
1067
+        filepath = "/etc/hosts"
1068
+        exists_driver = False
1069
+        searchtext = str(driver_ip)+" "+str(driver_hostname)
1070
+        searchfile = open(filepath, "r")
1071
+        for line in searchfile:
1072
+            if searchtext in line:
1073
+                exists_driver = True
1074
+        searchfile.close()
1075
+        if exists_driver is False:
1076
+            setadd_hostname(driver_hostname, driver_ip)
1077
+
1078
+        # Only if the client is ready, i.e. part of the Spectrum Scale
+        # cluster, should the bind-mount steps below be executed.
1080
+
1081
+        hookenv.status_set('active', 'client is ready to be used.')
1082
+        if hook_compare in current_hook:
1083
+            return
1084
+        bind_mount(driver_hostname)
1085
+
1086
+
1087
+@when('client.departing')
1088
+def spectrumscale_departing(client):
1089
+    """
1090
+    When the relation between the client and the cinder/glance driver is
+    removed, check for containers that run cinder/glance and have bound
+    filesystems, and unbind them since the relation no longer exists.
1093
+    """
1094
+
1095
+    driver_hostname = hookenv.relation_get('driver_hostname')
1096
+    driver_hostname = str(driver_hostname)
1097
+    hookenv.log("IBM SPECTRUM SCALE : cinder/glance LXD on which gpfs devices"
1098
+                " will be unbinded is : %s" % driver_hostname)
1099
+    # Only if the client node's lxc list output contains the lxc container
+    # on which cinder or glance is installed, proceed with the unbind steps.
1101
+    lxc_flag = check_lxd_name(driver_hostname)
1102
+    if lxc_flag is True:
1103
+        try:
1104
+            dir_list = check_output(split('mmlsfs all -T'))
1105
+            dir_list = dir_list.decode('utf-8')
1106
+            with open(charm_dir + "/dir_gpfstemp_file", 'w+') as idfile:
1107
+                idfile.write(dir_list)
1108
+            searchfile_node = open(charm_dir + "/dir_gpfstemp_file", "r")
1109
+            for line in searchfile_node:
1110
+                if "-T " in line:
1111
+                    val = line.split()[1]
1112
+                    val = re.sub(r'/', "", val)
1113
+                    dir_mounts.append(val)
1114
+            searchfile_node.close()
1115
+        except CalledProcessError as e:
1116
+            hookenv.log(e.output)
1117
+            return
1118
+        except FileNotFoundError:
1119
+            hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not "
1120
+                        "installed yet.")
1121
+            return
1122
+        try:
1123
+            mount_bind_lxd = check_output(split('lxc config device list' +
1124
+                                          ' ' + driver_hostname))
1125
+            mount_bind_lxd = mount_bind_lxd.decode('utf-8')
1126
+            with open(charm_dir + '/lxdbind_temp_file', 'w+') as idfile:
1127
+                idfile.write(mount_bind_lxd)
1128
+            searchfile_lxd = open(charm_dir + "/lxdbind_temp_file", "r")
1129
+            for line in searchfile_lxd:
1130
+                binded_device = line.strip("\n")
1131
+                binded_device = binded_device.split(":")[0]
1132
+                if any(binded_device in s for s in dir_mounts):
1133
+                    unbind_cmd = ('lxc config device remove ' +
1134
+                                  driver_hostname + ' ' + binded_device)
1135
+                    hookenv.log(check_output(split(unbind_cmd)))
1136
+            searchfile_lxd.close()
1137
+        except CalledProcessError:
1138
+            hookenv.log("IBM SPECTRUM SCALE : No devices for this lxd, "
1139
+                        "nothing to unbind. or unbinding failed")
1140
+        # Unset the lxc security.privileged config parameter
1141
+        lxc_cmd = ('lxc config unset ' + driver_hostname +
1142
+                   ' security.privileged')
1143
+        hookenv.log(check_output(split(lxc_cmd)))
1144
+        hookenv.log("IBM SPECTRUM SCALE : Unsetting security parameter "
1145
+                    "and starting container again")
1146
+        lxc_restartcmd = 'lxc restart ' + driver_hostname
1147
+        hookenv.log(check_output(split(lxc_restartcmd)))
1148
+        time.sleep(50)
1149
+    try:
1150
+        client.dismiss()
1151
+    except AttributeError:
1152
+        hookenv.log("IBM SPECTRUM SCALE : No more units to depart")
1153
+    hookenv.status_set('active', 'Client node ready')
1154
+
1155
+
1156
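
As a side note, the device listing above does not strictly need the intermediate temp file; a minimal sketch that iterates over the command output directly, mirroring the charm's own split-on-':' parsing (the container name is illustrative):

```
from shlex import split
from subprocess import check_output


def lxd_device_names(container):
    """Return the device names configured on an LXD container."""
    output = check_output(split('lxc config device list ' + container))
    return [line.split(':')[0].strip()
            for line in output.decode('utf-8').splitlines() if line.strip()]


# Illustrative use; 'juju-cinder-0' is a placeholder container name.
# for device in lxd_device_names('juju-cinder-0'):
#     decide whether the device matches a GPFS mount and remove it
```
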
+@hook('stop')
1157
+def uninstall_spectrum_scale():
1158
+    """
1159
+    Uninstall the Spectrum Scale deb packages for a cleaner removal when the
+    client is removed. Delete the temp files and other configuration files
+    created during install and configure.
1162
+    """
1163
+    try:
1164
+        check_call(CMD_DEB_UNINSTALL, shell=True)
1165
+    except CalledProcessError as e:
1166
+        hookenv.log("IBM SPECTRUM SCALE : Error while uninstalling scale "
1167
+                    "packages.")
1168
+        hookenv.log(e.output)
1169
+    # Remove each path independently so that a single failure (or a path
+    # that is a regular file rather than a directory) does not skip the
+    # rest of the cleanup.
+    for path in ['/var/mmfs', '/usr/lpp/mmfs', '/var/adm/ras', '/tmp/mmfs',
+                 '/root/.ssh/id_rsa.pub_orig', '/root/.ssh/id_rsa_temp_keys',
+                 charm_dir + '/dir_gpfstemp_file',
+                 charm_dir + '/lxd_temp_file',
+                 charm_dir + '/lxdbind_temp_file',
+                 charm_dir + '/dir_temp_file']:
+        try:
+            if os.path.isdir(path):
+                shutil.rmtree(path)
+            else:
+                os.remove(path)
+        except OSError:
+            pass
1183
+    remove_state('ibm-spectrum-scale-client.installed')

requirements.txt

1
--- 
2
+++ requirements.txt
3
@@ -0,0 +1,2 @@
4
+flake8
5
+pytest

revision

1
--- 
2
+++ revision
3
@@ -0,0 +1 @@
4
+0

tests/README.md

 1
--- 
 2
+++ tests/README.md
 3
@@ -0,0 +1,9 @@
 4
+# Overview
 5
+
 6
+This directory provides Amulet tests to verify basic deployment functionality
 7
+from the perspective of this charm, its requirements and its features, as
 8
+exercised in a subset of the full OpenStack deployment test bundle topology.
 9
+
10
+For full details on functional testing of OpenStack charms please refer to
11
+the [functional testing](http://docs.openstack.org/developer/charm-guide/testing.html#functional-testing)
12
+section of the OpenStack Charm Guide.

tests/basic_deployment.py

  1
--- 
  2
+++ tests/basic_deployment.py
  3
@@ -0,0 +1,168 @@
  4
+#!/usr/bin/env python
  5
+#
  6
+# Copyright 2016 Canonical Ltd
  7
+#
  8
+# Licensed under the Apache License, Version 2.0 (the "License");
  9
+# you may not use this file except in compliance with the License.
 10
+# You may obtain a copy of the License at
 11
+#
 12
+#  http://www.apache.org/licenses/LICENSE-2.0
 13
+#
 14
+# Unless required by applicable law or agreed to in writing, software
 15
+# distributed under the License is distributed on an "AS IS" BASIS,
 16
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 17
+# See the License for the specific language governing permissions and
 18
+# limitations under the License.
 19
+
 20
+"""
 21
+Basic scale client - cinder-spectrumscale functional test.
 22
+"""
 23
+
 24
+import amulet
 25
+
 26
+from charmhelpers.contrib.openstack.amulet.deployment import (
 27
+    OpenStackAmuletDeployment
 28
+)
 29
+
 30
+
 31
+seconds_to_wait = 5000
 32
+
 33
+
 34
+class ClientCinderSpectrumBasicDeployment(OpenStackAmuletDeployment):
 35
+    """Amulet tests on a basic heat deployment."""
 36
+
 37
+    def __init__(self, series=None, openstack=None, source=None, git=False,
 38
+                 stable=True):
 39
+        """Deploy the entire test environment."""
 40
+        super(ClientCinderSpectrumBasicDeployment, self).__init__(series,
 41
+                                                                  openstack,
 42
+                                                                  source,
 43
+                                                                  stable)
 44
+        self.git = git
 45
+        self._add_services()
 46
+        self._add_relations()
 47
+        self._configure_services()
 48
+        self._deploy()
 49
+
 50
+        exclude_services = []
 51
+        # Wait for deployment ready msgs, except exclusions
 52
+        self._auto_wait_for_status(exclude_services=exclude_services)
 53
+
 54
+        self._check_cinderconffile()
 55
+        self.d.sentry.wait(seconds_to_wait)
 56
+
 57
+    def _add_services(self):
 58
+        """Add the services that we're testing, where ibm spectrum scale
 59
+        client is local, and the rest of the services are from lp branches
 60
+        that are compatible with the local charm (e.g. stable or next).
 61
+        """
 62
+        # Note : spectrum-scale-client becomes a ubuntu subordinate unit.
 63
+        # Note: cinder-spectrumscale becomes a cinder subordinate unit.
 64
+        this_service = {
+            'name': 'ibm-spectrum-scale-client',
+            'location': 'cs:~ibmcharmers/ibm-spectrum-scale-client-8'}
 66
+        other_services = [
 67
+            {'name': 'percona-cluster', 'constraints': {'mem': '3072M'}},
 68
+            {'name': 'keystone'},
 69
+            {'name': 'rabbitmq-server'},
 70
+            {'name': 'cinder'},
 71
+            {'name': 'ubuntu'},
 72
+            {'name': 'ibm-cinder-spectrumscale',
+             'location': 'cs:~ibmcharmers/ibm-cinder-spectrumscale-6'},
 74
+            {'name': 'ibm-spectrum-scale-manager',
+             'location': 'cs:~ibmcharmers/ibm-spectrum-scale-manager-8',
+             'units': 2},
 76
+        ]
 77
+        super(ClientCinderSpectrumBasicDeployment, self)._add_services(
 78
+            this_service, other_services)
 79
+
 80
+    def _add_relations(self):
 81
+        """Add all of the relations for the services."""
 82
+
 83
+        relations = {
 84
+            'cinder:storage-backend':
 85
+            'ibm-cinder-spectrumscale:storage-backend',
 86
+            'keystone:shared-db': 'percona-cluster:shared-db',
 87
+            'cinder:shared-db': 'percona-cluster:shared-db',
 88
+            'cinder:identity-service': 'keystone:identity-service',
 89
+            'cinder:amqp': 'rabbitmq-server:amqp',
 90
+            'ubuntu:juju-info': 'ibm-spectrum-scale-client:juju-info',
 91
+            'ibm-spectrum-scale-manager:gpfsmanager':
 92
+            'ibm-spectrum-scale-client:gpfsmanager',
 93
+            'ibm-spectrum-scale-client:client':
 94
+            'ibm-cinder-spectrumscale:spectrumscale',
 95
+        }
 96
+        super(ClientCinderSpectrumBasicDeployment, self)._add_relations(
 97
+            relations)
 98
+
 99
+    def _configure_services(self):
100
+        """Configure all of the services."""
101
+        keystone_config = {
102
+            'admin-password': 'openstack',
103
+            'admin-token': 'ubuntutesting'
104
+        }
105
+        pxc_config = {
106
+            'dataset-size': '25%',
107
+            'max-connections': 1000,
108
+            'root-password': 'ChangeMe123',
109
+            'sst-password': 'ChangeMe123',
110
+        }
111
+        cinder_config = {
112
+            'block-device': 'None',
113
+            'glance-api-version': '2'
114
+        }
115
+
116
+        cinder_spectrumscale_config = {
117
+            'gpfs_mount_point_base': '/ibm/gpfs0/openstack/cinder/volumes/',
118
+        }
119
+
120
+        configs = {
121
+            'keystone': keystone_config,
122
+            'percona-cluster': pxc_config,
123
+            'cinder': cinder_config,
124
+            'ibm-cinder-spectrumscale': cinder_spectrumscale_config,
125
+        }
126
+        super(ClientCinderSpectrumBasicDeployment, self)._configure_services(
127
+            configs)
128
+
129
+    def _check_cinderconffile(self):
130
+        searchtext = "[ibm-cinder-spectrumscale]"
131
+        # To check whether cinder.conf file is updated with spectrum scale
132
+        # details
133
+        unit = self.d.sentry['ibm-cinder-spectrumscale'][0]
134
+        output, code = unit.run("grep '\[ibm-cinder-spectrumscale\]' "
+                                "/etc/cinder/cinder.conf")
+        if searchtext in output:
137
+            self.log.debug('Spectrum scale conf in Cinder conf file ... OK')
138
+        else:
139
+            self.log.debug('Spectrum scale conf in Cinder conf file ...Failed')
140
+        hostname, code = unit.run('hostname')
141
+
142
+        unit_client = self.d.sentry['ibm-spectrum-scale-client'][0]
143
+        host_name, code = unit_client.run("grep %s /etc/hosts | cut -d' ' -f2"
144
+                                          % hostname)
145
+
146
+        # To check whether host name of driver is added to /etc/hosts file of
147
+        # client. This is to test the interface between driver and client
148
+        if str(host_name) == str(hostname):
149
+            self.log.debug('Host name of driver in client /etc/hosts file \
150
+                           ... OK')
151
+        else:
152
+            self.log.debug('Host name of driver in client /etc/hosts file \
153
+                           ... Failed')
154
+
155
+        # To check whether SSH works between driver and client machine.
156
+        chostname, code = unit_client.run('hostname')
157
+        output, code = unit.run("ssh -o StrictHostKeyChecking=no root@%s \
158
+                                'cd /etc; ls hosts'" % chostname)
159
+        if str(output) == "hosts":
160
+            self.log.debug('SSH to client from driver  ... OK')
161
+        else:
162
+            self.log.debug('SSH to client from driver ... Failed')
163
+
164
+        cmd1, code = unit_client.run("/usr/lpp/mmfs/bin/mmlscluster")
165
+        if code != 0:
166
+            message = ('mmlscluster command failed to run, the cluster may'
+                       ' be down.')
168
+            amulet.raise_status(amulet.FAIL, msg=message)
169
+        self.log.debug('The output of running mmlscluster command is \n')
170
+        print(str(cmd1))
171
+        self.log.debug('\nCompleted the Tests !\n')
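
For context, OpenStack-style Amulet suites are normally driven by a small gate script that instantiates the deployment class above and runs its tests; a hypothetical example (the file name and series are assumptions, such a script is not part of this diff):

```
#!/usr/bin/env python
# Hypothetical gate script, e.g. tests/gate-basic-xenial (not in this diff).
from basic_deployment import ClientCinderSpectrumBasicDeployment

if __name__ == '__main__':
    deployment = ClientCinderSpectrumBasicDeployment(series='xenial')
    deployment.run_tests()
```
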

tests/charmhelpers/__init__.py

 1
--- 
 2
+++ tests/charmhelpers/__init__.py
 3
@@ -0,0 +1,36 @@
 4
+# Copyright 2014-2015 Canonical Limited.
 5
+#
 6
+# Licensed under the Apache License, Version 2.0 (the "License");
 7
+# you may not use this file except in compliance with the License.
 8
+# You may obtain a copy of the License at
 9
+#
10
+#  http://www.apache.org/licenses/LICENSE-2.0
11
+#
12
+# Unless required by applicable law or agreed to in writing, software
13
+# distributed under the License is distributed on an "AS IS" BASIS,
14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+# See the License for the specific language governing permissions and
16
+# limitations under the License.
17
+
18
+# Bootstrap charm-helpers, installing its dependencies if necessary using
19
+# only standard libraries.
20
+import subprocess
21
+import sys
22
+
23
+try:
24
+    import six  # flake8: noqa
25
+except ImportError:
26
+    if sys.version_info.major == 2:
27
+        subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
28
+    else:
29
+        subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
30
+    import six  # flake8: noqa
31
+
32
+try:
33
+    import yaml  # flake8: noqa
34
+except ImportError:
35
+    if sys.version_info.major == 2:
36
+        subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
37
+    else:
38
+        subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
39
+    import yaml  # flake8: noqa

tests/charmhelpers/contrib/__init__.py

 1
--- 
 2
+++ tests/charmhelpers/contrib/__init__.py
 3
@@ -0,0 +1,13 @@
 4
+# Copyright 2014-2015 Canonical Limited.
 5
+#
 6
+# Licensed under the Apache License, Version 2.0 (the "License");
 7
+# you may not use this file except in compliance with the License.
 8
+# You may obtain a copy of the License at
 9
+#
10
+#  http://www.apache.org/licenses/LICENSE-2.0
11
+#
12
+# Unless required by applicable law or agreed to in writing, software
13
+# distributed under the License is distributed on an "AS IS" BASIS,
14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+# See the License for the specific language governing permissions and
16
+# limitations under the License.

tests/charmhelpers/contrib/amulet/__init__.py

 1
--- 
 2
+++ tests/charmhelpers/contrib/amulet/__init__.py
 3
@@ -0,0 +1,13 @@
 4
+# Copyright 2014-2015 Canonical Limited.
 5
+#
 6
+# Licensed under the Apache License, Version 2.0 (the "License");
 7
+# you may not use this file except in compliance with the License.
 8
+# You may obtain a copy of the License at
 9
+#
10
+#  http://www.apache.org/licenses/LICENSE-2.0
11
+#
12
+# Unless required by applicable law or agreed to in writing, software
13
+# distributed under the License is distributed on an "AS IS" BASIS,
14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+# See the License for the specific language governing permissions and
16
+# limitations under the License.

tests/charmhelpers/contrib/amulet/deployment.py

  1
--- 
  2
+++ tests/charmhelpers/contrib/amulet/deployment.py
  3
@@ -0,0 +1,97 @@
  4
+# Copyright 2014-2015 Canonical Limited.
  5
+#
  6
+# Licensed under the Apache License, Version 2.0 (the "License");
  7
+# you may not use this file except in compliance with the License.
  8
+# You may obtain a copy of the License at
  9
+#
 10
+#  http://www.apache.org/licenses/LICENSE-2.0
 11
+#
 12
+# Unless required by applicable law or agreed to in writing, software
 13
+# distributed under the License is distributed on an "AS IS" BASIS,
 14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 15
+# See the License for the specific language governing permissions and
 16
+# limitations under the License.
 17
+
 18
+import amulet
 19
+import os
 20
+import six
 21
+
 22
+
 23
+class AmuletDeployment(object):
 24
+    """Amulet deployment.
 25
+
 26
+       This class provides generic Amulet deployment and test runner
 27
+       methods.
 28
+       """
 29
+
 30
+    def __init__(self, series=None):
 31
+        """Initialize the deployment environment."""
 32
+        self.series = None
 33
+
 34
+        if series:
 35
+            self.series = series
 36
+            self.d = amulet.Deployment(series=self.series)
 37
+        else:
 38
+            self.d = amulet.Deployment()
 39
+
 40
+    def _add_services(self, this_service, other_services):
 41
+        """Add services.
 42
+
 43
+           Add services to the deployment where this_service is the local charm
 44
+           that we're testing and other_services are the other services that
 45
+           are being used in the local amulet tests.
 46
+           """
 47
+        if this_service['name'] != os.path.basename(os.getcwd()):
 48
+            s = this_service['name']
 49
+            msg = "The charm's root directory name needs to be {}".format(s)
 50
+            amulet.raise_status(amulet.FAIL, msg=msg)
 51
+
 52
+        if 'units' not in this_service:
 53
+            this_service['units'] = 1
 54
+
 55
+        self.d.add(this_service['name'], units=this_service['units'],
 56
+                   constraints=this_service.get('constraints'))
 57
+
 58
+        for svc in other_services:
 59
+            if 'location' in svc:
 60
+                branch_location = svc['location']
 61
+            elif self.series:
 62
+                branch_location = 'cs:{}/{}'.format(self.series, svc['name']),
 63
+            else:
 64
+                branch_location = None
 65
+
 66
+            if 'units' not in svc:
 67
+                svc['units'] = 1
 68
+
 69
+            self.d.add(svc['name'], charm=branch_location, units=svc['units'],
 70
+                       constraints=svc.get('constraints'))
 71
+
 72
+    def _add_relations(self, relations):
 73
+        """Add all of the relations for the services."""
 74
+        for k, v in six.iteritems(relations):
 75
+            self.d.relate(k, v)
 76
+
 77
+    def _configure_services(self, configs):
 78
+        """Configure all of the services."""
 79
+        for service, config in six.iteritems(configs):
 80
+            self.d.configure(service, config)
 81
+
 82
+    def _deploy(self):
 83
+        """Deploy environment and wait for all hooks to finish executing."""
 84
+        timeout = int(os.environ.get('AMULET_SETUP_TIMEOUT', 900))
 85
+        try:
 86
+            self.d.setup(timeout=timeout)
 87
+            self.d.sentry.wait(timeout=timeout)
 88
+        except amulet.helpers.TimeoutError:
 89
+            amulet.raise_status(
 90
+                amulet.FAIL,
 91
+                msg="Deployment timed out ({}s)".format(timeout)
 92
+            )
 93
+        except Exception:
 94
+            raise
 95
+
 96
+    def run_tests(self):
 97
+        """Run all of the methods that are prefixed with 'test_'."""
 98
+        for test in dir(self):
 99
+            if test.startswith('test_'):
100
+                getattr(self, test)()

tests/charmhelpers/contrib/amulet/utils.py

  1
--- 
  2
+++ tests/charmhelpers/contrib/amulet/utils.py
  3
@@ -0,0 +1,827 @@
  4
+# Copyright 2014-2015 Canonical Limited.
  5
+#
  6
+# Licensed under the Apache License, Version 2.0 (the "License");
  7
+# you may not use this file except in compliance with the License.
  8
+# You may obtain a copy of the License at
  9
+#
 10
+#  http://www.apache.org/licenses/LICENSE-2.0
 11
+#
 12
+# Unless required by applicable law or agreed to in writing, software
 13
+# distributed under the License is distributed on an "AS IS" BASIS,
 14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 15
+# See the License for the specific language governing permissions and
 16
+# limitations under the License.
 17
+
 18
+import io
 19
+import json
 20
+import logging
 21
+import os
 22
+import re
 23
+import socket
 24
+import subprocess
 25
+import sys
 26
+import time
 27
+import uuid
 28
+
 29
+import amulet
 30
+import distro_info
 31
+import six
 32
+from six.moves import configparser
 33
+if six.PY3:
 34
+    from urllib import parse as urlparse
 35
+else:
 36
+    import urlparse
 37
+
 38
+
 39
+class AmuletUtils(object):
 40
+    """Amulet utilities.
 41
+
 42
+       This class provides common utility functions that are used by Amulet
 43
+       tests.
 44
+       """
 45
+
 46
+    def __init__(self, log_level=logging.ERROR):
 47
+        self.log = self.get_logger(level=log_level)
 48
+        self.ubuntu_releases = self.get_ubuntu_releases()
 49
+
 50
+    def get_logger(self, name="amulet-logger", level=logging.DEBUG):
 51
+        """Get a logger object that will log to stdout."""
 52
+        log = logging
 53
+        logger = log.getLogger(name)
 54
+        fmt = log.Formatter("%(asctime)s %(funcName)s "
 55
+                            "%(levelname)s: %(message)s")
 56
+
 57
+        handler = log.StreamHandler(stream=sys.stdout)
 58
+        handler.setLevel(level)
 59
+        handler.setFormatter(fmt)
 60
+
 61
+        logger.addHandler(handler)
 62
+        logger.setLevel(level)
 63
+
 64
+        return logger
 65
+
 66
+    def valid_ip(self, ip):
 67
+        if re.match(r"^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$", ip):
 68
+            return True
 69
+        else:
 70
+            return False
 71
+
 72
+    def valid_url(self, url):
 73
+        p = re.compile(
 74
+            r'^(?:http|ftp)s?://'
 75
+            r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}\.?)|'  # noqa
 76
+            r'localhost|'
 77
+            r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})'
 78
+            r'(?::\d+)?'
 79
+            r'(?:/?|[/?]\S+)$',
 80
+            re.IGNORECASE)
 81
+        if p.match(url):
 82
+            return True
 83
+        else:
 84
+            return False
 85
+
 86
+    def get_ubuntu_release_from_sentry(self, sentry_unit):
 87
+        """Get Ubuntu release codename from sentry unit.
 88
+
 89
+        :param sentry_unit: amulet sentry/service unit pointer
 90
+        :returns: list of strings - release codename, failure message
 91
+        """
 92
+        msg = None
 93
+        cmd = 'lsb_release -cs'
 94
+        release, code = sentry_unit.run(cmd)
 95
+        if code == 0:
 96
+            self.log.debug('{} lsb_release: {}'.format(
 97
+                sentry_unit.info['unit_name'], release))
 98
+        else:
 99
+            msg = ('{} `{}` returned {} '
100
+                   '{}'.format(sentry_unit.info['unit_name'],
101
+                               cmd, release, code))
102
+        if release not in self.ubuntu_releases:
103
+            msg = ("Release ({}) not found in Ubuntu releases "
104
+                   "({})".format(release, self.ubuntu_releases))
105
+        return release, msg
106
+
107
+    def validate_services(self, commands):
108
+        """Validate that lists of commands succeed on service units.  Can be
109
+           used to verify system services are running on the corresponding
110
+           service units.
111
+
112
+        :param commands: dict with sentry keys and arbitrary command list vals
113
+        :returns: None if successful, Failure string message otherwise
114
+        """
115
+        self.log.debug('Checking status of system services...')
116
+
117
+        # /!\ DEPRECATION WARNING (beisner):
118
+        # New and existing tests should be rewritten to use
119
+        # validate_services_by_name() as it is aware of init systems.
120
+        self.log.warn('DEPRECATION WARNING:  use '
121
+                      'validate_services_by_name instead of validate_services '
122
+                      'due to init system differences.')
123
+
124
+        for k, v in six.iteritems(commands):
125
+            for cmd in v:
126
+                output, code = k.run(cmd)
127
+                self.log.debug('{} `{}` returned '
128
+                               '{}'.format(k.info['unit_name'],
129
+                                           cmd, code))
130
+                if code != 0:
131
+                    return "command `{}` returned {}".format(cmd, str(code))
132
+        return None
133
+
134
+    def validate_services_by_name(self, sentry_services):
135
+        """Validate system service status by service name, automatically
136
+           detecting init system based on Ubuntu release codename.
137
+
138
+        :param sentry_services: dict with sentry keys and svc list values
139
+        :returns: None if successful, Failure string message otherwise
140
+        """
141
+        self.log.debug('Checking status of system services...')
142
+
143
+        # Point at which systemd became a thing
144
+        systemd_switch = self.ubuntu_releases.index('vivid')
145
+
146
+        for sentry_unit, services_list in six.iteritems(sentry_services):
147
+            # Get lsb_release codename from unit
148
+            release, ret = self.get_ubuntu_release_from_sentry(sentry_unit)
149
+            if ret:
150
+                return ret
151
+
152
+            for service_name in services_list:
153
+                if (self.ubuntu_releases.index(release) >= systemd_switch or
154
+                        service_name in ['rabbitmq-server', 'apache2']):
155
+                    # init is systemd (or regular sysv)
156
+                    cmd = 'sudo service {} status'.format(service_name)
157
+                    output, code = sentry_unit.run(cmd)
158
+                    service_running = code == 0
159
+                elif self.ubuntu_releases.index(release) < systemd_switch:
160
+                    # init is upstart
161
+                    cmd = 'sudo status {}'.format(service_name)
162
+                    output, code = sentry_unit.run(cmd)
163
+                    service_running = code == 0 and "start/running" in output
164
+
165
+                self.log.debug('{} `{}` returned '
166
+                               '{}'.format(sentry_unit.info['unit_name'],
167
+                                           cmd, code))
168
+                if not service_running:
169
+                    return u"command `{}` returned {} {}".format(
170
+                        cmd, output, str(code))
171
+        return None
172
+
173
+    def _get_config(self, unit, filename):
174
+        """Get a ConfigParser object for parsing a unit's config file."""
175
+        file_contents = unit.file_contents(filename)
176
+
177
+        # NOTE(beisner):  by default, ConfigParser does not handle options
178
+        # with no value, such as the flags used in the mysql my.cnf file.
179
+        # https://bugs.python.org/issue7005
180
+        config = configparser.ConfigParser(allow_no_value=True)
181
+        config.readfp(io.StringIO(file_contents))
182
+        return config
183
+
184
+    def validate_config_data(self, sentry_unit, config_file, section,
185
+                             expected):
186
+        """Validate config file data.
187
+
188
+           Verify that the specified section of the config file contains
189
+           the expected option key:value pairs.
190
+
191
+           Compare expected dictionary data vs actual dictionary data.
192
+           The values in the 'expected' dictionary can be strings, bools, ints,
193
+           longs, or can be a function that evaluates a variable and returns a
194
+           bool.
195
+           """
196
+        self.log.debug('Validating config file data ({} in {} on {})'
197
+                       '...'.format(section, config_file,
198
+                                    sentry_unit.info['unit_name']))
199
+        config = self._get_config(sentry_unit, config_file)
200
+
201
+        if section != 'DEFAULT' and not config.has_section(section):
202
+            return "section [{}] does not exist".format(section)
203
+
204
+        for k in expected.keys():
205
+            if not config.has_option(section, k):
206
+                return "section [{}] is missing option {}".format(section, k)
207
+
208
+            actual = config.get(section, k)
209
+            v = expected[k]
210
+            if (isinstance(v, six.string_types) or
211
+                    isinstance(v, bool) or
212
+                    isinstance(v, six.integer_types)):
213
+                # handle explicit values
214
+                if actual != v:
215
+                    return "section [{}] {}:{} != expected {}:{}".format(
216
+                           section, k, actual, k, expected[k])
217
+            # handle function pointers, such as not_null or valid_ip
218
+            elif not v(actual):
219
+                return "section [{}] {}:{} != expected {}:{}".format(
220
+                       section, k, actual, k, expected[k])
221
+        return None
222
+
223
+    def _validate_dict_data(self, expected, actual):
224
+        """Validate dictionary data.
225
+
226
+           Compare expected dictionary data vs actual dictionary data.
227
+           The values in the 'expected' dictionary can be strings, bools, ints,
228
+           longs, or can be a function that evaluates a variable and returns a
229
+           bool.
230
+           """
231
+        self.log.debug('actual: {}'.format(repr(actual)))
232
+        self.log.debug('expected: {}'.format(repr(expected)))
233
+
234
+        for k, v in six.iteritems(expected):
235
+            if k in actual:
236
+                if (isinstance(v, six.string_types) or
237
+                        isinstance(v, bool) or
238
+                        isinstance(v, six.integer_types)):
239
+                    # handle explicit values
240
+                    if v != actual[k]:
241
+                        return "{}:{}".format(k, actual[k])
242
+                # handle function pointers, such as not_null or valid_ip
243
+                elif not v(actual[k]):
244
+                    return "{}:{}".format(k, actual[k])
245
+            else:
246
+                return "key '{}' does not exist".format(k)
247
+        return None
248
+
249
+    def validate_relation_data(self, sentry_unit, relation, expected):
250
+        """Validate actual relation data based on expected relation data."""
251
+        actual = sentry_unit.relation(relation[0], relation[1])
252
+        return self._validate_dict_data(expected, actual)
253
+
254
+    def _validate_list_data(self, expected, actual):
255
+        """Compare expected list vs actual list data."""
256
+        for e in expected:
257
+            if e not in actual:
258
+                return "expected item {} not found in actual list".format(e)
259
+        return None
260
+
261
+    def not_null(self, string):
262
+        if string is not None:
263
+            return True
264
+        else:
265
+            return False
266
+
267
+    def _get_file_mtime(self, sentry_unit, filename):
268
+        """Get last modification time of file."""
269
+        return sentry_unit.file_stat(filename)['mtime']
270
+
271
+    def _get_dir_mtime(self, sentry_unit, directory):
272
+        """Get last modification time of directory."""
273
+        return sentry_unit.directory_stat(directory)['mtime']
274
+
275
+    def _get_proc_start_time(self, sentry_unit, service, pgrep_full=None):
276
+        """Get start time of a process based on the last modification time
277
+           of the /proc/pid directory.
278
+
279
+        :sentry_unit:  The sentry unit to check for the service on
280
+        :service:  service name to look for in process table
281
+        :pgrep_full:  [Deprecated] Use full command line search mode with pgrep
282
+        :returns:  epoch time of service process start
283
+        :param commands:  list of bash commands
284
+        :param sentry_units:  list of sentry unit pointers
285
+        :returns:  None if successful; Failure message otherwise
286
+        """
287
+        if pgrep_full is not None:
288
+            # /!\ DEPRECATION WARNING (beisner):
289
+            # No longer implemented, as pidof is now used instead of pgrep.
290
+            # https://bugs.launchpad.net/charm-helpers/+bug/1474030
291
+            self.log.warn('DEPRECATION WARNING:  pgrep_full bool is no '
292
+                          'longer implemented re: lp 1474030.')
293
+
294
+        pid_list = self.get_process_id_list(sentry_unit, service)
295
+        pid = pid_list[0]
296
+        proc_dir = '/proc/{}'.format(pid)
297
+        self.log.debug('Pid for {} on {}: {}'.format(
298
+            service, sentry_unit.info['unit_name'], pid))
299
+
300
+        return self._get_dir_mtime(sentry_unit, proc_dir)
301
+
302
+    def service_restarted(self, sentry_unit, service, filename,
303
+                          pgrep_full=None, sleep_time=20):
304
+        """Check if service was restarted.
305
+
306
+           Compare a service's start time vs a file's last modification time
307
+           (such as a config file for that service) to determine if the service
308
+           has been restarted.
309
+           """
310
+        # /!\ DEPRECATION WARNING (beisner):
311
+        # This method is prone to races in that no before-time is known.
312
+        # Use validate_service_config_changed instead.
313
+
314
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
315
+        # used instead of pgrep.  pgrep_full is still passed through to ensure
316
+        # deprecation WARNS.  lp1474030
317
+        self.log.warn('DEPRECATION WARNING:  use '
318
+                      'validate_service_config_changed instead of '
319
+                      'service_restarted due to known races.')
320
+
321
+        time.sleep(sleep_time)
322
+        if (self._get_proc_start_time(sentry_unit, service, pgrep_full) >=
323
+                self._get_file_mtime(sentry_unit, filename)):
324
+            return True
325
+        else:
326
+            return False
327
+
328
+    def service_restarted_since(self, sentry_unit, mtime, service,
329
+                                pgrep_full=None, sleep_time=20,
330
+                                retry_count=30, retry_sleep_time=10):
331
+        """Check if service was been started after a given time.
332
+
333
+        Args:
334
+          sentry_unit (sentry): The sentry unit to check for the service on
335
+          mtime (float): The epoch time to check against
336
+          service (string): service name to look for in process table
337
+          pgrep_full: [Deprecated] Use full command line search mode with pgrep
338
+          sleep_time (int): Initial sleep time (s) before looking for file
339
+          retry_sleep_time (int): Time (s) to sleep between retries
340
+          retry_count (int): If file is not found, how many times to retry
341
+
342
+        Returns:
343
+          bool: True if service found and its start time it newer than mtime,
344
+                False if service is older than mtime or if service was
345
+                not found.
346
+        """
347
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
348
+        # used instead of pgrep.  pgrep_full is still passed through to ensure
349
+        # deprecation WARNS.  lp1474030
350
+
351
+        unit_name = sentry_unit.info['unit_name']
352
+        self.log.debug('Checking that %s service restarted since %s on '
353
+                       '%s' % (service, mtime, unit_name))
354
+        time.sleep(sleep_time)
355
+        proc_start_time = None
356
+        tries = 0
357
+        while tries <= retry_count and not proc_start_time:
358
+            try:
359
+                proc_start_time = self._get_proc_start_time(sentry_unit,
360
+                                                            service,
361
+                                                            pgrep_full)
362
+                self.log.debug('Attempt {} to get {} proc start time on {} '
363
+                               'OK'.format(tries, service, unit_name))
364
+            except IOError as e:
365
+                # NOTE(beisner) - race avoidance, proc may not exist yet.
366
+                # https://bugs.launchpad.net/charm-helpers/+bug/1474030
367
+                self.log.debug('Attempt {} to get {} proc start time on {} '
368
+                               'failed\n{}'.format(tries, service,
369
+                                                   unit_name, e))
370
+                time.sleep(retry_sleep_time)
371
+                tries += 1
372
+
373
+        if not proc_start_time:
374
+            self.log.warn('No proc start time found, assuming service did '
375
+                          'not start')
376
+            return False
377
+        if proc_start_time >= mtime:
378
+            self.log.debug('Proc start time is newer than provided mtime'
379
+                           '(%s >= %s) on %s (OK)' % (proc_start_time,
380
+                                                      mtime, unit_name))
381
+            return True
382
+        else:
383
+            self.log.warn('Proc start time (%s) is older than provided mtime '
384
+                          '(%s) on %s, service did not '
385
+                          'restart' % (proc_start_time, mtime, unit_name))
386
+            return False
387
+
388
+    def config_updated_since(self, sentry_unit, filename, mtime,
389
+                             sleep_time=20, retry_count=30,
390
+                             retry_sleep_time=10):
391
+        """Check if file was modified after a given time.
392
+
393
+        Args:
394
+          sentry_unit (sentry): The sentry unit to check the file mtime on
395
+          filename (string): The file to check mtime of
396
+          mtime (float): The epoch time to check against
397
+          sleep_time (int): Initial sleep time (s) before looking for file
398
+          retry_sleep_time (int): Time (s) to sleep between retries
399
+          retry_count (int): If file is not found, how many times to retry
400
+
401
+        Returns:
402
+          bool: True if file was modified more recently than mtime, False if
403
+                file was modified before mtime, or if file not found.
404
+        """
405
+        unit_name = sentry_unit.info['unit_name']
406
+        self.log.debug('Checking that %s updated since %s on '
407
+                       '%s' % (filename, mtime, unit_name))
408
+        time.sleep(sleep_time)
409
+        file_mtime = None
410
+        tries = 0
411
+        while tries <= retry_count and not file_mtime:
412
+            try:
413
+                file_mtime = self._get_file_mtime(sentry_unit, filename)
414
+                self.log.debug('Attempt {} to get {} file mtime on {} '
415
+                               'OK'.format(tries, filename, unit_name))
416
+            except IOError as e:
417
+                # NOTE(beisner) - race avoidance, file may not exist yet.
418
+                # https://bugs.launchpad.net/charm-helpers/+bug/1474030
419
+                self.log.debug('Attempt {} to get {} file mtime on {} '
420
+                               'failed\n{}'.format(tries, filename,
421
+                                                   unit_name, e))
422
+                time.sleep(retry_sleep_time)
423
+                tries += 1
424
+
425
+        if not file_mtime:
426
+            self.log.warn('Could not determine file mtime, assuming '
427
+                          'file does not exist')
428
+            return False
429
+
430
+        if file_mtime >= mtime:
431
+            self.log.debug('File mtime is newer than provided mtime '
432
+                           '(%s >= %s) on %s (OK)' % (file_mtime,
433
+                                                      mtime, unit_name))
434
+            return True
435
+        else:
436
+            self.log.warn('File mtime is older than provided mtime'
437
+                          '(%s < on %s) on %s' % (file_mtime,
438
+                                                  mtime, unit_name))
439
+            return False
440
+
441
+    def validate_service_config_changed(self, sentry_unit, mtime, service,
442
+                                        filename, pgrep_full=None,
443
+                                        sleep_time=20, retry_count=30,
444
+                                        retry_sleep_time=10):
445
+        """Check service and file were updated after mtime
446
+
447
+        Args:
448
+          sentry_unit (sentry): The sentry unit to check for the service on
449
+          mtime (float): The epoch time to check against
450
+          service (string): service name to look for in process table
451
+          filename (string): The file to check mtime of
452
+          pgrep_full: [Deprecated] Use full command line search mode with pgrep
453
+          sleep_time (int): Initial sleep in seconds to pass to test helpers
454
+          retry_count (int): If service is not found, how many times to retry
455
+          retry_sleep_time (int): Time in seconds to wait between retries
456
+
457
+        Typical Usage:
458
+            u = OpenStackAmuletUtils(ERROR)
459
+            ...
460
+            mtime = u.get_sentry_time(self.cinder_sentry)
461
+            self.d.configure('cinder', {'verbose': 'True', 'debug': 'True'})
462
+            if not u.validate_service_config_changed(self.cinder_sentry,
463
+                                                     mtime,
464
+                                                     'cinder-api',
465
+                                                     '/etc/cinder/cinder.conf')
466
+                amulet.raise_status(amulet.FAIL, msg='update failed')
467
+        Returns:
468
+          bool: True if both service and file were updated/restarted after
469
+                mtime, False if service is older than mtime or if service was
470
+                not found or if filename was modified before mtime.
471
+        """
472
+
473
+        # NOTE(beisner) pgrep_full is no longer implemented, as pidof is now
474
+        # used instead of pgrep.  pgrep_full is still passed through to ensure
475
+        # deprecation WARNS.  lp1474030
476
+
477
+        service_restart = self.service_restarted_since(
478
+            sentry_unit, mtime,
479
+            service,
480
+            pgrep_full=pgrep_full,
481
+            sleep_time=sleep_time,
482
+            retry_count=retry_count,
483
+            retry_sleep_time=retry_sleep_time)
484
+
485
+        config_update = self.config_updated_since(
486
+            sentry_unit,
487
+            filename,
488
+            mtime,
489
+            sleep_time=sleep_time,
490
+            retry_count=retry_count,
491
+            retry_sleep_time=retry_sleep_time)
492
+
493
+        return service_restart and config_update
494
+
495
+    def get_sentry_time(self, sentry_unit):
496
+        """Return current epoch time on a sentry"""
497
+        cmd = "date +'%s'"
498
+        return float(sentry_unit.run(cmd)[0])
499
+
500
+    def relation_error(self, name, data):
501
+        return 'unexpected relation data in {} - {}'.format(name, data)
502
+
503
+    def endpoint_error(self, name, data):
504
+        return 'unexpected endpoint data in {} - {}'.format(name, data)
505
+
506
+    def get_ubuntu_releases(self):
507
+        """Return a list of all Ubuntu releases in order of release."""
508
+        _d = distro_info.UbuntuDistroInfo()
509
+        _release_list = _d.all
510
+        return _release_list
511
+
512
+    def file_to_url(self, file_rel_path):
513
+        """Convert a relative file path to a file URL."""
514
+        _abs_path = os.path.abspath(file_rel_path)
515
+        return urlparse.urlparse(_abs_path, scheme='file').geturl()
516
+
517
+    def check_commands_on_units(self, commands, sentry_units):
518
+        """Check that all commands in a list exit zero on all
519
+        sentry units in a list.
520
+
521
+        :param commands:  list of bash commands
522
+        :param sentry_units:  list of sentry unit pointers
523
+        :returns: None if successful; Failure message otherwise
524
+        """
525
+        self.log.debug('Checking exit codes for {} commands on {} '
526
+                       'sentry units...'.format(len(commands),
527
+                                                len(sentry_units)))
528
+        for sentry_unit in sentry_units:
529
+            for cmd in commands:
530
+                output, code = sentry_unit.run(cmd)
531
+                if code == 0:
532
+                    self.log.debug('{} `{}` returned {} '
533
+                                   '(OK)'.format(sentry_unit.info['unit_name'],
534
+                                                 cmd, code))
535
+                else:
536
+                    return ('{} `{}` returned {} '
537
+                            '{}'.format(sentry_unit.info['unit_name'],
538
+                                        cmd, code, output))
539
+        return None
540
+
541
+    def get_process_id_list(self, sentry_unit, process_name,
542
+                            expect_success=True):
543
+        """Get a list of process ID(s) from a single sentry juju unit
544
+        for a single process name.
545
+
546
+        :param sentry_unit: Amulet sentry instance (juju unit)
547
+        :param process_name: Process name
548
+        :param expect_success: If False, expect the PID to be missing,
549
+            raise if it is present.
550
+        :returns: List of process IDs
551
+        """
552
+        cmd = 'pidof -x {}'.format(process_name)
553
+        if not expect_success:
554
+            cmd += " || exit 0 && exit 1"
555
+        output, code = sentry_unit.run(cmd)
556
+        if code != 0:
557
+            msg = ('{} `{}` returned {} '
558
+                   '{}'.format(sentry_unit.info['unit_name'],
559
+                               cmd, code, output))
560
+            amulet.raise_status(amulet.FAIL, msg=msg)
561
+        return str(output).split()
562
+
563
+    def get_unit_process_ids(self, unit_processes, expect_success=True):
564
+        """Construct a dict containing unit sentries, process names, and
565
+        process IDs.
566
+
567
+        :param unit_processes: A dictionary of Amulet sentry instance
568
+            to list of process names.
569
+        :param expect_success: if False expect the processes to not be
570
+            running, raise if they are.
571
+        :returns: Dictionary of Amulet sentry instance to dictionary
572
+            of process names to PIDs.
573
+        """
574
+        pid_dict = {}
575
+        for sentry_unit, process_list in six.iteritems(unit_processes):
576
+            pid_dict[sentry_unit] = {}
577
+            for process in process_list:
578
+                pids = self.get_process_id_list(
579
+                    sentry_unit, process, expect_success=expect_success)
580
+                pid_dict[sentry_unit].update({process: pids})
581
+        return pid_dict
582
+
583
+    def validate_unit_process_ids(self, expected, actual):
584
+        """Validate process id quantities for services on units."""
585
+        self.log.debug('Checking units for running processes...')
586
+        self.log.debug('Expected PIDs: {}'.format(expected))
587
+        self.log.debug('Actual PIDs: {}'.format(actual))
588
+
589
+        if len(actual) != len(expected):
590
+            return ('Unit count mismatch.  expected, actual: {}, '
591
+                    '{} '.format(len(expected), len(actual)))
592
+
593
+        for (e_sentry, e_proc_names) in six.iteritems(expected):
594
+            e_sentry_name = e_sentry.info['unit_name']
595
+            if e_sentry in actual.keys():
596
+                a_proc_names = actual[e_sentry]
597
+            else:
598
+                return ('Expected sentry ({}) not found in actual dict data.'
599
+                        '{}'.format(e_sentry_name, e_sentry))
600
+
601
+            if len(e_proc_names.keys()) != len(a_proc_names.keys()):
602
+                return ('Process name count mismatch.  expected, actual: {}, '
603
+                        '{}'.format(len(expected), len(actual)))
604
+
605
+            for (e_proc_name, e_pids), (a_proc_name, a_pids) in \
606
+                    zip(e_proc_names.items(), a_proc_names.items()):
607
+                if e_proc_name != a_proc_name:
608
+                    return ('Process name mismatch.  expected, actual: {}, '
609
+                            '{}'.format(e_proc_name, a_proc_name))
610
+
611
+                a_pids_length = len(a_pids)
612
+                fail_msg = ('PID count mismatch. {} ({}) expected, actual: '
613
+                            '{}, {} ({})'.format(e_sentry_name, e_proc_name,
614
+                                                 e_pids, a_pids_length,
615
+                                                 a_pids))
616
+
617
+                # If expected is a list, ensure at least one PID quantity match
618
+                if isinstance(e_pids, list) and \
619
+                        a_pids_length not in e_pids:
620
+                    return fail_msg
621
+                # If expected is not bool and not list,
622
+                # ensure PID quantities match
623
+                elif not isinstance(e_pids, bool) and \
624
+                        not isinstance(e_pids, list) and \
625
+                        a_pids_length != e_pids:
626
+                    return fail_msg
627
+                # If expected is bool True, ensure 1 or more PIDs exist
628
+                elif isinstance(e_pids, bool) and \
629
+                        e_pids is True and a_pids_length < 1:
630
+                    return fail_msg
631
+                # If expected is bool False, ensure 0 PIDs exist
632
+                elif isinstance(e_pids, bool) and \
633
+                        e_pids is False and a_pids_length != 0:
634
+                    return fail_msg
635
+                else:
636
+                    self.log.debug('PID check OK: {} {} {}: '
637
+                                   '{}'.format(e_sentry_name, e_proc_name,
638
+                                               e_pids, a_pids))
639
+        return None
640
+
641
+    def validate_list_of_identical_dicts(self, list_of_dicts):
642
+        """Check that all dicts within a list are identical."""
643
+        hashes = []
644
+        for _dict in list_of_dicts:
645
+            hashes.append(hash(frozenset(_dict.items())))
646
+
647
+        self.log.debug('Hashes: {}'.format(hashes))
648
+        if len(set(hashes)) == 1:
649
+            self.log.debug('Dicts within list are identical')
650
+        else:
651
+            return 'Dicts within list are not identical'
652
+
653
+        return None
654
+
655
+    def validate_sectionless_conf(self, file_contents, expected):
656
+        """A crude conf parser.  Useful to inspect configuration files which
657
+        do not have section headers (as would be necessary in order to use
658
+        the configparser), such as openstack-dashboard or rabbitmq confs."""
659
+        for line in file_contents.split('\n'):
660
+            if '=' in line:
661
+                args = line.split('=')
662
+                if len(args) <= 1:
663
+                    continue
664
+                key = args[0].strip()
665
+                value = args[1].strip()
666
+                if key in expected.keys():
667
+                    if expected[key] != value:
668
+                        msg = ('Config mismatch.  Expected, actual:  {}, '
669
+                               '{}'.format(expected[key], value))
670
+                        amulet.raise_status(amulet.FAIL, msg=msg)
671
+
672
+    def get_unit_hostnames(self, units):
673
+        """Return a dict of juju unit names to hostnames."""
674
+        host_names = {}
675
+        for unit in units:
676
+            host_names[unit.info['unit_name']] = \
677
+                str(unit.file_contents('/etc/hostname').strip())
678
+        self.log.debug('Unit host names: {}'.format(host_names))
679
+        return host_names
680
+
681
+    def run_cmd_unit(self, sentry_unit, cmd):
682
+        """Run a command on a unit, return the output and exit code."""
683
+        output, code = sentry_unit.run(cmd)
684
+        if code == 0:
685
+            self.log.debug('{} `{}` command returned {} '
686
+                           '(OK)'.format(sentry_unit.info['unit_name'],
687
+                                         cmd, code))
688
+        else:
689
+            msg = ('{} `{}` command returned {} '
690
+                   '{}'.format(sentry_unit.info['unit_name'],
691
+                               cmd, code, output))
692
+            amulet.raise_status(amulet.FAIL, msg=msg)
693
+        return str(output), code
694
+
695
+    def file_exists_on_unit(self, sentry_unit, file_name):
696
+        """Check if a file exists on a unit."""
697
+        try:
698
+            sentry_unit.file_stat(file_name)
699
+            return True
700
+        except IOError:
701
+            return False
702
+        except Exception as e:
703
+            msg = 'Error checking file {}: {}'.format(file_name, e)
704
+            amulet.raise_status(amulet.FAIL, msg=msg)
705
+
706
+    def file_contents_safe(self, sentry_unit, file_name,
707
+                           max_wait=60, fatal=False):
708
+        """Get file contents from a sentry unit.  Wrap amulet file_contents
709
+        with retry logic to address races where a file checks as existing,
710
+        but no longer exists by the time file_contents is called.
711
+        Return None if file not found. Optionally raise if fatal is True."""
712
+        unit_name = sentry_unit.info['unit_name']
713
+        file_contents = False
714
+        tries = 0
715
+        while not file_contents and tries < (max_wait / 4):
716
+            try:
717
+                file_contents = sentry_unit.file_contents(file_name)
718
+            except IOError:
719
+                self.log.debug('Attempt {} to open file {} from {} '
720
+                               'failed'.format(tries, file_name,
721
+                                               unit_name))
722
+                time.sleep(4)
723
+                tries += 1
724
+
725
+        if file_contents:
726
+            return file_contents
727
+        elif not fatal:
728
+            return None
729
+        elif fatal:
730
+            msg = 'Failed to get file contents from unit.'
731
+            amulet.raise_status(amulet.FAIL, msg)
732
+
733
+    def port_knock_tcp(self, host="localhost", port=22, timeout=15):
734
+        """Open a TCP socket to check for a listening sevice on a host.
735
+
736
+        :param host: host name or IP address, default to localhost
737
+        :param port: TCP port number, default to 22
738
+        :param timeout: Connect timeout, default to 15 seconds
739
+        :returns: True if successful, False if connect failed
740
+        """
741
+
742
+        # Resolve host name if possible
743
+        try:
744
+            connect_host = socket.gethostbyname(host)
745
+            host_human = "{} ({})".format(connect_host, host)
746
+        except socket.error as e:
747
+            self.log.warn('Unable to resolve address: '
748
+                          '{} ({}) Trying anyway!'.format(host, e))
749
+            connect_host = host
750
+            host_human = connect_host
751
+
752
+        # Attempt socket connection
753
+        try:
754
+            knock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
755
+            knock.settimeout(timeout)
756
+            knock.connect((connect_host, port))
757
+            knock.close()
758
+            self.log.debug('Socket connect OK for host '
759
+                           '{} on port {}.'.format(host_human, port))
760
+            return True
761
+        except socket.error as e:
762
+            self.log.debug('Socket connect FAIL for'
763
+                           ' {} port {} ({})'.format(host_human, port, e))
764
+            return False
765
+
766
+    def port_knock_units(self, sentry_units, port=22,
767
+                         timeout=15, expect_success=True):
768
+        """Open a TCP socket to check for a listening sevice on each
769
+        listed juju unit.
770
+
771
+        :param sentry_units: list of sentry unit pointers
772
+        :param port: TCP port number, default to 22
773
+        :param timeout: Connect timeout, default to 15 seconds
774
+        :expect_success: True by default, set False to invert logic
775
+        :returns: None if successful, Failure message otherwise
776
+        """
777
+        for unit in sentry_units:
778
+            host = unit.info['public-address']
779
+            connected = self.port_knock_tcp(host, port, timeout)
780
+            if not connected and expect_success:
781
+                return 'Socket connect failed.'
782
+            elif connected and not expect_success:
783
+                return 'Socket connected unexpectedly.'
784
+
785
+    def get_uuid_epoch_stamp(self):
786
+        """Returns a stamp string based on uuid4 and epoch time.  Useful in
787
+        generating test messages which need to be unique-ish."""
788
+        return '[{}-{}]'.format(uuid.uuid4(), time.time())
789
+
790
+# amulet juju action helpers:
791
+    def run_action(self, unit_sentry, action,
792
+                   _check_output=subprocess.check_output,
793
+                   params=None):
794
+        """Run the named action on a given unit sentry.
795
+
796
+        params a dict of parameters to use
797
+        _check_output parameter is used for dependency injection.
798
+
799
+        @return action_id.
800
+        """
801
+        unit_id = unit_sentry.info["unit_name"]
802
+        command = ["juju", "action", "do", "--format=json", unit_id, action]
803
+        if params is not None:
804
+            for key, value in params.iteritems():
805
+                command.append("{}={}".format(key, value))
806
+        self.log.info("Running command: %s\n" % " ".join(command))
807
+        output = _check_output(command, universal_newlines=True)
808
+        data = json.loads(output)
809
+        action_id = data[u'Action queued with id']
810
+        return action_id
811
+
812
+    def wait_on_action(self, action_id, _check_output=subprocess.check_output):
813
+        """Wait for a given action, returning if it completed or not.
814
+
815
+        _check_output parameter is used for dependency injection.
816
+        """
817
+        command = ["juju", "action", "fetch", "--format=json", "--wait=0",
818
+                   action_id]
819
+        output = _check_output(command, universal_newlines=True)
820
+        data = json.loads(output)
821
+        return data.get(u"status") == "completed"
822
+
823
+    def status_get(self, unit):
824
+        """Return the current service status of this unit."""
825
+        raw_status, return_code = unit.run(
826
+            "status-get --format=json --include-data")
827
+        if return_code != 0:
828
+            return ("unknown", "")
829
+        status = json.loads(raw_status)
830
+        return (status["status"], status["message"])
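For context when reviewing the helpers above, here is a minimal usage sketch. It is hypothetical and not taken from this charm's tests: the `ubuntu` service name, the commands and the timeout are illustrative only, and it assumes amulet and the bundled charmhelpers are importable.

```python
import amulet
from charmhelpers.contrib.amulet.utils import AmuletUtils

u = AmuletUtils()                      # log level defaults to ERROR
d = amulet.Deployment(series='xenial')
d.add('ubuntu')                        # illustrative workload, not this charm
d.setup(timeout=900)
d.sentry.wait()

unit = d.sentry['ubuntu'][0]
# check_commands_on_units() returns None on success or a failure message.
msg = u.check_commands_on_units(['hostname', 'uptime'], [unit])
if msg:
    amulet.raise_status(amulet.FAIL, msg=msg)

# status_get() wraps `status-get` on the unit and returns (status, message).
status, message = u.status_get(unit)
```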

tests/charmhelpers/contrib/openstack/__init__.py

--- 
+++ tests/charmhelpers/contrib/openstack/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

tests/charmhelpers/contrib/openstack/amulet/__init__.py

--- 
+++ tests/charmhelpers/contrib/openstack/amulet/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

tests/charmhelpers/contrib/openstack/amulet/deployment.py

--- 
+++ tests/charmhelpers/contrib/openstack/amulet/deployment.py
@@ -0,0 +1,345 @@
  4
+# Copyright 2014-2015 Canonical Limited.
  5
+#
  6
+# Licensed under the Apache License, Version 2.0 (the "License");
  7
+# you may not use this file except in compliance with the License.
  8
+# You may obtain a copy of the License at
  9
+#
 10
+#  http://www.apache.org/licenses/LICENSE-2.0
 11
+#
 12
+# Unless required by applicable law or agreed to in writing, software
 13
+# distributed under the License is distributed on an "AS IS" BASIS,
 14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 15
+# See the License for the specific language governing permissions and
 16
+# limitations under the License.
 17
+
 18
+import logging
 19
+import re
 20
+import sys
 21
+import six
 22
+from collections import OrderedDict
 23
+from charmhelpers.contrib.amulet.deployment import (
 24
+    AmuletDeployment
 25
+)
 26
+
 27
+DEBUG = logging.DEBUG
 28
+ERROR = logging.ERROR
 29
+
 30
+
 31
+class OpenStackAmuletDeployment(AmuletDeployment):
 32
+    """OpenStack amulet deployment.
 33
+
 34
+       This class inherits from AmuletDeployment and has additional support
 35
+       that is specifically for use by OpenStack charms.
 36
+       """
 37
+
 38
+    def __init__(self, series=None, openstack=None, source=None,
 39
+                 stable=True, log_level=DEBUG):
 40
+        """Initialize the deployment environment."""
 41
+        super(OpenStackAmuletDeployment, self).__init__(series)
 42
+        self.log = self.get_logger(level=log_level)
 43
+        self.log.info('OpenStackAmuletDeployment:  init')
 44
+        self.openstack = openstack
 45
+        self.source = source
 46
+        self.stable = stable
 47
+
 48
+    def get_logger(self, name="deployment-logger", level=logging.DEBUG):
 49
+        """Get a logger object that will log to stdout."""
 50
+        log = logging
 51
+        logger = log.getLogger(name)
 52
+        fmt = log.Formatter("%(asctime)s %(funcName)s "
 53
+                            "%(levelname)s: %(message)s")
 54
+
 55
+        handler = log.StreamHandler(stream=sys.stdout)
 56
+        handler.setLevel(level)
 57
+        handler.setFormatter(fmt)
 58
+
 59
+        logger.addHandler(handler)
 60
+        logger.setLevel(level)
 61
+
 62
+        return logger
 63
+
 64
+    def _determine_branch_locations(self, other_services):
 65
+        """Determine the branch locations for the other services.
 66
+
 67
+           Determine if the local branch being tested is derived from its
 68
+           stable or next (dev) branch, and based on this, use the corresponding
 69
+           stable or next branches for the other_services."""
 70
+
 71
+        self.log.info('OpenStackAmuletDeployment:  determine branch locations')
 72
+
 73
+        # Charms outside the ~openstack-charmers
 74
+        base_charms = {
 75
+            'mysql': ['precise', 'trusty'],
 76
+            'mongodb': ['precise', 'trusty'],
 77
+            'nrpe': ['precise', 'trusty', 'wily', 'xenial'],
 78
+        }
 79
+
 80
+        for svc in other_services:
 81
+            # If a location has been explicitly set, use it
 82
+            if svc.get('location'):
 83
+                continue
 84
+            if svc['name'] in base_charms:
 85
+                # NOTE: not all charms have support for all series we
 86
+                #       want/need to test against, so fix to most recent
 87
+                #       that each base charm supports
 88
+                target_series = self.series
 89
+                if self.series not in base_charms[svc['name']]:
 90
+                    target_series = base_charms[svc['name']][-1]
 91
+                svc['location'] = 'cs:{}/{}'.format(target_series,
 92
+                                                    svc['name'])
 93
+            elif self.stable:
 94
+                svc['location'] = 'cs:{}/{}'.format(self.series,
 95
+                                                    svc['name'])
 96
+            else:
 97
+                svc['location'] = 'cs:~openstack-charmers-next/{}/{}'.format(
 98
+                    self.series,
 99
+                    svc['name']
100
+                )
101
+
102
+        return other_services
103
+
104
+    def _add_services(self, this_service, other_services, use_source=None,
105
+                      no_origin=None):
106
+        """Add services to the deployment and optionally set
107
+        openstack-origin/source.
108
+
109
+        :param this_service dict: Service dictionary describing the service
110
+                                  whose amulet tests are being run
111
+        :param other_services dict: List of service dictionaries describing
112
+                                    the services needed to support the target
113
+                                    service
114
+        :param use_source list: List of services which use the 'source' config
115
+                                option rather than 'openstack-origin'
116
+        :param no_origin list: List of services which do not support setting
117
+                               the Cloud Archive.
118
+        Service Dict:
119
+            {
120
+                'name': str charm-name,
121
+                'units': int number of units,
122
+                'constraints': dict of juju constraints,
123
+                'location': str location of charm,
124
+            }
125
+        eg
126
+        this_service = {
127
+            'name': 'openvswitch-odl',
128
+            'constraints': {'mem': '8G'},
129
+        }
130
+        other_services = [
131
+            {
132
+                'name': 'nova-compute',
133
+                'units': 2,
134
+                'constraints': {'mem': '4G'},
135
+                'location': 'cs:~bob/xenial/nova-compute'
136
+            },
137
+            {
138
+                'name': 'mysql',
139
+                'constraints': {'mem': '2G'},
140
+            },
141
+            {'neutron-api-odl'}]
142
+        use_source = ['mysql']
143
+        no_origin = ['neutron-api-odl']
144
+        """
145
+        self.log.info('OpenStackAmuletDeployment:  adding services')
146
+
147
+        other_services = self._determine_branch_locations(other_services)
148
+
149
+        super(OpenStackAmuletDeployment, self)._add_services(this_service,
150
+                                                             other_services)
151
+
152
+        services = other_services
153
+        services.append(this_service)
154
+
155
+        use_source = use_source or []
156
+        no_origin = no_origin or []
157
+
158
+        # Charms which should use the source config option
159
+        use_source = list(set(
160
+            use_source + ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
161
+                          'ceph-osd', 'ceph-radosgw', 'ceph-mon',
162
+                          'ceph-proxy', 'percona-cluster', 'lxd']))
163
+
164
+        # Charms which can not use openstack-origin, ie. many subordinates
165
+        no_origin = list(set(
166
+            no_origin + ['cinder-ceph', 'hacluster', 'neutron-openvswitch',
167
+                         'nrpe', 'openvswitch-odl', 'neutron-api-odl',
168
+                         'odl-controller', 'cinder-backup', 'nexentaedge-data',
169
+                         'nexentaedge-iscsi-gw', 'nexentaedge-swift-gw',
170
+                         'cinder-nexentaedge', 'nexentaedge-mgmt']))
171
+
172
+        if self.openstack:
173
+            for svc in services:
174
+                if svc['name'] not in use_source + no_origin:
175
+                    config = {'openstack-origin': self.openstack}
176
+                    self.d.configure(svc['name'], config)
177
+
178
+        if self.source:
179
+            for svc in services:
180
+                if svc['name'] in use_source and svc['name'] not in no_origin:
181
+                    config = {'source': self.source}
182
+                    self.d.configure(svc['name'], config)
183
+
184
+    def _configure_services(self, configs):
185
+        """Configure all of the services."""
186
+        self.log.info('OpenStackAmuletDeployment:  configure services')
187
+        for service, config in six.iteritems(configs):
188
+            self.d.configure(service, config)
189
+
190
+    def _auto_wait_for_status(self, message=None, exclude_services=None,
191
+                              include_only=None, timeout=1800):
192
+        """Wait for all units to have a specific extended status, except
193
+        for any defined as excluded.  Unless specified via message, any
194
+        status containing any case of 'ready' will be considered a match.
195
+
196
+        Examples of message usage:
197
+
198
+          Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
199
+              message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
200
+
201
+          Wait for all units to reach this status (exact match):
202
+              message = re.compile('^Unit is ready and clustered$')
203
+
204
+          Wait for all units to reach any one of these (exact match):
205
+              message = re.compile('Unit is ready|OK|Ready')
206
+
207
+          Wait for at least one unit to reach this status (exact match):
208
+              message = {'ready'}
209
+
210
+        See Amulet's sentry.wait_for_messages() for message usage detail.
211
+        https://github.com/juju/amulet/blob/master/amulet/sentry.py
212
+
213
+        :param message: Expected status match
214
+        :param exclude_services: List of juju service names to ignore,
215
+            not to be used in conjunction with include_only.
216
+        :param include_only: List of juju service names to exclusively check,
217
+            not to be used in conjunction with exclude_services.
218
+        :param timeout: Maximum time in seconds to wait for status match
219
+        :returns: None.  Raises if timeout is hit.
220
+        """
221
+        self.log.info('Waiting for extended status on units...')
222
+
223
+        all_services = self.d.services.keys()
224
+
225
+        if exclude_services and include_only:
226
+            raise ValueError('exclude_services can not be used '
227
+                             'with include_only')
228
+
229
+        if message:
230
+            if isinstance(message, re._pattern_type):
231
+                match = message.pattern
232
+            else:
233
+                match = message
234
+
235
+            self.log.debug('Custom extended status wait match: '
236
+                           '{}'.format(match))
237
+        else:
238
+            self.log.debug('Default extended status wait match:  contains '
239
+                           'READY (case-insensitive)')
240
+            message = re.compile('.*ready.*', re.IGNORECASE)
241
+
242
+        if exclude_services:
243
+            self.log.debug('Excluding services from extended status match: '
244
+                           '{}'.format(exclude_services))
245
+        else:
246
+            exclude_services = []
247
+
248
+        if include_only:
249
+            services = include_only
250
+        else:
251
+            services = list(set(all_services) - set(exclude_services))
252
+
253
+        self.log.debug('Waiting up to {}s for extended status on services: '
254
+                       '{}'.format(timeout, services))
255
+        service_messages = {service: message for service in services}
256
+        self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
257
+        self.log.info('OK')
258
+
259
+    def _get_openstack_release(self):
260
+        """Get openstack release.
261
+
262
+           Return an integer representing the enum value of the openstack
263
+           release.
264
+           """
265
+        # Must be ordered by OpenStack release (not by Ubuntu release):
266
+        (self.precise_essex, self.precise_folsom, self.precise_grizzly,
267
+         self.precise_havana, self.precise_icehouse,
268
+         self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
269
+         self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
270
+         self.wily_liberty, self.trusty_mitaka,
271
+         self.xenial_mitaka, self.xenial_newton,
272
+         self.yakkety_newton) = range(16)
273
+
274
+        releases = {
275
+            ('precise', None): self.precise_essex,
276
+            ('precise', 'cloud:precise-folsom'): self.precise_folsom,
277
+            ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
278
+            ('precise', 'cloud:precise-havana'): self.precise_havana,
279
+            ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
280
+            ('trusty', None): self.trusty_icehouse,
281
+            ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
282
+            ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
283
+            ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
284
+            ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
285
+            ('utopic', None): self.utopic_juno,
286
+            ('vivid', None): self.vivid_kilo,
287
+            ('wily', None): self.wily_liberty,
288
+            ('xenial', None): self.xenial_mitaka,
289
+            ('xenial', 'cloud:xenial-newton'): self.xenial_newton,
290
+            ('yakkety', None): self.yakkety_newton,
291
+        }
292
+        return releases[(self.series, self.openstack)]
293
+
294
+    def _get_openstack_release_string(self):
295
+        """Get openstack release string.
296
+
297
+           Return a string representing the openstack release.
298
+           """
299
+        releases = OrderedDict([
300
+            ('precise', 'essex'),
301
+            ('quantal', 'folsom'),
302
+            ('raring', 'grizzly'),
303
+            ('saucy', 'havana'),
304
+            ('trusty', 'icehouse'),
305
+            ('utopic', 'juno'),
306
+            ('vivid', 'kilo'),
307
+            ('wily', 'liberty'),
308
+            ('xenial', 'mitaka'),
309
+            ('yakkety', 'newton'),
310
+        ])
311
+        if self.openstack:
312
+            os_origin = self.openstack.split(':')[1]
313
+            return os_origin.split('%s-' % self.series)[1].split('/')[0]
314
+        else:
315
+            return releases[self.series]
316
+
317
+    def get_ceph_expected_pools(self, radosgw=False):
318
+        """Return a list of expected ceph pools in a ceph + cinder + glance
319
+        test scenario, based on OpenStack release and whether ceph radosgw
320
+        is flagged as present or not."""
321
+
322
+        if self._get_openstack_release() >= self.trusty_kilo:
323
+            # Kilo or later
324
+            pools = [
325
+                'rbd',
326
+                'cinder',
327
+                'glance'
328
+            ]
329
+        else:
330
+            # Juno or earlier
331
+            pools = [
332
+                'data',
333
+                'metadata',
334
+                'rbd',
335
+                'cinder',
336
+                'glance'
337
+            ]
338
+
339
+        if radosgw:
340
+            pools.extend([
341
+                '.rgw.root',
342
+                '.rgw.control',
343
+                '.rgw',
344
+                '.rgw.gc',
345
+                '.users.uid'
346
+            ])
347
+
348
+        return pools
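The deployment helper above is normally consumed by subclassing it in a charm's basic-deployment test. The sketch below is hypothetical (the class name, service names and config values are placeholders, not taken from this charm) and only illustrates the call order of the helpers defined above.

```python
from charmhelpers.contrib.openstack.amulet.deployment import (
    OpenStackAmuletDeployment,
)


class ExampleBasicDeployment(OpenStackAmuletDeployment):
    """Hypothetical basic deployment built on OpenStackAmuletDeployment."""

    def __init__(self, series='xenial', openstack=None, source=None,
                 stable=False):
        super(ExampleBasicDeployment, self).__init__(series, openstack,
                                                     source, stable)
        this_service = {'name': 'example-charm'}      # placeholder charm name
        other_services = [{'name': 'mysql'}]
        self._add_services(this_service, other_services)
        self._configure_services({'example-charm': {'debug': 'True'}})
        self._deploy()
        # Wait for units to report a 'ready'-style extended status.
        self._auto_wait_for_status(exclude_services=['mysql'])
```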

tests/charmhelpers/contrib/openstack/amulet/utils.py

--- 
+++ tests/charmhelpers/contrib/openstack/amulet/utils.py
@@ -0,0 +1,1124 @@
   4
+# Copyright 2014-2015 Canonical Limited.
   5
+#
   6
+# Licensed under the Apache License, Version 2.0 (the "License");
   7
+# you may not use this file except in compliance with the License.
   8
+# You may obtain a copy of the License at
   9
+#
  10
+#  http://www.apache.org/licenses/LICENSE-2.0
  11
+#
  12
+# Unless required by applicable law or agreed to in writing, software
  13
+# distributed under the License is distributed on an "AS IS" BASIS,
  14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  15
+# See the License for the specific language governing permissions and
  16
+# limitations under the License.
  17
+
  18
+import amulet
  19
+import json
  20
+import logging
  21
+import os
  22
+import re
  23
+import six
  24
+import time
  25
+import urllib
  26
+
  27
+import cinderclient.v1.client as cinder_client
  28
+import glanceclient.v1.client as glance_client
  29
+import heatclient.v1.client as heat_client
  30
+import keystoneclient.v2_0 as keystone_client
  31
+from keystoneclient.auth.identity import v3 as keystone_id_v3
  32
+from keystoneclient import session as keystone_session
  33
+from keystoneclient.v3 import client as keystone_client_v3
  34
+
  35
+import novaclient.client as nova_client
  36
+import pika
  37
+import swiftclient
  38
+
  39
+from charmhelpers.contrib.amulet.utils import (
  40
+    AmuletUtils
  41
+)
  42
+
  43
+DEBUG = logging.DEBUG
  44
+ERROR = logging.ERROR
  45
+
  46
+NOVA_CLIENT_VERSION = "2"
  47
+
  48
+
  49
+class OpenStackAmuletUtils(AmuletUtils):
  50
+    """OpenStack amulet utilities.
  51
+
  52
+       This class inherits from AmuletUtils and has additional support
  53
+       that is specifically for use by OpenStack charm tests.
  54
+       """
  55
+
  56
+    def __init__(self, log_level=ERROR):
  57
+        """Initialize the deployment environment."""
  58
+        super(OpenStackAmuletUtils, self).__init__(log_level)
  59
+
  60
+    def validate_endpoint_data(self, endpoints, admin_port, internal_port,
  61
+                               public_port, expected):
  62
+        """Validate endpoint data.
  63
+
  64
+           Validate actual endpoint data vs expected endpoint data. The ports
  65
+           are used to find the matching endpoint.
  66
+           """
  67
+        self.log.debug('Validating endpoint data...')
  68
+        self.log.debug('actual: {}'.format(repr(endpoints)))
  69
+        found = False
  70
+        for ep in endpoints:
  71
+            self.log.debug('endpoint: {}'.format(repr(ep)))
  72
+            if (admin_port in ep.adminurl and
  73
+                    internal_port in ep.internalurl and
  74
+                    public_port in ep.publicurl):
  75
+                found = True
  76
+                actual = {'id': ep.id,
  77
+                          'region': ep.region,
  78
+                          'adminurl': ep.adminurl,
  79
+                          'internalurl': ep.internalurl,
  80
+                          'publicurl': ep.publicurl,
  81
+                          'service_id': ep.service_id}
  82
+                ret = self._validate_dict_data(expected, actual)
  83
+                if ret:
  84
+                    return 'unexpected endpoint data - {}'.format(ret)
  85
+
  86
+        if not found:
  87
+            return 'endpoint not found'
  88
+
  89
+    def validate_v3_endpoint_data(self, endpoints, admin_port, internal_port,
  90
+                                  public_port, expected):
  91
+        """Validate keystone v3 endpoint data.
  92
+
  93
+        Validate the v3 endpoint data which has changed from v2.  The
  94
+        ports are used to find the matching endpoint.
  95
+
  96
+        The new v3 endpoint data looks like:
  97
+
  98
+        [<Endpoint enabled=True,
  99
+                   id=0432655fc2f74d1e9fa17bdaa6f6e60b,
 100
+                   interface=admin,
 101
+                   links={u'self': u'<RESTful URL of this endpoint>'},
 102
+                   region=RegionOne,
 103
+                   region_id=RegionOne,
 104
+                   service_id=17f842a0dc084b928e476fafe67e4095,
 105
+                   url=http://10.5.6.5:9312>,
 106
+         <Endpoint enabled=True,
 107
+                   id=6536cb6cb92f4f41bf22b079935c7707,
 108
+                   interface=admin,
 109
+                   links={u'self': u'<RESTful url of this endpoint>'},
 110
+                   region=RegionOne,
 111
+                   region_id=RegionOne,
 112
+                   service_id=72fc8736fb41435e8b3584205bb2cfa3,
 113
+                   url=http://10.5.6.6:35357/v3>,
 114
+                   ... ]
 115
+        """
 116
+        self.log.debug('Validating v3 endpoint data...')
 117
+        self.log.debug('actual: {}'.format(repr(endpoints)))
 118
+        found = []
 119
+        for ep in endpoints:
 120
+            self.log.debug('endpoint: {}'.format(repr(ep)))
 121
+            if ((admin_port in ep.url and ep.interface == 'admin') or
 122
+                    (internal_port in ep.url and ep.interface == 'internal') or
 123
+                    (public_port in ep.url and ep.interface == 'public')):
 124
+                found.append(ep.interface)
 125
+                # note we ignore the links member.
 126
+                actual = {'id': ep.id,
 127
+                          'region': ep.region,
 128
+                          'region_id': ep.region_id,
 129
+                          'interface': self.not_null,
 130
+                          'url': ep.url,
 131
+                          'service_id': ep.service_id, }
 132
+                ret = self._validate_dict_data(expected, actual)
 133
+                if ret:
 134
+                    return 'unexpected endpoint data - {}'.format(ret)
 135
+
 136
+        if len(found) != 3:
 137
+            return 'Unexpected number of endpoints found'
 138
+
 139
+    def validate_svc_catalog_endpoint_data(self, expected, actual):
 140
+        """Validate service catalog endpoint data.
 141
+
 142
+           Validate a list of actual service catalog endpoints vs a list of
 143
+           expected service catalog endpoints.
 144
+           """
 145
+        self.log.debug('Validating service catalog endpoint data...')
 146
+        self.log.debug('actual: {}'.format(repr(actual)))
 147
+        for k, v in six.iteritems(expected):
 148
+            if k in actual:
 149
+                ret = self._validate_dict_data(expected[k][0], actual[k][0])
 150
+                if ret:
 151
+                    return self.endpoint_error(k, ret)
 152
+            else:
 153
+                return "endpoint {} does not exist".format(k)
 154
+        return ret
 155
+
 156
+    def validate_v3_svc_catalog_endpoint_data(self, expected, actual):
 157
+        """Validate the keystone v3 catalog endpoint data.
 158
+
 159
+        Validate a list of dictionaries that make up the keystone v3 service
 160
+        catalogue.
 161
+
 162
+        It is in the form of:
 163
+
 164
+
 165
+        {u'identity': [{u'id': u'48346b01c6804b298cdd7349aadb732e',
 166
+                        u'interface': u'admin',
 167
+                        u'region': u'RegionOne',
 168
+                        u'region_id': u'RegionOne',
 169
+                        u'url': u'http://10.5.5.224:35357/v3'},
 170
+                       {u'id': u'8414f7352a4b47a69fddd9dbd2aef5cf',
 171
+                        u'interface': u'public',
 172
+                        u'region': u'RegionOne',
 173
+                        u'region_id': u'RegionOne',
 174
+                        u'url': u'http://10.5.5.224:5000/v3'},
 175
+                       {u'id': u'd5ca31440cc24ee1bf625e2996fb6a5b',
 176
+                        u'interface': u'internal',
 177
+                        u'region': u'RegionOne',
 178
+                        u'region_id': u'RegionOne',
 179
+                        u'url': u'http://10.5.5.224:5000/v3'}],
 180
+         u'key-manager': [{u'id': u'68ebc17df0b045fcb8a8a433ebea9e62',
 181
+                           u'interface': u'public',
 182
+                           u'region': u'RegionOne',
 183
+                           u'region_id': u'RegionOne',
 184
+                           u'url': u'http://10.5.5.223:9311'},
 185
+                          {u'id': u'9cdfe2a893c34afd8f504eb218cd2f9d',
 186
+                           u'interface': u'internal',
 187
+                           u'region': u'RegionOne',
 188
+                           u'region_id': u'RegionOne',
 189
+                           u'url': u'http://10.5.5.223:9311'},
 190
+                          {u'id': u'f629388955bc407f8b11d8b7ca168086',
 191
+                           u'interface': u'admin',
 192
+                           u'region': u'RegionOne',
 193
+                           u'region_id': u'RegionOne',
 194
+                           u'url': u'http://10.5.5.223:9312'}]}
 195
+
 196
+        Note that an added complication is that the order of admin, public,
+        internal against 'interface' in each region is not guaranteed.
 198
+
 199
+        Thus, the function sorts the expected and actual lists using the
 200
+        interface key as a sort key, prior to the comparison.
 201
+        """
 202
+        self.log.debug('Validating v3 service catalog endpoint data...')
 203
+        self.log.debug('actual: {}'.format(repr(actual)))
 204
+        for k, v in six.iteritems(expected):
 205
+            if k in actual:
 206
+                l_expected = sorted(v, key=lambda x: x['interface'])
 207
+                l_actual = sorted(actual[k], key=lambda x: x['interface'])
 208
+                if len(l_actual) != len(l_expected):
 209
+                    return ("endpoint {} has differing number of interfaces "
 210
+                            " - expected({}), actual({})"
 211
+                            .format(k, len(l_expected), len(l_actual)))
 212
+                for i_expected, i_actual in zip(l_expected, l_actual):
 213
+                    self.log.debug("checking interface {}"
 214
+                                   .format(i_expected['interface']))
 215
+                    ret = self._validate_dict_data(i_expected, i_actual)
 216
+                    if ret:
 217
+                        return self.endpoint_error(k, ret)
 218
+            else:
 219
+                return "endpoint {} does not exist".format(k)
 220
+        return ret
 221
+
 222
+    def validate_tenant_data(self, expected, actual):
 223
+        """Validate tenant data.
 224
+
 225
+           Validate a list of actual tenant data vs list of expected tenant
 226
+           data.
 227
+           """
 228
+        self.log.debug('Validating tenant data...')
 229
+        self.log.debug('actual: {}'.format(repr(actual)))
 230
+        for e in expected:
 231
+            found = False
 232
+            for act in actual:
 233
+                a = {'enabled': act.enabled, 'description': act.description,
 234
+                     'name': act.name, 'id': act.id}
 235
+                if e['name'] == a['name']:
 236
+                    found = True
 237
+                    ret = self._validate_dict_data(e, a)
 238
+                    if ret:
 239
+                        return "unexpected tenant data - {}".format(ret)
 240
+            if not found:
 241
+                return "tenant {} does not exist".format(e['name'])
 242
+        return ret
 243
+
 244
+    def validate_role_data(self, expected, actual):
 245
+        """Validate role data.
 246
+
 247
+           Validate a list of actual role data vs a list of expected role
 248
+           data.
 249
+           """
 250
+        self.log.debug('Validating role data...')
 251
+        self.log.debug('actual: {}'.format(repr(actual)))
 252
+        for e in expected:
 253
+            found = False
 254
+            for act in actual:
 255
+                a = {'name': act.name, 'id': act.id}
 256
+                if e['name'] == a['name']:
 257
+                    found = True
 258
+                    ret = self._validate_dict_data(e, a)
 259
+                    if ret:
 260
+                        return "unexpected role data - {}".format(ret)
 261
+            if not found:
 262
+                return "role {} does not exist".format(e['name'])
 263
+        return ret
 264
+
 265
+    def validate_user_data(self, expected, actual, api_version=None):
 266
+        """Validate user data.
 267
+
 268
+           Validate a list of actual user data vs a list of expected user
 269
+           data.
 270
+           """
 271
+        self.log.debug('Validating user data...')
 272
+        self.log.debug('actual: {}'.format(repr(actual)))
 273
+        for e in expected:
 274
+            found = False
 275
+            for act in actual:
 276
+                if e['name'] == act.name:
 277
+                    a = {'enabled': act.enabled, 'name': act.name,
 278
+                         'email': act.email, 'id': act.id}
 279
+                    if api_version == 3:
 280
+                        a['default_project_id'] = getattr(act,
 281
+                                                          'default_project_id',
 282
+                                                          'none')
 283
+                    else:
 284
+                        a['tenantId'] = act.tenantId
 285
+                    found = True
 286
+                    ret = self._validate_dict_data(e, a)
 287
+                    if ret:
 288
+                        return "unexpected user data - {}".format(ret)
 289
+            if not found:
 290
+                return "user {} does not exist".format(e['name'])
 291
+        return ret
 292
+
 293
+    def validate_flavor_data(self, expected, actual):
 294
+        """Validate flavor data.
 295
+
 296
+           Validate a list of actual flavors vs a list of expected flavors.
 297
+           """
 298
+        self.log.debug('Validating flavor data...')
 299
+        self.log.debug('actual: {}'.format(repr(actual)))
 300
+        act = [a.name for a in actual]
 301
+        return self._validate_list_data(expected, act)
 302
+
 303
+    def tenant_exists(self, keystone, tenant):
 304
+        """Return True if tenant exists."""
 305
+        self.log.debug('Checking if tenant exists ({})...'.format(tenant))
 306
+        return tenant in [t.name for t in keystone.tenants.list()]
 307
+
 308
+    def authenticate_cinder_admin(self, keystone_sentry, username,
 309
+                                  password, tenant):
 310
+        """Authenticates admin user with cinder."""
 311
+        # NOTE(beisner): cinder python client doesn't accept tokens.
 312
+        keystone_ip = keystone_sentry.info['public-address']
 313
+        ept = "http://{}:5000/v2.0".format(keystone_ip.strip().decode('utf-8'))
 314
+        return cinder_client.Client(username, password, tenant, ept)
 315
+
 316
+    def authenticate_keystone_admin(self, keystone_sentry, user, password,
 317
+                                    tenant=None, api_version=None,
 318
+                                    keystone_ip=None):
 319
+        """Authenticates admin user with the keystone admin endpoint."""
 320
+        self.log.debug('Authenticating keystone admin...')
 321
+        if not keystone_ip:
 322
+            keystone_ip = keystone_sentry.info['public-address']
 323
+
 324
+        base_ep = "http://{}:35357".format(keystone_ip.strip().decode('utf-8'))
 325
+        if not api_version or api_version == 2:
 326
+            ep = base_ep + "/v2.0"
 327
+            return keystone_client.Client(username=user, password=password,
 328
+                                          tenant_name=tenant, auth_url=ep)
 329
+        else:
 330
+            ep = base_ep + "/v3"
 331
+            auth = keystone_id_v3.Password(
 332
+                user_domain_name='admin_domain',
 333
+                username=user,
 334
+                password=password,
 335
+                domain_name='admin_domain',
 336
+                auth_url=ep,
 337
+            )
 338
+            sess = keystone_session.Session(auth=auth)
 339
+            return keystone_client_v3.Client(session=sess)
 340
+
 341
+    def authenticate_keystone_user(self, keystone, user, password, tenant):
 342
+        """Authenticates a regular user with the keystone public endpoint."""
 343
+        self.log.debug('Authenticating keystone user ({})...'.format(user))
 344
+        ep = keystone.service_catalog.url_for(service_type='identity',
 345
+                                              endpoint_type='publicURL')
 346
+        return keystone_client.Client(username=user, password=password,
 347
+                                      tenant_name=tenant, auth_url=ep)
 348
+
 349
+    def authenticate_glance_admin(self, keystone):
 350
+        """Authenticates admin user with glance."""
 351
+        self.log.debug('Authenticating glance admin...')
 352
+        ep = keystone.service_catalog.url_for(service_type='image',
 353
+                                              endpoint_type='adminURL')
 354
+        return glance_client.Client(ep, token=keystone.auth_token)
 355
+
 356
+    def authenticate_heat_admin(self, keystone):
 357
+        """Authenticates the admin user with heat."""
 358
+        self.log.debug('Authenticating heat admin...')
 359
+        ep = keystone.service_catalog.url_for(service_type='orchestration',
 360
+                                              endpoint_type='publicURL')
 361
+        return heat_client.Client(endpoint=ep, token=keystone.auth_token)
 362
+
 363
+    def authenticate_nova_user(self, keystone, user, password, tenant):
 364
+        """Authenticates a regular user with nova-api."""
 365
+        self.log.debug('Authenticating nova user ({})...'.format(user))
 366
+        ep = keystone.service_catalog.url_for(service_type='identity',
 367
+                                              endpoint_type='publicURL')
 368
+        return nova_client.Client(NOVA_CLIENT_VERSION,
 369
+                                  username=user, api_key=password,
 370
+                                  project_id=tenant, auth_url=ep)
 371
+
 372
+    def authenticate_swift_user(self, keystone, user, password, tenant):
 373
+        """Authenticates a regular user with swift api."""
 374
+        self.log.debug('Authenticating swift user ({})...'.format(user))
 375
+        ep = keystone.service_catalog.url_for(service_type='identity',
 376
+                                              endpoint_type='publicURL')
 377
+        return swiftclient.Connection(authurl=ep,
 378
+                                      user=user,
 379
+                                      key=password,
 380
+                                      tenant_name=tenant,
 381
+                                      auth_version='2.0')
 382
+
 383
+    def create_cirros_image(self, glance, image_name):
 384
+        """Download the latest cirros image and upload it to glance,
 385
+        validate and return a resource pointer.
 386
+
 387
+        :param glance: pointer to authenticated glance connection
 388
+        :param image_name: display name for new image
 389
+        :returns: glance image pointer
 390
+        """
 391
+        self.log.debug('Creating glance cirros image '
 392
+                       '({})...'.format(image_name))
 393
+
 394
+        # Download cirros image
 395
+        http_proxy = os.getenv('AMULET_HTTP_PROXY')
 396
+        self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
 397
+        if http_proxy:
 398
+            proxies = {'http': http_proxy}
 399
+            opener = urllib.FancyURLopener(proxies)
 400
+        else:
 401
+            opener = urllib.FancyURLopener()
 402
+
 403
+        f = opener.open('http://download.cirros-cloud.net/version/released')
 404
+        version = f.read().strip()
 405
+        cirros_img = 'cirros-{}-x86_64-disk.img'.format(version)
 406
+        local_path = os.path.join('tests', cirros_img)
 407
+
 408
+        if not os.path.exists(local_path):
 409
+            cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net',
 410
+                                                  version, cirros_img)
 411
+            opener.retrieve(cirros_url, local_path)
 412
+        f.close()
 413
+
 414
+        # Create glance image
 415
+        with open(local_path) as f:
 416
+            image = glance.images.create(name=image_name, is_public=True,
 417
+                                         disk_format='qcow2',
 418
+                                         container_format='bare', data=f)
 419
+
 420
+        # Wait for image to reach active status
 421
+        img_id = image.id
 422
+        ret = self.resource_reaches_status(glance.images, img_id,
 423
+                                           expected_stat='active',
 424
+                                           msg='Image status wait')
 425
+        if not ret:
 426
+            msg = 'Glance image failed to reach expected state.'
 427
+            amulet.raise_status(amulet.FAIL, msg=msg)
 428
+
 429
+        # Re-validate new image
 430
+        self.log.debug('Validating image attributes...')
 431
+        val_img_name = glance.images.get(img_id).name
 432
+        val_img_stat = glance.images.get(img_id).status
 433
+        val_img_pub = glance.images.get(img_id).is_public
 434
+        val_img_cfmt = glance.images.get(img_id).container_format
 435
+        val_img_dfmt = glance.images.get(img_id).disk_format
 436
+        msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} '
 437
+                    'container fmt:{} disk fmt:{}'.format(
 438
+                        val_img_name, val_img_pub, img_id,
 439
+                        val_img_stat, val_img_cfmt, val_img_dfmt))
 440
+
 441
+        if val_img_name == image_name and val_img_stat == 'active' \
 442
+                and val_img_pub is True and val_img_cfmt == 'bare' \
 443
+                and val_img_dfmt == 'qcow2':
 444
+            self.log.debug(msg_attr)
 445
+        else:
 446
+            msg = ('Image validation failed, {}'.format(msg_attr))
 447
+            amulet.raise_status(amulet.FAIL, msg=msg)
 448
+
 449
+        return image
 450
+
 451
+    def delete_image(self, glance, image):
 452
+        """Delete the specified image."""
 453
+
 454
+        # /!\ DEPRECATION WARNING
 455
+        self.log.warn('/!\\ DEPRECATION WARNING:  use '
 456
+                      'delete_resource instead of delete_image.')
 457
+        self.log.debug('Deleting glance image ({})...'.format(image))
 458
+        return self.delete_resource(glance.images, image, msg='glance image')
 459
+
 460
+    def create_instance(self, nova, image_name, instance_name, flavor):
 461
+        """Create the specified instance."""
 462
+        self.log.debug('Creating instance '
 463
+                       '({}|{}|{})'.format(instance_name, image_name, flavor))
 464
+        image = nova.images.find(name=image_name)
 465
+        flavor = nova.flavors.find(name=flavor)
 466
+        instance = nova.servers.create(name=instance_name, image=image,
 467
+                                       flavor=flavor)
 468
+
 469
+        count = 1
 470
+        status = instance.status
 471
+        while status != 'ACTIVE' and count < 60:
 472
+            time.sleep(3)
 473
+            instance = nova.servers.get(instance.id)
 474
+            status = instance.status
 475
+            self.log.debug('instance status: {}'.format(status))
 476
+            count += 1
 477
+
 478
+        if status != 'ACTIVE':
 479
+            self.log.error('instance creation timed out')
 480
+            return None
 481
+
 482
+        return instance
 483
+
 484
+    def delete_instance(self, nova, instance):
 485
+        """Delete the specified instance."""
 486
+
 487
+        # /!\ DEPRECATION WARNING
 488
+        self.log.warn('/!\\ DEPRECATION WARNING:  use '
 489
+                      'delete_resource instead of delete_instance.')
 490
+        self.log.debug('Deleting instance ({})...'.format(instance))
 491
+        return self.delete_resource(nova.servers, instance,
 492
+                                    msg='nova instance')
 493
+
 494
+    def create_or_get_keypair(self, nova, keypair_name="testkey"):
 495
+        """Create a new keypair, or return pointer if it already exists."""
 496
+        try:
 497
+            _keypair = nova.keypairs.get(keypair_name)
 498
+            self.log.debug('Keypair ({}) already exists, '
 499
+                           'using it.'.format(keypair_name))
 500
+            return _keypair
 501
+        except Exception:
 502
+            self.log.debug('Keypair ({}) does not exist, '
 503
+                           'creating it.'.format(keypair_name))
 504
+
 505
+        _keypair = nova.keypairs.create(name=keypair_name)
 506
+        return _keypair
 507
+
 508
+    def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
 509
+                             img_id=None, src_vol_id=None, snap_id=None):
 510
+        """Create cinder volume, optionally from a glance image, OR
 511
+        optionally as a clone of an existing volume, OR optionally
 512
+        from a snapshot.  Wait for the new volume status to reach
 513
+        the expected status, validate and return a resource pointer.
 514
+
 515
+        :param vol_name: cinder volume display name
 516
+        :param vol_size: size in gigabytes
 517
+        :param img_id: optional glance image id
 518
+        :param src_vol_id: optional source volume id to clone
 519
+        :param snap_id: optional snapshot id to use
 520
+        :returns: cinder volume pointer
 521
+        """
 522
+        # Handle parameter input and avoid impossible combinations
 523
+        if img_id and not src_vol_id and not snap_id:
 524
+            # Create volume from image
 525
+            self.log.debug('Creating cinder volume from glance image...')
 526
+            bootable = 'true'
 527
+        elif src_vol_id and not img_id and not snap_id:
 528
+            # Clone an existing volume
 529
+            self.log.debug('Cloning cinder volume...')
 530
+            bootable = cinder.volumes.get(src_vol_id).bootable
 531
+        elif snap_id and not src_vol_id and not img_id:
 532
+            # Create volume from snapshot
 533
+            self.log.debug('Creating cinder volume from snapshot...')
 534
+            snap = cinder.volume_snapshots.find(id=snap_id)
 535
+            vol_size = snap.size
 536
+            snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
 537
+            bootable = cinder.volumes.get(snap_vol_id).bootable
 538
+        elif not img_id and not src_vol_id and not snap_id:
 539
+            # Create volume
 540
+            self.log.debug('Creating cinder volume...')
 541
+            bootable = 'false'
 542
+        else:
 543
+            # Impossible combination of parameters
 544
+            msg = ('Invalid method use - name:{} size:{} img_id:{} '
 545
+                   'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
 546
+                                                     img_id, src_vol_id,
 547
+                                                     snap_id))
 548
+            amulet.raise_status(amulet.FAIL, msg=msg)
 549
+
 550
+        # Create new volume
 551
+        try:
 552
+            vol_new = cinder.volumes.create(display_name=vol_name,
 553
+                                            imageRef=img_id,
 554
+                                            size=vol_size,
 555
+                                            source_volid=src_vol_id,
 556
+                                            snapshot_id=snap_id)
 557
+            vol_id = vol_new.id
 558
+        except Exception as e:
 559
+            msg = 'Failed to create volume: {}'.format(e)
 560
+            amulet.raise_status(amulet.FAIL, msg=msg)
 561
+
 562
+        # Wait for volume to reach available status
 563
+        ret = self.resource_reaches_status(cinder.volumes, vol_id,
 564
+                                           expected_stat="available",
 565
+                                           msg="Volume status wait")
 566
+        if not ret:
 567
+            msg = 'Cinder volume failed to reach expected state.'
 568
+            amulet.raise_status(amulet.FAIL, msg=msg)
 569
+
 570
+        # Re-validate new volume
 571
+        self.log.debug('Validating volume attributes...')
 572
+        val_vol_name = cinder.volumes.get(vol_id).display_name
 573
+        val_vol_boot = cinder.volumes.get(vol_id).bootable
 574
+        val_vol_stat = cinder.volumes.get(vol_id).status
 575
+        val_vol_size = cinder.volumes.get(vol_id).size
 576
+        msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
 577
+                    '{} size:{}'.format(val_vol_name, vol_id,
 578
+                                        val_vol_stat, val_vol_boot,
 579
+                                        val_vol_size))
 580
+
 581
+        if val_vol_boot == bootable and val_vol_stat == 'available' \
 582
+                and val_vol_name == vol_name and val_vol_size == vol_size:
 583
+            self.log.debug(msg_attr)
 584
+        else:
 585
+            msg = ('Volume validation failed, {}'.format(msg_attr))
 586
+            amulet.raise_status(amulet.FAIL, msg=msg)
 587
+
 588
+        return vol_new
 589
+
 590
+    def delete_resource(self, resource, resource_id,
 591
+                        msg="resource", max_wait=120):
 592
+        """Delete one openstack resource, such as one instance, keypair,
 593
+        image, volume, stack, etc., and confirm deletion within max wait time.
 594
+
 595
+        :param resource: pointer to os resource type, ex:glance_client.images
 596
+        :param resource_id: unique name or id for the openstack resource
 597
+        :param msg: text to identify purpose in logging
 598
+        :param max_wait: maximum wait time in seconds
 599
+        :returns: True if successful, otherwise False
 600
+        """
 601
+        self.log.debug('Deleting OpenStack resource '
 602
+                       '{} ({})'.format(resource_id, msg))
 603
+        num_before = len(list(resource.list()))
 604
+        resource.delete(resource_id)
 605
+
 606
+        tries = 0
 607
+        num_after = len(list(resource.list()))
 608
+        while num_after != (num_before - 1) and tries < (max_wait / 4):
 609
+            self.log.debug('{} delete check: '
 610
+                           '{} [{}:{}] {}'.format(msg, tries,
 611
+                                                  num_before,
 612
+                                                  num_after,
 613
+                                                  resource_id))
 614
+            time.sleep(4)
 615
+            num_after = len(list(resource.list()))
 616
+            tries += 1
 617
+
 618
+        self.log.debug('{}:  expected, actual count = {}, '
 619
+                       '{}'.format(msg, num_before - 1, num_after))
 620
+
 621
+        if num_after == (num_before - 1):
 622
+            return True
 623
+        else:
 624
+            self.log.error('{} delete timed out'.format(msg))
 625
+            return False
 626
+
 627
+    def resource_reaches_status(self, resource, resource_id,
 628
+                                expected_stat='available',
 629
+                                msg='resource', max_wait=120):
 630
+        """Wait for an openstack resources status to reach an
 631
+           expected status within a specified time.  Useful to confirm that
 632
+           nova instances, cinder vols, snapshots, glance images, heat stacks
 633
+           and other resources eventually reach the expected status.
 634
+
 635
+        :param resource: pointer to os resource type, ex: heat_client.stacks
 636
+        :param resource_id: unique id for the openstack resource
 637
+        :param expected_stat: status to expect resource to reach
 638
+        :param msg: text to identify purpose in logging
 639
+        :param max_wait: maximum wait time in seconds
 640
+        :returns: True if successful, False if status is not reached
 641
+        """
 642
+
 643
+        tries = 0
 644
+        resource_stat = resource.get(resource_id).status
 645
+        while resource_stat != expected_stat and tries < (max_wait / 4):
 646
+            self.log.debug('{} status check: '
 647
+                           '{} [{}:{}] {}'.format(msg, tries,
 648
+                                                  resource_stat,
 649
+                                                  expected_stat,
 650
+                                                  resource_id))
 651
+            time.sleep(4)
 652
+            resource_stat = resource.get(resource_id).status
 653
+            tries += 1
 654
+
 655
+        self.log.debug('{}:  expected, actual status = {}, '
 656
+                       '{}'.format(msg, expected_stat, resource_stat))
 657
+
 658
+        if resource_stat == expected_stat:
 659
+            return True
 660
+        else:
 661
+            self.log.debug('{} never reached expected status: '
 662
+                           '{}'.format(resource_id, expected_stat))
 663
+            return False
 664
+
 665
+    def get_ceph_osd_id_cmd(self, index):
 666
+        """Produce a shell command that will return a ceph-osd id."""
 667
+        return ("`initctl list | grep 'ceph-osd ' | "
 668
+                "awk 'NR=={} {{ print $2 }}' | "
 669
+                "grep -o '[0-9]*'`".format(index + 1))
 670
+
 671
+    def get_ceph_pools(self, sentry_unit):
 672
+        """Return a dict of ceph pools from a single ceph unit, with
 673
+        pool name as keys, pool id as vals."""
 674
+        pools = {}
 675
+        cmd = 'sudo ceph osd lspools'
 676
+        output, code = sentry_unit.run(cmd)
 677
+        if code != 0:
 678
+            msg = ('{} `{}` returned {} '
 679
+                   '{}'.format(sentry_unit.info['unit_name'],
 680
+                               cmd, code, output))
 681
+            amulet.raise_status(amulet.FAIL, msg=msg)
 682
+
 683
+        # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
 684
+        for pool in str(output).split(','):
 685
+            pool_id_name = pool.split(' ')
 686
+            if len(pool_id_name) == 2:
 687
+                pool_id = pool_id_name[0]
 688
+                pool_name = pool_id_name[1]
 689
+                pools[pool_name] = int(pool_id)
 690
+
 691
+        self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
 692
+                                                pools))
 693
+        return pools
 694
+
 695
+    def get_ceph_df(self, sentry_unit):
 696
+        """Return dict of ceph df json output, including ceph pool state.
 697
+
 698
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
 699
+        :returns: Dict of ceph df output
 700
+        """
 701
+        cmd = 'sudo ceph df --format=json'
 702
+        output, code = sentry_unit.run(cmd)
 703
+        if code != 0:
 704
+            msg = ('{} `{}` returned {} '
 705
+                   '{}'.format(sentry_unit.info['unit_name'],
 706
+                               cmd, code, output))
 707
+            amulet.raise_status(amulet.FAIL, msg=msg)
 708
+        return json.loads(output)
 709
+
 710
+    def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
 711
+        """Take a sample of attributes of a ceph pool, returning ceph
 712
+        pool name, object count and disk space used for the specified
 713
+        pool ID number.
 714
+
 715
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
 716
+        :param pool_id: Ceph pool ID
 717
+        :returns: List of pool name, object count, kb disk space used
 718
+        """
 719
+        df = self.get_ceph_df(sentry_unit)
 720
+        pool_name = df['pools'][pool_id]['name']
 721
+        obj_count = df['pools'][pool_id]['stats']['objects']
 722
+        kb_used = df['pools'][pool_id]['stats']['kb_used']
 723
+        self.log.debug('Ceph {} pool (ID {}): {} objects, '
 724
+                       '{} kb used'.format(pool_name, pool_id,
 725
+                                           obj_count, kb_used))
 726
+        return pool_name, obj_count, kb_used
 727
+
 728
+    def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
 729
+        """Validate ceph pool samples taken over time, such as pool
 730
+        object counts or pool kb used, before adding, after adding, and
 731
+        after deleting items which affect those pool attributes.  The
 732
+        2nd element is expected to be greater than the 1st; 3rd is expected
 733
+        to be less than the 2nd.
 734
+
 735
+        :param samples: List containing 3 data samples
 736
+        :param sample_type: String for logging and usage context
 737
+        :returns: None if successful, Failure message otherwise
 738
+        """
 739
+        original, created, deleted = range(3)
 740
+        if samples[created] <= samples[original] or \
 741
+                samples[deleted] >= samples[created]:
 742
+            return ('Ceph {} samples ({}) '
 743
+                    'unexpected.'.format(sample_type, samples))
 744
+        else:
 745
+            self.log.debug('Ceph {} samples (OK): '
 746
+                           '{}'.format(sample_type, samples))
 747
+            return None
 748
+
 749
+    # rabbitmq/amqp specific helpers:
 750
+
 751
+    def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
 752
+        """Wait for rmq units extended status to show cluster readiness,
 753
+        after an optional initial sleep period.  Initial sleep is likely
 754
+        necessary to be effective following a config change, as status
 755
+        message may not instantly update to non-ready."""
 756
+
 757
+        if init_sleep:
 758
+            time.sleep(init_sleep)
 759
+
 760
+        message = re.compile('^Unit is ready and clustered$')
 761
+        deployment._auto_wait_for_status(message=message,
 762
+                                         timeout=timeout,
 763
+                                         include_only=['rabbitmq-server'])
 764
+
 765
+    def add_rmq_test_user(self, sentry_units,
 766
+                          username="testuser1", password="changeme"):
 767
+        """Add a test user via the first rmq juju unit, check connection as
 768
+        the new user against all sentry units.
 769
+
 770
+        :param sentry_units: list of sentry unit pointers
 771
+        :param username: amqp user name, default to testuser1
 772
+        :param password: amqp user password
 773
+        :returns: None if successful.  Raise on error.
 774
+        """
 775
+        self.log.debug('Adding rmq user ({})...'.format(username))
 776
+
 777
+        # Check that user does not already exist
 778
+        cmd_user_list = 'rabbitmqctl list_users'
 779
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
 780
+        if username in output:
 781
+            self.log.warning('User ({}) already exists, returning '
 782
+                             'gracefully.'.format(username))
 783
+            return
 784
+
 785
+        perms = '".*" ".*" ".*"'
 786
+        cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
 787
+                'rabbitmqctl set_permissions {} {}'.format(username, perms)]
 788
+
 789
+        # Add user via first unit
 790
+        for cmd in cmds:
 791
+            output, _ = self.run_cmd_unit(sentry_units[0], cmd)
 792
+
 793
+        # Check connection against the other sentry_units
 794
+        self.log.debug('Checking user connect against units...')
 795
+        for sentry_unit in sentry_units:
 796
+            connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
 797
+                                                   username=username,
 798
+                                                   password=password)
 799
+            connection.close()
 800
+
 801
+    def delete_rmq_test_user(self, sentry_units, username="testuser1"):
 802
+        """Delete a rabbitmq user via the first rmq juju unit.
 803
+
 804
+        :param sentry_units: list of sentry unit pointers
 805
+        :param username: amqp user name, default to testuser1
 806
+        :param password: amqp user password
 807
+        :returns: None if successful or no such user.
 808
+        """
 809
+        self.log.debug('Deleting rmq user ({})...'.format(username))
 810
+
 811
+        # Check that the user exists
 812
+        cmd_user_list = 'rabbitmqctl list_users'
 813
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
 814
+
 815
+        if username not in output:
 816
+            self.log.warning('User ({}) does not exist, returning '
 817
+                             'gracefully.'.format(username))
 818
+            return
 819
+
 820
+        # Delete the user
 821
+        cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
 822
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
 823
+
 824
+    def get_rmq_cluster_status(self, sentry_unit):
 825
+        """Execute rabbitmq cluster status command on a unit and return
 826
+        the full output.
 827
+
 828
+        :param sentry_unit: sentry unit
 829
+        :returns: String containing console output of cluster status command
 830
+        """
 831
+        cmd = 'rabbitmqctl cluster_status'
 832
+        output, _ = self.run_cmd_unit(sentry_unit, cmd)
 833
+        self.log.debug('{} cluster_status:\n{}'.format(
 834
+            sentry_unit.info['unit_name'], output))
 835
+        return str(output)
 836
+
 837
+    def get_rmq_cluster_running_nodes(self, sentry_unit):
 838
+        """Parse rabbitmqctl cluster_status output string, return list of
 839
+        running rabbitmq cluster nodes.
 840
+
 841
+        :param sentry_unit: sentry unit
 842
+        :returns: List containing node names of running nodes
 843
+        """
 844
+        # NOTE(beisner): rabbitmqctl cluster_status output is not
 845
+        # json-parsable, do string chop foo, then json.loads that.
 846
+        str_stat = self.get_rmq_cluster_status(sentry_unit)
 847
+        if 'running_nodes' in str_stat:
 848
+            pos_start = str_stat.find("{running_nodes,") + 15
 849
+            pos_end = str_stat.find("]},", pos_start) + 1
 850
+            str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
 851
+            run_nodes = json.loads(str_run_nodes)
 852
+            return run_nodes
 853
+        else:
 854
+            return []
 855
+
 856
+    def validate_rmq_cluster_running_nodes(self, sentry_units):
 857
+        """Check that all rmq unit hostnames are represented in the
 858
+        cluster_status output of all units.
 859
+
 860
+        :param sentry_units: list of sentry unit pointers for all rmq
 861
+            units whose hostnames must appear in cluster_status output
 862
+        :returns: None if successful, otherwise return error message
 863
+        """
 864
+        host_names = self.get_unit_hostnames(sentry_units)
 865
+        errors = []
 866
+
 867
+        # Query every unit for cluster_status running nodes
 868
+        for query_unit in sentry_units:
 869
+            query_unit_name = query_unit.info['unit_name']
 870
+            running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
 871
+
 872
+            # Confirm that every unit is represented in the queried unit's
 873
+            # cluster_status running nodes output.
 874
+            for validate_unit in sentry_units:
 875
+                val_host_name = host_names[validate_unit.info['unit_name']]
 876
+                val_node_name = 'rabbit@{}'.format(val_host_name)
 877
+
 878
+                if val_node_name not in running_nodes:
 879
+                    errors.append('Cluster member check failed on {}: {} not '
 880
+                                  'in {}\n'.format(query_unit_name,
 881
+                                                   val_node_name,
 882
+                                                   running_nodes))
 883
+        if errors:
 884
+            return ''.join(errors)
 885
+
 886
+    def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
 887
+        """Check a single juju rmq unit for ssl and port in the config file."""
 888
+        host = sentry_unit.info['public-address']
 889
+        unit_name = sentry_unit.info['unit_name']
 890
+
 891
+        conf_file = '/etc/rabbitmq/rabbitmq.config'
 892
+        conf_contents = str(self.file_contents_safe(sentry_unit,
 893
+                                                    conf_file, max_wait=16))
 894
+        # Checks
 895
+        conf_ssl = 'ssl' in conf_contents
 896
+        conf_port = str(port) in conf_contents
 897
+
 898
+        # Port explicitly checked in config
 899
+        if port and conf_port and conf_ssl:
 900
+            self.log.debug('SSL is enabled  @{}:{} '
 901
+                           '({})'.format(host, port, unit_name))
 902
+            return True
 903
+        elif port and not conf_port and conf_ssl:
 904
+            self.log.debug('SSL is enabled @{} but not on port {} '
 905
+                           '({})'.format(host, port, unit_name))
 906
+            return False
 907
+        # Port not checked (useful when checking that ssl is disabled)
 908
+        elif not port and conf_ssl:
 909
+            self.log.debug('SSL is enabled  @{}:{} '
 910
+                           '({})'.format(host, port, unit_name))
 911
+            return True
 912
+        elif not conf_ssl:
 913
+            self.log.debug('SSL not enabled @{}:{} '
 914
+                           '({})'.format(host, port, unit_name))
 915
+            return False
 916
+        else:
 917
+            msg = ('Unknown condition when checking SSL status @{}:{} '
 918
+                   '({})'.format(host, port, unit_name))
 919
+            amulet.raise_status(amulet.FAIL, msg)
 920
+
 921
+    def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
 922
+        """Check that ssl is enabled on rmq juju sentry units.
 923
+
 924
+        :param sentry_units: list of all rmq sentry units
 925
+        :param port: optional ssl port override to validate
 926
+        :returns: None if successful, otherwise return error message
 927
+        """
 928
+        for sentry_unit in sentry_units:
 929
+            if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
 930
+                return ('Unexpected condition:  ssl is disabled on unit '
 931
+                        '({})'.format(sentry_unit.info['unit_name']))
 932
+        return None
 933
+
 934
+    def validate_rmq_ssl_disabled_units(self, sentry_units):
 935
+        """Check that ssl is enabled on listed rmq juju sentry units.
 936
+
 937
+        :param sentry_units: list of all rmq sentry units
 938
+        :returns: None if successful, otherwise return error message
 939
+        """
 940
+        for sentry_unit in sentry_units:
 941
+            if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
 942
+                return ('Unexpected condition:  ssl is enabled on unit '
 943
+                        '({})'.format(sentry_unit.info['unit_name']))
 944
+        return None
 945
+
 946
+    def configure_rmq_ssl_on(self, sentry_units, deployment,
 947
+                             port=None, max_wait=60):
 948
+        """Turn ssl charm config option on, with optional non-default
 949
+        ssl port specification.  Confirm that it is enabled on every
 950
+        unit.
 951
+
 952
+        :param sentry_units: list of sentry units
 953
+        :param deployment: amulet deployment object pointer
 954
+        :param port: amqp port, use defaults if None
 955
+        :param max_wait: maximum time to wait in seconds to confirm
 956
+        :returns: None if successful.  Raise on error.
 957
+        """
 958
+        self.log.debug('Setting ssl charm config option:  on')
 959
+
 960
+        # Enable RMQ SSL
 961
+        config = {'ssl': 'on'}
 962
+        if port:
 963
+            config['ssl_port'] = port
 964
+
 965
+        deployment.d.configure('rabbitmq-server', config)
 966
+
 967
+        # Wait for unit status
 968
+        self.rmq_wait_for_cluster(deployment)
 969
+
 970
+        # Confirm
 971
+        tries = 0
 972
+        ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
 973
+        while ret and tries < (max_wait / 4):
 974
+            time.sleep(4)
 975
+            self.log.debug('Attempt {}: {}'.format(tries, ret))
 976
+            ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
 977
+            tries += 1
 978
+
 979
+        if ret:
 980
+            amulet.raise_status(amulet.FAIL, ret)
 981
+
 982
+    def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
 983
+        """Turn ssl charm config option off, confirm that it is disabled
 984
+        on every unit.
 985
+
 986
+        :param sentry_units: list of sentry units
 987
+        :param deployment: amulet deployment object pointer
 988
+        :param max_wait: maximum time to wait in seconds to confirm
 989
+        :returns: None if successful.  Raise on error.
 990
+        """
 991
+        self.log.debug('Setting ssl charm config option:  off')
 992
+
 993
+        # Disable RMQ SSL
 994
+        config = {'ssl': 'off'}
 995
+        deployment.d.configure('rabbitmq-server', config)
 996
+
 997
+        # Wait for unit status
 998
+        self.rmq_wait_for_cluster(deployment)
 999
+
1000
+        # Confirm
1001
+        tries = 0
1002
+        ret = self.validate_rmq_ssl_disabled_units(sentry_units)
1003
+        while ret and tries < (max_wait / 4):
1004
+            time.sleep(4)
1005
+            self.log.debug('Attempt {}: {}'.format(tries, ret))
1006
+            ret = self.validate_rmq_ssl_disabled_units(sentry_units)
1007
+            tries += 1
1008
+
1009
+        if ret:
1010
+            amulet.raise_status(amulet.FAIL, ret)
1011
+
1012
+    def connect_amqp_by_unit(self, sentry_unit, ssl=False,
1013
+                             port=None, fatal=True,
1014
+                             username="testuser1", password="changeme"):
1015
+        """Establish and return a pika amqp connection to the rabbitmq service
1016
+        running on a rmq juju unit.
1017
+
1018
+        :param sentry_unit: sentry unit pointer
1019
+        :param ssl: boolean, default to False
1020
+        :param port: amqp port, use defaults if None
1021
+        :param fatal: boolean, default to True (raises on connect error)
1022
+        :param username: amqp user name, default to testuser1
1023
+        :param password: amqp user password
1024
+        :returns: pika amqp connection pointer or None if failed and non-fatal
1025
+        """
1026
+        host = sentry_unit.info['public-address']
1027
+        unit_name = sentry_unit.info['unit_name']
1028
+
1029
+        # Default port logic if port is not specified
1030
+        if ssl and not port:
1031
+            port = 5671
1032
+        elif not ssl and not port:
1033
+            port = 5672
1034
+
1035
+        self.log.debug('Connecting to amqp on {}:{} ({}) as '
1036
+                       '{}...'.format(host, port, unit_name, username))
1037
+
1038
+        try:
1039
+            credentials = pika.PlainCredentials(username, password)
1040
+            parameters = pika.ConnectionParameters(host=host, port=port,
1041
+                                                   credentials=credentials,
1042
+                                                   ssl=ssl,
1043
+                                                   connection_attempts=3,
1044
+                                                   retry_delay=5,
1045
+                                                   socket_timeout=1)
1046
+            connection = pika.BlockingConnection(parameters)
1047
+            assert connection.is_open is True
1048
+            assert connection.is_closing is False
1049
+            self.log.debug('Connect OK')
1050
+            return connection
1051
+        except Exception as e:
1052
+            msg = ('amqp connection failed to {}:{} as '
1053
+                   '{} ({})'.format(host, port, username, str(e)))
1054
+            if fatal:
1055
+                amulet.raise_status(amulet.FAIL, msg)
1056
+            else:
1057
+                self.log.warn(msg)
1058
+                return None
1059
+
1060
+    def publish_amqp_message_by_unit(self, sentry_unit, message,
1061
+                                     queue="test", ssl=False,
1062
+                                     username="testuser1",
1063
+                                     password="changeme",
1064
+                                     port=None):
1065
+        """Publish an amqp message to a rmq juju unit.
1066
+
1067
+        :param sentry_unit: sentry unit pointer
1068
+        :param message: amqp message string
1069
+        :param queue: message queue, default to test
1070
+        :param username: amqp user name, default to testuser1
1071
+        :param password: amqp user password
1072
+        :param ssl: boolean, default to False
1073
+        :param port: amqp port, use defaults if None
1074
+        :returns: None.  Raises exception if publish failed.
1075
+        """
1076
+        self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
1077
+                                                                    message))
1078
+        connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
1079
+                                               port=port,
1080
+                                               username=username,
1081
+                                               password=password)
1082
+
1083
+        # NOTE(beisner): extra debug here re: pika hang potential:
1084
+        #   https://github.com/pika/pika/issues/297
1085
+        #   https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
1086
+        self.log.debug('Defining channel...')
1087
+        channel = connection.channel()
1088
+        self.log.debug('Declaring queue...')
1089
+        channel.queue_declare(queue=queue, auto_delete=False, durable=True)
1090
+        self.log.debug('Publishing message...')
1091
+        channel.basic_publish(exchange='', routing_key=queue, body=message)
1092
+        self.log.debug('Closing channel...')
1093
+        channel.close()
1094
+        self.log.debug('Closing connection...')
1095
+        connection.close()
1096
+
1097
+    def get_amqp_message_by_unit(self, sentry_unit, queue="test",
1098
+                                 username="testuser1",
1099
+                                 password="changeme",
1100
+                                 ssl=False, port=None):
1101
+        """Get an amqp message from a rmq juju unit.
1102
+
1103
+        :param sentry_unit: sentry unit pointer
1104
+        :param queue: message queue, default to test
1105
+        :param username: amqp user name, default to testuser1
1106
+        :param password: amqp user password
1107
+        :param ssl: boolean, default to False
1108
+        :param port: amqp port, use defaults if None
1109
+        :returns: amqp message body as string.  Raise if get fails.
1110
+        """
1111
+        connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
1112
+                                               port=port,
1113
+                                               username=username,
1114
+                                               password=password)
1115
+        channel = connection.channel()
1116
+        method_frame, _, body = channel.basic_get(queue)
1117
+
1118
+        if method_frame:
1119
+            self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
1120
+                                                                         body))
1121
+            channel.basic_ack(method_frame.delivery_tag)
1122
+            channel.close()
1123
+            connection.close()
1124
+            return body
1125
+        else:
1126
+            msg = 'No message retrieved.'
1127
+            amulet.raise_status(amulet.FAIL, msg)
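
The file above is the shared charm-helpers amulet utility library that the charm's `basic_deployment.py` tests build on. As a rough, illustrative sketch only (not code shipped with this charm), a test could chain these helpers roughly as follows; `keystone` stands in for an already-authenticated keystone client obtained from the authentication helpers earlier in this file, and the user, tenant and flavor names are placeholders:

```
# Illustrative sketch only: `keystone`, the credentials and the flavor name
# are placeholders, not values taken from this charm.
from charmhelpers.contrib.openstack.amulet.utils import (
    OpenStackAmuletUtils,
    DEBUG,
)

u = OpenStackAmuletUtils(DEBUG)

# Authenticate service clients against the existing keystone session.
glance = u.authenticate_glance_admin(keystone)
nova = u.authenticate_nova_user(keystone, user='demo',
                                password='password', tenant='demo')

# Upload a cirros image, boot an instance from it, then delete both,
# waiting for each resource to reach the expected state.
image = u.create_cirros_image(glance, 'cirros-test-image')
instance = u.create_instance(nova, 'cirros-test-image', 'demo-instance',
                             'm1.tiny')
assert instance is not None, 'instance never reached ACTIVE'

u.delete_resource(nova.servers, instance.id, msg='nova instance')
u.delete_resource(glance.images, image.id, msg='glance image')
```

The create helpers raise an amulet failure if the resource never reaches the expected state, and delete_resource returns False on timeout, so the calling test needs little extra polling.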

tests/gate-basic-trusty-mitaka

 1
--- 
 2
+++ tests/gate-basic-trusty-mitaka
 3
@@ -0,0 +1,25 @@
 4
+#!/usr/bin/env python
 5
+#
 6
+# Copyright 2016 Canonical Ltd
 7
+#
 8
+# Licensed under the Apache License, Version 2.0 (the "License");
 9
+# you may not use this file except in compliance with the License.
10
+# You may obtain a copy of the License at
11
+#
12
+#  http://www.apache.org/licenses/LICENSE-2.0
13
+#
14
+# Unless required by applicable law or agreed to in writing, software
15
+# distributed under the License is distributed on an "AS IS" BASIS,
16
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
17
+# See the License for the specific language governing permissions and
18
+# limitations under the License.
19
+
20
+"""Amulet tests on a basic client cinder-ceph deployment on trusty-mitaka."""
21
+
22
+from basic_deployment import ClientCinderSpectrumBasicDeployment
23
+
24
+if __name__ == '__main__':
25
+    deployment = ClientCinderSpectrumBasicDeployment(series='trusty',
26
+                                           openstack='cloud:trusty-mitaka',
27
+                                           source='cloud:trusty-updates/mitaka')
28
+    deployment.run_tests()

tests/gate-basic-xenial-mitaka

 1
--- 
 2
+++ tests/gate-basic-xenial-mitaka
 3
@@ -0,0 +1,23 @@
 4
+#!/usr/bin/env python
 5
+#
 6
+# Copyright 2016 Canonical Ltd
 7
+#
 8
+# Licensed under the Apache License, Version 2.0 (the "License");
 9
+# you may not use this file except in compliance with the License.
10
+# You may obtain a copy of the License at
11
+#
12
+#  http://www.apache.org/licenses/LICENSE-2.0
13
+#
14
+# Unless required by applicable law or agreed to in writing, software
15
+# distributed under the License is distributed on an "AS IS" BASIS,
16
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
17
+# See the License for the specific language governing permissions and
18
+# limitations under the License.
19
+
20
+"""Amulet tests on a basic client cinder-Spectrumscale deployment on xenial-mitaka."""
21
+
22
+from basic_deployment import ClientCinderSpectrumBasicDeployment
23
+
24
+if __name__ == '__main__':
25
+    deployment = ClientCinderSpectrumBasicDeployment(series='xenial')
26
+    deployment.run_tests()

tests/tests.yaml

 1
--- 
 2
+++ tests/tests.yaml
 3
@@ -0,0 +1,17 @@
 4
+# Bootstrap the model if necessary.
 5
+bootstrap: True
 6
+# Re-use bootstrap node.
 7
+reset: True
 8
+# Use tox/requirements to drive the venv instead of bundletester's venv feature.
 9
+virtualenv: False
10
+# Leave makefile empty, otherwise unit/lint tests will rerun ahead of amulet.
11
+makefile: []
12
+# Do not specify juju PPA sources.  Juju is presumed to be pre-installed
13
+# and configured in all test runner environments.
14
+#sources:
15
+# Do not specify or rely on system packages.
16
+#packages:
17
+# Do not specify python packages here.  Use test-requirements.txt
18
+# and tox instead.  ie. The venv is constructed before bundletester
19
+# is invoked.
20
+#python-packages:

tox.ini

 1
--- 
 2
+++ tox.ini
 3
@@ -0,0 +1,12 @@
 4
+[tox]
 5
+skipsdist=True
 6
+envlist = py34, py35
 7
+skip_missing_interpreters = True
 8
+
 9
+[testenv]
10
+commands = py.test -v
11
+deps =
12
+    -r{toxinidir}/requirements.txt
13
+
14
+[flake8]
15
+exclude=docs