~ibmcharmers/ibm-spectrum-scale-manager

Owner: shilkaul
Status: Needs Fixing
Vote: -1 (+2 needed for approval)

CPP?: No
OIL?: No

This charm is for IBM Spectrum Scale Manager. IBM Spectrum Scale comprises two charms: IBM Spectrum Scale Manager and IBM Spectrum Scale Client.
This charm deploys IBM Spectrum Scale and creates a Spectrum Scale cluster (manager nodes only).

Its source code can be found in the following repository:
Repo: https://code.launchpad.net/~ibmcharmers/ibmlayers/layer-ibm-spectrum-scale-manager


Tests

Substrate Status Results Last Updated
aws RETRY 19 days ago
gce RETRY 19 days ago
lxc RETRY 19 days ago

Voted: -1
petevg wrote 3 months ago
Hello,

Thank you for your work on this charm. I did find a blocking issue while reviewing it:

build_modules does not raise errors in the event that it fails to run mmbuildgpl. This means that hooks that call it are not idempotent -- when they are finished running, the modules may or may not be built, without a corresponding difference in status on the charm. I believe that a simple `raise` after each logging message will suffice to fix it and make it behave as expected.

There appears to be a similar issue in setadd_hostname and add_node -- they log Exceptions, but do not re-raise them, meaning they can fail silently, potentially leaving the charm in an unknown state.

(cluster_exists, check_designation and other routines similarly swallow Exceptions. In those cases, however, the calling routine checks for a falsey return value, which means that the charm does the right thing in the event of failure.)

Thank you again for your work on this charm, and please feel free to ping me if you have any questions about the above.
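For illustration, a minimal sketch of the log-and-re-raise pattern the reviewer suggests, written against a hypothetical `build_modules` helper (the charm's actual reactive code is not shown in this diff, so the helper name, path, and messages below are assumptions):

    import subprocess

    from charmhelpers.core import hookenv


    def build_modules():
        """Build the GPFS portability layer; fail loudly if mmbuildgpl fails."""
        try:
            subprocess.check_call(['/usr/lpp/mmfs/bin/mmbuildgpl'])
        except subprocess.CalledProcessError as exc:
            hookenv.log('mmbuildgpl failed: {}'.format(exc), hookenv.ERROR)
            hookenv.status_set('blocked', 'GPFS kernel module build failed')
            raise  # re-raise so the calling hook errors out instead of failing silently

With the `raise` in place, a failed module build surfaces as a hook error on the unit instead of a silent success.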



Policy Checklist

Description Unreviewed Pass Fail

General

Must verify that any software installed or utilized is verified as coming from the intended source.
  • Any software installed from the Ubuntu or CentOS default archives satisfies this due to the apt and yum sources including cryptographic signing information.
  • Third party repositories must be listed as a configuration option that can be overridden by the user and not hard coded in the charm itself.
  • Launchpad PPAs are acceptable as the add-apt-repository command retrieves the keys securely.
  • Other third party repositories are acceptable if the signing key is embedded in the charm.
Must provide a means to protect users from known security vulnerabilities in a way consistent with best practices as defined by either operating system policies or upstream documentation.
Basically, this means there must be instructions on how to apply updates if you use software not from distribution channels.
Must have hooks that are idempotent. petevg
Should be built using charm layers.
Should use Juju Resources to deliver required payloads.

Testing and Quality

charm proof must pass without errors or warnings.
Must include passing unit, functional, or integration tests.
Tests must exercise all relations.
Tests must exercise config.
set-config, unset-config, and re-set must be tested as a minimum
Must not use anything infrastructure-provider specific (i.e. querying EC2 metadata service).
Must be self contained unless the charm is a proxy for an existing cloud service, e.g. ec2-elb charm.
Must not use symlinks.
Bundles must only use promulgated charms, they cannot reference charms in personal namespaces.
Must call Juju hook tools (relation-*, unit-*, config-*, etc) without a hard coded path.
Should include a tests.yaml for all integration tests.

Metadata

Must include a full description of what the software does.
Must include a maintainer email address for a team or individual who will be responsive to contact.
Must include a license. Call the file 'copyright' and make sure all files' licenses are specified clearly.
Must be under a Free license.
Must have a well documented and valid README.md.
Must describe the service.
Must describe how it interacts with other services, if applicable.
Must document the interfaces.
Must show how to deploy the charm.
Must define external dependencies, if applicable.
Should link to a recommend production usage bundle and recommended configuration if this differs from the default.
Should reference and link to upstream documentation and best practices.

Security

Must not run any network services using default passwords.
Must verify and validate any external payload
  • Known and understood packaging systems that verify packages like apt, pip, and yum are ok.
  • wget | sh style is not ok.
Should make use of whatever Mandatory Access Control system is provided by the distribution.
Should avoid running services as root.


Source Diff


Back to file index

Makefile

 1
--- 
 2
+++ Makefile
 3
@@ -0,0 +1,24 @@
 4
+#!/usr/bin/make
 5
+
 6
+all: lint unit_test
 7
+
 8
+
 9
+.PHONY: clean
10
+clean:
11
+	@rm -rf .tox
12
+
13
+.PHONY: apt_prereqs
14
+apt_prereqs:
15
+	@# Need tox, but don't install the apt version unless we have to (don't want to conflict with pip)
16
+	@which tox >/dev/null || (sudo apt-get install -y python-pip && sudo pip install tox)
17
+
18
+.PHONY: lint
19
+lint: apt_prereqs
20
+	@tox --notest
21
+	@PATH=.tox/py34/bin:.tox/py35/bin flake8 $(wildcard hooks reactive lib unit_tests tests)
22
+	@charm proof
23
+
24
+.PHONY: unit_test
25
+unit_test: apt_prereqs
26
+	@echo Starting tests...
27
+	tox
Back to file index

README.md

  1
--- 
  2
+++ README.md
  3
@@ -0,0 +1,171 @@
  4
+Charm for IBM Spectrum Scale (GPFS) Manager V 4.2.2
  5
+
  6
+
  7
+Overview
  8
+-----
  9
+
 10
+IBM Spectrum Scale Manager
 11
+
 12
+IBM Spectrum Scale or GPFS provides simplified data management and integrated information lifecycle tools capable of managing petabytes of data and billions 
 13
+of files, in order to arrest the growing cost of managing ever growing amounts of data.
 14
+
 15
+A `manager node` is any server that has the Spectrum Scale product installed, with direct storage access or network access to another node.
 16
+A manager node will be part of the node pool from which file system managers and token managers can be selected.
 17
+
 18
+For details on Spectrum Scale, as well as information on purchasing, please visit:
 19
+[Product Page] [product-page] and at the [Passport Advantage Site] [passport-spectrum-scale]
 20
+
 21
+***Note that due to the GPFS kernel module, this charm will not work in a LXC/LXD container environment.***
 22
+
 23
+
 24
+Prerequisites
 25
+-------------
 26
+
 27
+This charm makes use of resources, a feature only available in Juju 2.0. During deploy, you will need to specify the installable package(s)
 28
+required by this charm. Download your licensed `IBM Spectrum Scale Standard 4.2.2` version for Ubuntu. To acquire and download IBM Spectrum Scale, follow instructions available at the [Product Page] [product-page]. 
 29
+
 30
+This charm will deploy only the Standard edition for IBM Spectrum Scale. 
 31
+
 32
+For `x86_64 Ubuntu`, the package and part number is:
 33
+    
 34
+    	IBM Spectrum Scale Standard 4.2.2 Linux for x86Series English (CNEP7EN) 
 35
+
 36
+For `Power Ubuntu`, the package and part number is:
 37
+ 
 38
+        IBM Spectrum Scale Standard 4.2.2 Linux PWR8 LE English (CNEP8EN)
 39
+
 40
+
 41
+Usage
 42
+------
 43
+To use this charm, you must agree to the Terms of Use. You can view the full license for IBM Spectrum Scale by visiting 
 44
+the [Software license agreements search website][license-info]. Search for `"IBM Spectrum Scale, V4.2.2"` and choose the license that applies to the version you are using.
 45
+The charm will automatically create a filesystem when the user makes use of the Juju storage feature by specifying the storage parameter. In case the user has a specific requirement,
 46
+such as making use of shared disks (SAN), the charm will just install Spectrum Scale and create a cluster; it will not automatically create a filesystem. The user has to create
 47
+the filesystem manually later.
 48
+
 49
+
 50
+Deploy
 51
+------
 52
+
 53
+Run the following commands to deploy this charm:
 54
+Based upon the user's requirement for creating filesystems, the charm deployment commands will differ. If you want the default filesystem created automatically for you, specify
 55
+the storage requirement while deploying the charm as shown below:
 56
+
 57
+    juju deploy ibm-spectrum-scale-manager --resource ibm_spectrum_scale_installer_manager=</path/to/installer.tar.gz>  --storage disks=ebs,1G   
 58
+   
 59
+In case you don't want to make use of the Juju storage feature and want to create the filesystem manually, then just run the deploy command below.
 60
+
 61
+    juju deploy ibm-spectrum-scale-manager --resource ibm_spectrum_scale_installer_manager=</path/to/installer.tar.gz>
 62
+**Note**: This charm requires acceptance of Terms of Use. When deploying from the Charm Store, these terms will be presented to you for your consideration.
 63
+To accept the terms:
 64
+
 65
+    juju agree ibm-spectrum-scale/1
 66
+
 67
+The IBM Spectrum Scale Manager charm will be deployed only after you have agreed to the Terms.
 68
+
 69
+**Note: A minimum of two nodes (Spectrum Scale manager units) is required to create a Spectrum Scale cluster.**
 70
+
 71
+Each manager unit will be assigned a `server license` and the node designation `quorum`.
 72
+In case you specify storage disks at deployment time or attach disks later, the charm will create a default filesystem called `fs1`
 73
+with a block size of `256K`, mounted at `/gpfs`.
 74
+
 75
+
 76
+Installation Verification
 77
+-------------------------
 78
+To verify that the node is added successfully, run the following commands:
 79
+
 80
+1) Go to the machine where Spectrum Scale manager is installed.
 81
+
 82
+2) Go to the Spectrum Scale bin folder path: `/usr/lpp/mmfs/bin`.
 83
+
 84
+3) Root permission is needed to run most of the commands, so do `sudo su` to run them as the root user.
 85
+
 86
+4) Run the `mmlscluster` command to display cluster information or `mmgetstate` command to see the status of the nodes.
 87
+
 88
+5) You can issue the command `df -h` to see whether the GPFS filesystem (`fs1`) created by the charm is listed; otherwise you can create your own customized filesystem.
 89
+
 90
+
 91
+### Adding more units of Spectrum Scale Manager
 92
+To add more units of Spectrum Scale Manager, run the below command:
 93
+
 94
+    juju add-unit ibm-spectrum-scale-manager
 95
+Each unit added will add a quorum designated node to the existing Spectrum Scale cluster.
 96
+
 97
+
 98
+
 99
+### Upgrade
100
+
101
+Once deployed, users can install fixpacks by upgrading the charm:
102
+
103
+    juju attach ibm-spectrum-scale-manager ibm_spectrum_scale_manager_fixpack=</path/to/fixpack.tar.gz>
104
+Provide the fixpack as a *.tar.gz archive.
105
+If the Spectrum Scale manager units are updated, please update the Spectrum Scale client units as well. Both `Manager` and `Client` nodes should be at the same Spectrum Scale version.
106
+
107
+
108
+
109
+### Removing unit
110
+
111
+To remove a unit of Spectrum Scale manager, run the below step:
112
+
113
+    juju remove-unit <ibm-spectrum-scale-manager/unit-no>
114
+    
115
+**Please Note: The removal of a manager node from the Spectrum Scale cluster depends on whether the node has disks attached or is an NSD server. If the node does not have disks attached or is not an NSD server for an existing filesystem, it will be deleted from the cluster without any user intervention required.
116
+But if the node is an NSD server and has data, the charm will error out. This is done so that the storage admin/user can delete the disks and take appropriate action based upon the filesystem and NSD server requirements. Until the user takes appropriate action, the charm will remain in an error state. Once the node is no longer part of a filesystem or an NSD server, the charm will come out of the error state and the node will be deleted from the cluster.**
117
+
118
+A `Spectrum Scale cluster` uses a cluster mechanism called `quorum` to maintain data consistency in the event of a node failure. Quorum operates on the principle of majority rule. If only two units of the Spectrum Scale cluster remain and the user removes one of them, the Spectrum Scale cluster will no longer exist. At least two units must remain active for the Spectrum Scale cluster to keep functioning.
119
+
120
+
121
+
122
+### Removing Relation 
123
+
124
+The IBM Spectrum Scale Manager charm is related to the IBM Spectrum Scale Client charm. To remove the relation between them, run the following command:
125
+
126
+    juju remove-relation ibm-spectrum-scale-client ibm-spectrum-scale-manager
127
+
128
+This will remove the client node from the Spectrum Scale cluster. The GPFS file system will be unmounted before deleting the client node.
129
+
130
+
131
+
132
+
133
+IBM Spectrum Scale Information
134
+----------------
135
+(1) General Information
136
+
137
+Information on IBM Spectrum Scale is available at the [Product Page] [product-page]
138
+
139
+(2) Download Information
140
+
141
+Information on procuring the IBM Spectrum Scale product is available at the 
142
+[Passport Advantage Site][passport-spectrum-scale]
143
+
144
+(3) Spectrum Scale Infocenter
145
+
146
+For more details about how Spectrum Scale works, refer to the Spectrum Scale Knowledge Center:
147
+[IBM Spectrum Scale Knowledge Center][spectrum-scale-knowledgecenter]
148
+
149
+(4) License
150
+
151
+License information for IBM Spectrum Scale can be viewed at the
152
+[Software license agreements search website][license-info]
153
+
154
+(5) Contact Information
155
+
156
+For issues with this charm, please contact IBM Juju Support Team <jujusupp@us.ibm.com>
157
+
158
+(6) Known Limitations
159
+
160
+This charm makes use of Juju features that are only available in version `2.0` or
161
+greater.
162
+
163
+
164
+<!-- Links -->
165
+
166
+[product-page]: http://www-03.ibm.com/software/products/en/software
167
+
168
+[passport-spectrum-scale]: http://www-01.ibm.com/software/passportadvantage/
169
+
170
+[gpfs-info]: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General+Parallel+File+System+%28GPFS%29/page/Linux
171
+
172
+[license-info]: http://www-03.ibm.com/software/sla/sladb.nsf/search
173
+
174
+[spectrum-scale-knowledgecenter]: https://www.ibm.com/support/knowledgecenter/STXKQY_4.2.0/ibmspectrumscale42_welcome.html
Back to file index

bin/layer_option

 1
--- 
 2
+++ bin/layer_option
 3
@@ -0,0 +1,24 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+import sys
 7
+sys.path.append('lib')
 8
+
 9
+import argparse
10
+from charms.layer import options
11
+
12
+
13
+parser = argparse.ArgumentParser(description='Access layer options.')
14
+parser.add_argument('section',
15
+                    help='the section, or layer, the option is from')
16
+parser.add_argument('option',
17
+                    help='the option to access')
18
+
19
+args = parser.parse_args()
20
+value = options(args.section).get(args.option, '')
21
+if isinstance(value, bool):
22
+    sys.exit(0 if value else 1)
23
+elif isinstance(value, list):
24
+    for val in value:
25
+        print(val)
26
+else:
27
+    print(value)
Back to file index

copyright

 1
--- 
 2
+++ copyright
 3
@@ -0,0 +1,13 @@
 4
+Copyright 2016 IBM Corporation
 5
+
 6
+This Charm is licensed under the Apache License, Version 2.0 (the "License");
 7
+you may not use this file except in compliance with the License.
 8
+You may obtain a copy of the License at
 9
+
10
+    http://www.apache.org/licenses/LICENSE-2.0
11
+
12
+Unless required by applicable law or agreed to in writing, software
13
+distributed under the License is distributed on an "AS IS" BASIS,
14
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+See the License for the specific language governing permissions and
16
+limitations under the License.
Back to file index

hooks/config-changed

 1
--- 
 2
+++ hooks/config-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/disks-storage-attached

 1
--- 
 2
+++ hooks/disks-storage-attached
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/disks-storage-detaching

 1
--- 
 2
+++ hooks/disks-storage-detaching
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/gpfsmanager-relation-broken

 1
--- 
 2
+++ hooks/gpfsmanager-relation-broken
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/gpfsmanager-relation-changed

 1
--- 
 2
+++ hooks/gpfsmanager-relation-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/gpfsmanager-relation-departed

 1
--- 
 2
+++ hooks/gpfsmanager-relation-departed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/gpfsmanager-relation-joined

 1
--- 
 2
+++ hooks/gpfsmanager-relation-joined
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/hook.template

 1
--- 
 2
+++ hooks/hook.template
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/install

 1
--- 
 2
+++ hooks/install
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/leader-elected

 1
--- 
 2
+++ hooks/leader-elected
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/leader-settings-changed

 1
--- 
 2
+++ hooks/leader-settings-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/quorum-relation-broken

 1
--- 
 2
+++ hooks/quorum-relation-broken
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/quorum-relation-changed

 1
--- 
 2
+++ hooks/quorum-relation-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/quorum-relation-departed

 1
--- 
 2
+++ hooks/quorum-relation-departed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/quorum-relation-joined

 1
--- 
 2
+++ hooks/quorum-relation-joined
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/relations/gpfs/README.md

 1
--- 
 2
+++ hooks/relations/gpfs/README.md
 3
@@ -0,0 +1,61 @@
 4
+Overview
 5
+--------
 6
+
 7
+This interface layer handles the communication between IBM Spectrum Scale Manager and IBM Spectrum Scale Client. The provider end of this interface provides the Spectrum Scale Manager service (Spectrum Scale Cluster). The consumer part requires the existence of a provider to function.
 8
+This interface also handles peer communication among Spectrum Scale Manager and Client units.
 9
+
10
+
11
+Usage
12
+------
13
+##### Provides
14
+
15
+This interface layer will set the following states, as appropriate:
16
+
17
+  - `{relation_name}.joined` : The relation is established between Spectrum Scale manager and clients.  At this point, the provider should broadcast configuration details using:
18
+      * `set_hostname(manager_hostname)`
19
+      * `set_ssh_key(privkey, pubkey)`
20
+      * `set_notify_client(notify_client)`
21
+
22
+  - `{relation_name}.ready` : The manager has provided its connection information and is ready to accept requests from the clients. The client's connection information can be accessed via the methods below:
23
+      - `get_hostnames() and get_ips()` - These two methods will provide Hostname and Private IP Address of the Client
24
+      - `get_privclient_keys() and get_pubclient_keys()` - These two methods will provide the Private and Public Keys of the Client.
25
+
26
+
27
+##### Requires
28
+
29
+This interface layer will set the following states, as appropriate:
30
+  - `{relation_name}.joined` : The relation is established between Spectrum Scale manager and clients. At this point, the charm waits for Manager configuration details.
31
+
32
+  -  `{relation_name}.ready` : Spectrum Scale manager is ready for the clients. The client charm can access the configuration details using the below methods:
33
+
34
+      - `get_hostnames() and get_ips()` - Hostname and Private IP Address of Spectrum Scale Manager.
35
+      - `get_priv_keys() and get_pub_keys()` - Private and Public Keys of Spectrum Scale Manager.
36
+      
37
+      It also provides hostname and public key information to the provider, i.e. Spectrum Scale Manager, using the following methods:
38
+     - `set_hostname(hostname_client)` - Provides the hostname of the client to the Manager.
39
+     - `set_ssh_key(pubkey)` - Provides the public key of the client to the Manager.
40
+
41
+  - `{relation_name}.client-ready` : To notify the client that it has been added to the cluster.
42
+
43
+
44
+
45
+##### Peers
46
+This interface allows the peers of the Spectrum Scale Manager/Client deployment to be aware of each other. This interface layer will set the following states, as appropriate:
47
+
48
+  - `{relation_name}.joined` - A new peer in the Spectrum Scale manager/client service has joined. 
49
+
50
+  - `{relation_name}.available` - Peer units have provided the Hostname/IP Address and SSH key information for cluster members.
51
+This information can be accessed via the methods below:
52
+      
53
+      - `get_unitips` and `get_hostname_peers` - Provides the Private IP Address and Hostname of the peer units.
54
+      - `get_pub_keys` - Public key of peer units.
55
+      - `get_storagedisks_peers` - List of Storage locations for peer units.
56
+      - `gpfsclient_managerpeer_services` - List of peer unit names.
57
+
58
+  - `{relation_name}.cluster.ready` - To notify the manager peers that the cluster is ready.
59
+
60
+
61
+  - `{relation_name}.departed` - A peer in the Spectrum Scale Manager/Client service has departed. 
62
+
63
+
64
+
Back to file index
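For context, here is a minimal, illustrative sketch (not taken from the charm) of how the provider side might consume the states and getters documented in the interface README above. The relation name `gpfsmanager` matches the hook names in this charm, but the handler itself is hypothetical:

    from charms.reactive import when
    from charmhelpers.core import hookenv


    @when('gpfsmanager.ready')
    def add_client_nodes(gpfsmanager):
        # The interface getters return one entry per client conversation.
        hostnames = gpfsmanager.get_hostnames()
        pub_keys = gpfsmanager.get_pubclient_keys()
        hookenv.log('Clients waiting to join: {}'.format(list(zip(hostnames, pub_keys))))
        # ... exchange keys and add each client to the cluster here,
        # then notify the clients that they have been added:
        gpfsmanager.set_notify_client('Yes')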

hooks/relations/gpfs/interface.yaml

 1
--- 
 2
+++ hooks/relations/gpfs/interface.yaml
 3
@@ -0,0 +1,7 @@
 4
+name: gpfs
 5
+summary: |
 6
+   Basic gpfs interface required for adding gpfs clients to the existing
 7
+   Spectrum Scale cluster and peer units of gpfs manager/clients.
 8
+version: 1
 9
+maintainer: IBM Juju Support Team <jujusupp@us.ibm.com>
10
+
Back to file index

hooks/relations/gpfs/peers.py

  1
--- 
  2
+++ hooks/relations/gpfs/peers.py
  3
@@ -0,0 +1,135 @@
  4
+from charms.reactive import RelationBase, hook, scopes
  5
+
  6
+
  7
+class DistPeers(RelationBase):
  8
+    scope = scopes.UNIT
  9
+    peer_gpfs_ready = "Nil"
 10
+
 11
+    @hook('{peers:gpfs}-relation-joined')
 12
+    def joined(self):
 13
+        conv = self.conversation()
 14
+        conv.remove_state('{relation_name}.departing')
 15
+        conv.set_state('{relation_name}.connected')
 16
+
 17
+    @hook('{peers:gpfs}-relation-changed')
 18
+    def changed(self):
 19
+        conv = self.conversation()
 20
+        conv.remove_state('{relation_name}.departing')
 21
+        if ((str(conv.get_remote('manager_hostname')) != "None") and
 22
+           (str(conv.get_remote('pubkey')) != "None")):
 23
+            conv.set_state('{relation_name}.available')
 24
+
 25
+        if (str(conv.get_remote('peer_gpfs_ready')) == 'ClusterReady'):
 26
+            conv.set_state('{relation_name}.cluster.ready')
 27
+
 28
+    @hook('{peers:gpfs}-relation-departed')
 29
+    def departed(self):
 30
+        conv = self.conversation()
 31
+        conv.remove_state('{relation_name}.cluster.ready')
 32
+        conv.remove_state('{relation_name}.connected')
 33
+        conv.remove_state('{relation_name}.available')
 34
+        conv.set_state('{relation_name}.departing')
 35
+
 36
+    def dismiss_departed(self):
 37
+        """
 38
+        Remove the 'departing' state so we don't fall in here again
 39
+        (until another peer leaves).
 40
+        """
 41
+
 42
+        for conv in self.conversations():
 43
+            conv.remove_state('{relation_name}.departing')
 44
+
 45
+    def get_unitips(self):
 46
+        """
 47
+        Returns peer units' private ip address info
 48
+        :returns: List of peer units private ip addresses
 49
+        """
 50
+
 51
+        ips = []
 52
+        for conv in self.conversations():
 53
+            ips.append(conv.get_remote('private-address'))
 54
+        return ips
 55
+
 56
+    def set_hostname_peer(self, manager_hostname):
 57
+        """
 58
+        Forward 'hostname' info to its peers.
 59
+        :param manager_hostname: string - Hostname of the spectrum
 60
+                                          scale peer units.
 61
+        :returns: None
 62
+        """
 63
+
 64
+        for conv in self.conversations():
 65
+            conv.set_remote('manager_hostname', manager_hostname)
 66
+
 67
+    def set_ssh_key(self, pubkey):
 68
+        """
 69
+        Forward a dict of values containing Public SSH keys.
 70
+        :param pubkey: string - Public SSH key
 71
+        :returns: None
 72
+        """
 73
+
 74
+        for conv in self.conversations():
 75
+            conv.set_remote(data={
 76
+                            'pubkey':  pubkey,
 77
+                            })
 78
+
 79
+    def set_storagedisk_peer(self, devices_list):
 80
+        """
 81
+        Forward a list of Spectrum Scale Manager device/disk locations to
 82
+        its peers.
 83
+        :param devices_list: list - List of device location of the
 84
+                                    mgr peer units.
 85
+        :returns: None
 86
+        """
 87
+
 88
+        for conv in self.conversations():
 89
+            conv.set_remote('devices_list', devices_list)
 90
+
 91
+    def notify_peerready(self, peer_gpfs_ready):
 92
+        """
 93
+        Forward readiness flag status to its mgr peers, that cluster
 94
+        is created successfully
 95
+        :param peer_gpfs_ready: string - Readiness flag status value
 96
+        :returns: None
 97
+        """
 98
+
 99
+        for conv in self.conversations():
100
+            conv.set_remote('peer_gpfs_ready', peer_gpfs_ready)
101
+
102
+    def get_hostname_peers(self):
103
+        """
104
+        Returns a list of peer units hostname info
105
+        :returns: List of hostnames
106
+        """
107
+
108
+        hosts = []
109
+        for conv in self.conversations():
110
+            hosts.append(conv.get_remote('manager_hostname'))
111
+        return hosts
112
+
113
+    def get_pub_keys(self):
114
+        """
115
+        Returns a list of peer units public ssh keys info
116
+        :returns: List of public ssh keys
117
+        """
118
+
119
+        pub_ssh_keys = []
120
+        for conv in self.conversations():
121
+            pub_ssh_keys.append(conv.get_remote('pubkey'))
122
+        return pub_ssh_keys
123
+
124
+    def get_storagedisks_peers(self):
125
+        devices_list_peers = []
126
+        for conv in self.conversations():
127
+            devices_list_peers.append(conv.get_remote('devices_list'))
128
+        return list(devices_list_peers)
129
+
130
+    def gpfsclient_managerpeer_services(self):
131
+        """
132
+        Return a list of unit names.
133
+
134
+        """
135
+        units = []
136
+        for conv in self.conversations():
137
+            units.append(conv.scope)
138
+        return units
Back to file index

hooks/relations/gpfs/provides.py

  1
--- 
  2
+++ hooks/relations/gpfs/provides.py
  3
@@ -0,0 +1,119 @@
  4
+from charms.reactive import hook
  5
+from charms.reactive import RelationBase
  6
+from charms.reactive import scopes
  7
+
  8
+
  9
+class gpfsProvides(RelationBase):
 10
+    # Each connecting unit gets its own conversation (unit scope)
 11
+    scope = scopes.UNIT
 12
+
 13
+    # Use some template magic to declare our relation(s)
 14
+    @hook('{provides:gpfs}-relation-joined')
 15
+    def joined(self):
 16
+        conversation = self.conversation()
 17
+        conversation.remove_state('{relation_name}.departing')
 18
+        conversation.set_state('{relation_name}.connected')
 19
+
 20
+    @hook('{provides:gpfs}-relation-changed')
 21
+    def changed(self):
 22
+        conversation = self.conversation()
 23
+        conversation.remove_state('{relation_name}.departing')
 24
+        if (str(conversation.get_remote('hostname_client')) != "None"):
 25
+            conversation.set_state('{relation_name}.ready')
 26
+
 27
+    @hook('{provides:gpfs}-relation-departed')
 28
+    def departed(self):
 29
+        conversation = self.conversation()
 30
+        conversation.remove_state('{relation_name}.ready')
 31
+        conversation.remove_state('{relation_name}.connected')
 32
+        conversation.set_state('{relation_name}.departing')
 33
+
 34
+    def set_hostname(self, manager_hostname):
 35
+        """
 36
+        Forward Spectrum Scale Manager Hostname to client.
 37
+        :param manager_hostname: string - Hostname of the spectrum
 38
+                                          scale manager node
 39
+        :returns: None
 40
+        """
 41
+
 42
+        for conv in self.conversations():
 43
+            conv.set_remote('manager_hostname', manager_hostname)
 44
+
 45
+    def set_ssh_key(self, privkey, pubkey):
 46
+        """
 47
+        Forward a dict of values containing Private and Public SSH keys
 48
+        to client.
 49
+        :param privkey: string - Private SSH key
 50
+        :param pubkey: string - Public SSH key
 51
+        :returns: None
 52
+        """
 53
+
 54
+        for conv in self.conversations():
 55
+            conv.set_remote(data={
 56
+                            'privkey': privkey,
 57
+                            'pubkey':  pubkey,
 58
+                            })
 59
+
 60
+    def set_notify_client(self, notify_client):
 61
+        """
 62
+        Forward readiness flag status to client, that client is added
 63
+        successfully to the cluster
 64
+        :param notify_client: string - Readiness flag status value
 65
+        :returns: None
 66
+        """
 67
+
 68
+        for conv in self.conversations():
 69
+            conv.set_remote('notify_client', notify_client)
 70
+
 71
+    def get_hostnames(self):
 72
+        """
 73
+        Returns client hostname info
 74
+        :returns: List of client hostnames
 75
+        """
 76
+
 77
+        hosts = []
 78
+        for conv in self.conversations():
 79
+            hosts.append(conv.get_remote('hostname_client'))
 80
+        return hosts
 81
+
 82
+    def get_ips(self):
 83
+        """
 84
+        Returns client Private IP address info
 85
+        :returns: List of client Private IP Addresses
 86
+        """
 87
+
 88
+        ips = []
 89
+        for conv in self.conversations():
 90
+            ips.append(conv.get_remote('private-address'))
 91
+        return ips
 92
+
 93
+    def get_privclient_keys(self):
 94
+        """
 95
+        Returns client Private ssh key info
 96
+        :returns: List of client private ssh keys
 97
+        """
 98
+
 99
+        priv_ssh_keys = []
100
+        for conv in self.conversations():
101
+            priv_ssh_keys.append(conv.get_remote('privkey'))
102
+        return priv_ssh_keys
103
+
104
+    def get_pubclient_keys(self):
105
+        """
106
+        Returns client public ssh key info
107
+        :returns: List of client public ssh keys
108
+        """
109
+
110
+        pub_ssh_keys = []
111
+        for conv in self.conversations():
112
+            pub_ssh_keys.append(conv.get_remote('pubkey'))
113
+        return pub_ssh_keys
114
+
115
+    def dismiss(self):
116
+        """
117
+        Remove the 'departing' state so we don't fall in here again
118
+        (until another client unit leaves).
119
+        """
120
+
121
+        for conv in self.conversations():
122
+            conv.remove_state('{relation_name}.departing')
Back to file index

hooks/relations/gpfs/requires.py

 1
--- 
 2
+++ hooks/relations/gpfs/requires.py
 3
@@ -0,0 +1,95 @@
 4
+from charms.reactive import hook
 5
+from charms.reactive import RelationBase
 6
+from charms.reactive import scopes
 7
+
 8
+
 9
+class gpfsRequires(RelationBase):
10
+    scope = scopes.UNIT
11
+    notify_client = "No"
12
+
13
+    @hook('{requires:gpfs}-relation-joined')
14
+    def joined(self):
15
+        conversation = self.conversation()
16
+        conversation.set_state('{relation_name}.connected')
17
+
18
+    @hook('{requires:gpfs}-relation-changed')
19
+    def changed(self):
20
+        conversation = self.conversation()
21
+        if (str(conversation.get_remote('manager_hostname')) != "None"):
22
+            conversation.set_state('{relation_name}.ready')
23
+        if (str(conversation.get_remote('notify_client')) == "Yes"):
24
+            conversation.set_state('{relation_name}.client-ready')
25
+
26
+    @hook('{requires:gpfs}-relation-departed')
27
+    def departed(self):
28
+        conversation = self.conversation()
29
+        conversation.remove_state('{relation_name}.client-ready')
30
+        conversation.remove_state('{relation_name}.ready')
31
+        conversation.remove_state('{relation_name}.connected')
32
+
33
+    def set_hostname(self, hostname_client):
34
+        """
35
+        Forward Spectrum Scale Client Hostname to manager.
36
+        :param hostname_client: string - Hostname of the spectrum
37
+                                         scale client node
38
+        :returns: None
39
+        """
40
+
41
+        for conv in self.conversations():
42
+            conv.set_remote('hostname_client', hostname_client)
43
+
44
+    def set_ssh_key(self, pubkey):
45
+        """
46
+        Forward a dict of values containing Public SSH keys to manager.
47
+        :param pubkey: string - Public SSH key
48
+        :returns: None
49
+        """
50
+
51
+        for conv in self.conversations():
52
+            conv.set_remote(data={
53
+                            'pubkey':  pubkey,
54
+                            })
55
+
56
+    def get_hostnames(self):
57
+        """
58
+        Returns manager hostname info
59
+        :returns: List of manager hostnames
60
+        """
61
+
62
+        hosts = []
63
+        for conv in self.conversations():
64
+            hosts.append(conv.get_remote('manager_hostname'))
65
+        return hosts
66
+
67
+    def get_ips(self):
68
+        """
69
+        Returns manager private ip address info
70
+        :returns: List of manager private ip addresses
71
+        """
72
+
73
+        ips = []
74
+        for conv in self.conversations():
75
+            ips.append(conv.get_remote('private-address'))
76
+        return ips
77
+
78
+    def get_priv_keys(self):
79
+        """
80
+        Returns manager private ssh key info
81
+        :returns: List of manager private ssh keys
82
+        """
83
+
84
+        priv_ssh_keys = []
85
+        for conv in self.conversations():
86
+            priv_ssh_keys.append(conv.get_remote('privkey'))
87
+        return priv_ssh_keys
88
+
89
+    def get_pub_keys(self):
90
+        """
91
+        Returns manager public ssh key info
92
+        :returns: List of manager public ssh keys
93
+        """
94
+
95
+        pub_ssh_keys = []
96
+        for conv in self.conversations():
97
+            pub_ssh_keys.append(conv.get_remote('pubkey'))
98
+        return pub_ssh_keys
Back to file index

hooks/start

 1
--- 
 2
+++ hooks/start
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/stop

 1
--- 
 2
+++ hooks/stop
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/update-status

 1
--- 
 2
+++ hooks/update-status
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
17
+# and $CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/upgrade-charm

 1
--- 
 2
+++ hooks/upgrade-charm
 3
@@ -0,0 +1,28 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import os
 8
+import sys
 9
+sys.path.append('lib')
10
+
11
+# This is an upgrade-charm context, make sure we install latest deps
12
+if not os.path.exists('wheelhouse/.upgrade'):
13
+    open('wheelhouse/.upgrade', 'w').close()
14
+    if os.path.exists('wheelhouse/.bootstrapped'):
15
+        os.unlink('wheelhouse/.bootstrapped')
16
+else:
17
+    os.unlink('wheelhouse/.upgrade')
18
+
19
+from charms.layer import basic
20
+basic.bootstrap_charm_deps()
21
+basic.init_config_states()
22
+
23
+
24
+# This will load and run the appropriate @hook and other decorated
25
+# handlers from $CHARM_DIR/reactive, $CHARM_DIR/hooks/reactive,
26
+# and $CHARM_DIR/hooks/relations.
27
+#
28
+# See https://jujucharms.com/docs/stable/authors-charm-building
29
+# for more information on this pattern.
30
+from charms.reactive import main
31
+main()
Back to file index

icon.svg

 1
--- 
 2
+++ icon.svg
 3
@@ -0,0 +1,29 @@
 4
+<?xml version="1.0" encoding="UTF-8"?>
 5
+<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
 6
+<!-- Creator: CorelDRAW X6 -->
 7
+<svg xmlns="http://www.w3.org/2000/svg" xml:space="preserve" width="1in" height="0.999996in" version="1.1" shape-rendering="geometricPrecision" text-rendering="geometricPrecision" image-rendering="optimizeQuality" fill-rule="evenodd" clip-rule="evenodd"
 8
+viewBox="0 0 1000 1000"
 9
+ xmlns:xlink="http://www.w3.org/1999/xlink">
10
+ <defs>
11
+    <linearGradient id="id0" gradientUnits="userSpaceOnUse" x1="500.002" y1="999.996" x2="500.002" y2="0">
12
+     <stop offset="0" stop-color="#A1CD3D"/>
13
+     <stop offset="1" stop-color="#DBF799"/>
14
+    </linearGradient>
15
+    <mask id="id1">
16
+      <linearGradient id="id2" gradientUnits="userSpaceOnUse" x1="500.002" y1="58.4805" x2="500.002" y2="307.017">
17
+       <stop offset="0" stop-opacity="1" stop-color="white"/>
18
+       <stop offset="0.141176" stop-opacity="89.8471" stop-color="white"/>
19
+       <stop offset="1" stop-opacity="0" stop-color="white"/>
20
+      </linearGradient>
21
+     <rect fill="url(#id2)" width="1000" height="365"/>
22
+    </mask>
23
+ </defs>
24
+ <g id="Layer_x0020_1">
25
+  <metadata id="CorelCorpID_0Corel-Layer"/>
26
+  <g id="_188173616">
27
+   <path id="Background" fill="url(#id0)" d="M0 676l0 -352c0,-283 41,-324 324,-324l352 0c284,0 324,41 324,324l0 352c0,283 -40,324 -324,324l-352 0c-283,0 -324,-41 -324,-324z"/>
28
+   <path fill="#999999" mask="url(#id1)" d="M0 365l0 -41c0,-283 41,-324 324,-324l352 0c284,0 324,41 324,324l0 41c0,-283 -40,-324 -324,-324l-352 0c-283,0 -324,41 -324,324z"/>
29
+   <path fill="white" fill-rule="nonzero" d="M438 407c12,-8 26,-14 40,-17l0 -87c-13,-4 -25,-11 -35,-20l0 0c-14,-15 -23,-35 -23,-57 0,-22 9,-42 23,-57 15,-14 35,-23 57,-23 22,0 42,9 57,23 14,15 23,35 23,57 0,22 -9,42 -23,57l0 0c-10,9 -22,16 -35,20l0 87c14,3 28,9 40,17l62 -61c-7,-13 -10,-26 -10,-40l0 0c0,-20 7,-41 23,-57 16,-15 36,-23 57,-23 20,0 41,8 57,23 15,16 23,37 23,57 0,21 -8,41 -23,57 -16,16 -37,23 -57,23l0 0c-14,0 -27,-3 -40,-10l-61 62c8,12 14,26 17,40l87 0c4,-13 11,-25 20,-35l0 0c15,-14 35,-23 57,-23 22,0 42,9 57,23 14,15 23,35 23,57 0,22 -9,42 -23,57 -15,14 -35,23 -57,23 -22,0 -42,-9 -57,-23l0 0c-9,-10 -16,-22 -20,-35l-87 0c-3,14 -9,28 -17,40l61 62c13,-7 26,-10 40,-10l0 0c20,0 41,7 57,23 15,16 23,36 23,57 0,20 -8,41 -23,57 -16,15 -37,23 -57,23 -21,0 -41,-8 -57,-23 -16,-16 -23,-37 -23,-57l0 0c0,-14 3,-27 10,-40l-62 -61c-12,8 -26,14 -40,17l0 87c13,4 25,11 35,20l0 0c14,15 23,35 23,57 0,22 -9,42 -23,57 -15,14 -35,23 -57,23 -22,0 -42,-9 -57,-23 -14,-15 -23,-35 -23,-57 0,-22 9,-42 23,-57l0 0c10,-9 22,-16 35,-20l0 -87c-14,-3 -28,-9 -40,-17l-62 61c7,13 10,26 10,40l0 0c0,20 -7,41 -23,57 -16,15 -36,23 -57,23 -20,0 -41,-8 -57,-23 -15,-16 -23,-37 -23,-57 0,-21 8,-41 23,-57 16,-16 37,-23 57,-23l0 0c14,0 27,3 40,10l61 -62c-8,-12 -14,-26 -17,-40l-87 0c-4,13 -11,25 -20,35l0 0c-15,14 -35,23 -57,23 -22,0 -42,-9 -57,-23 -14,-15 -23,-35 -23,-57 0,-22 9,-42 23,-57 15,-14 35,-23 57,-23 22,0 42,9 57,23l0 0c9,10 16,22 20,35l87 0c3,-14 9,-28 17,-40l-61 -62c-13,7 -26,10 -40,10l0 0c-20,0 -41,-7 -57,-23 -15,-16 -23,-36 -23,-57 0,-20 8,-41 23,-57 16,-15 37,-23 57,-23 21,0 41,8 57,23 16,16 23,37 23,57l0 0c0,14 -3,27 -10,40l62 61zm62 49c24,0 44,20 44,44 0,24 -20,44 -44,44 -24,0 -44,-20 -44,-44 0,-24 20,-44 44,-44zm35 -265c-9,-9 -21,-15 -35,-15 -14,0 -26,6 -35,15 -9,9 -15,21 -15,35 0,14 6,26 15,35l0 0c9,9 21,15 35,15 14,0 26,-6 35,-15l0 0c9,-9 15,-21 15,-35 0,-14 -6,-26 -15,-35zm-229 65c-13,0 -25,5 -35,15 -10,10 -15,22 -15,35 0,13 5,26 15,35 10,10 22,15 35,15l0 0c13,0 26,-5 35,-15 10,-9 15,-22 15,-35l0 0c0,-13 -5,-25 -15,-35 -9,-10 -22,-15 -35,-15zm-115 209c-9,9 -15,21 -15,35 0,14 6,26 15,35 9,9 21,15 35,15 14,0 26,-6 35,-15l0 0c9,-9 15,-21 15,-35 0,-14 -6,-26 -15,-35l0 0c-9,-9 -21,-15 -35,-15 -14,0 -26,6 -35,15zm65 229c0,13 5,25 15,35 10,10 22,15 35,15 13,0 26,-5 35,-15 10,-10 15,-22 15,-35l0 0c0,-13 -5,-26 -15,-35 -9,-10 -22,-15 -35,-15l0 0c-13,0 -25,5 -35,15 -10,9 -15,22 -15,35zm209 115c9,9 21,15 35,15 14,0 26,-6 35,-15 9,-9 15,-21 15,-35 0,-14 -6,-26 -15,-35l0 0c-9,-9 -21,-15 -35,-15 -14,0 -26,6 -35,15l0 0c-9,9 -15,21 -15,35 0,14 6,26 15,35zm229 -65c13,0 25,-5 35,-15 10,-10 15,-22 15,-35 0,-13 -5,-26 -15,-35 -10,-10 -22,-15 -35,-15l0 0c-13,0 -26,5 -35,15 -10,9 -15,22 -15,35l0 0c0,13 5,25 15,35 9,10 22,15 35,15zm115 -209c9,-9 15,-21 15,-35 0,-14 -6,-26 -15,-35 -9,-9 -21,-15 -35,-15 -14,0 -26,6 -35,15l0 0c-9,9 -15,21 -15,35 0,14 6,26 15,35l0 0c9,9 21,15 35,15 14,0 26,-6 35,-15zm-65 -229c0,-13 -5,-25 -15,-35 -10,-10 -22,-15 -35,-15 -13,0 -26,5 -35,15 -10,10 -15,22 -15,35l0 0c0,13 5,26 15,35 9,10 22,15 35,15l0 0c13,0 25,-5 35,-15 10,-9 15,-22 15,-35zm-175 195l0 -4c0,-17 -7,-33 -20,-46 -14,-14 -31,-20 -49,-20 -18,0 -35,6 -49,20 -14,14 -20,31 -20,49 0,18 6,35 20,49 13,13 29,20 46,20l4 0c17,0 35,-7 48,-20 13,-13 20,-31 20,-48z"/>
30
+  </g>
31
+ </g>
32
+</svg>
Back to file index

layer.yaml

 1
--- 
 2
+++ layer.yaml
 3
@@ -0,0 +1,13 @@
 4
+"options":
 5
+  "basic":
 6
+    "packages":
 7
+    - "tar"
 8
+    - "unzip"
 9
+    "use_venv": !!bool "false"
10
+    "include_system_packages": !!bool "false"
11
+  "ibm-spectrum-scale-manager": {}
12
+"repo": "bzr+ssh://bazaar.launchpad.net/~ibmcharmers/ibmlayers/layer-ibm-spectrum-scale-manager/"
13
+"includes":
14
+- "layer:basic"
15
+- "interface:gpfs"
16
+"is": "ibm-spectrum-scale-manager"
Back to file index

lib/charms/layer/__init__.py

 1
--- 
 2
+++ lib/charms/layer/__init__.py
 3
@@ -0,0 +1,21 @@
 4
+import os
 5
+
 6
+
 7
+class LayerOptions(dict):
 8
+    def __init__(self, layer_file, section=None):
 9
+        import yaml  # defer, might not be available until bootstrap
10
+        with open(layer_file) as f:
11
+            layer = yaml.safe_load(f.read())
12
+        opts = layer.get('options', {})
13
+        if section and section in opts:
14
+            super(LayerOptions, self).__init__(opts.get(section))
15
+        else:
16
+            super(LayerOptions, self).__init__(opts)
17
+
18
+
19
+def options(section=None, layer_file=None):
20
+    if not layer_file:
21
+        base_dir = os.environ.get('CHARM_DIR', os.getcwd())
22
+        layer_file = os.path.join(base_dir, 'layer.yaml')
23
+
24
+    return LayerOptions(layer_file, section)
Back to file index

lib/charms/layer/basic.py

  1
--- 
  2
+++ lib/charms/layer/basic.py
  3
@@ -0,0 +1,196 @@
  4
+import os
  5
+import sys
  6
+import shutil
  7
+from glob import glob
  8
+from subprocess import check_call
  9
+
 10
+from charms.layer.execd import execd_preinstall
 11
+
 12
+
 13
+def lsb_release():
 14
+    """Return /etc/lsb-release in a dict"""
 15
+    d = {}
 16
+    with open('/etc/lsb-release', 'r') as lsb:
 17
+        for l in lsb:
 18
+            k, v = l.split('=')
 19
+            d[k.strip()] = v.strip()
 20
+    return d
 21
+
 22
+
 23
+def bootstrap_charm_deps():
 24
+    """
 25
+    Set up the base charm dependencies so that the reactive system can run.
 26
+    """
 27
+    # execd must happen first, before any attempt to install packages or
 28
+    # access the network, because sites use this hook to do bespoke
 29
+    # configuration and install secrets so the rest of this bootstrap
 30
+    # and the charm itself can actually succeed. This call does nothing
 31
+    # unless the operator has created and populated $CHARM_DIR/exec.d.
 32
+    execd_preinstall()
 33
+    # ensure that $CHARM_DIR/bin is on the path, for helper scripts
 34
+    os.environ['PATH'] += ':%s' % os.path.join(os.environ['CHARM_DIR'], 'bin')
 35
+    venv = os.path.abspath('../.venv')
 36
+    vbin = os.path.join(venv, 'bin')
 37
+    vpip = os.path.join(vbin, 'pip')
 38
+    vpy = os.path.join(vbin, 'python')
 39
+    if os.path.exists('wheelhouse/.bootstrapped'):
 40
+        activate_venv()
 41
+        return
 42
+    # bootstrap wheelhouse
 43
+    if os.path.exists('wheelhouse'):
 44
+        with open('/root/.pydistutils.cfg', 'w') as fp:
 45
+            # make sure that easy_install also only uses the wheelhouse
 46
+            # (see https://github.com/pypa/pip/issues/410)
 47
+            charm_dir = os.environ['CHARM_DIR']
 48
+            fp.writelines([
 49
+                "[easy_install]\n",
 50
+                "allow_hosts = ''\n",
 51
+                "find_links = file://{}/wheelhouse/\n".format(charm_dir),
 52
+            ])
 53
+        apt_install([
 54
+            'python3-pip',
 55
+            'python3-setuptools',
 56
+            'python3-yaml',
 57
+            'python3-dev',
 58
+        ])
 59
+        from charms import layer
 60
+        cfg = layer.options('basic')
 61
+        # include packages defined in layer.yaml
 62
+        apt_install(cfg.get('packages', []))
 63
+        # if we're using a venv, set it up
 64
+        if cfg.get('use_venv'):
 65
+            if not os.path.exists(venv):
 66
+                series = lsb_release()['DISTRIB_CODENAME']
 67
+                if series in ('precise', 'trusty'):
 68
+                    apt_install(['python-virtualenv'])
 69
+                else:
 70
+                    apt_install(['virtualenv'])
 71
+                cmd = ['virtualenv', '-ppython3', '--never-download', venv]
 72
+                if cfg.get('include_system_packages'):
 73
+                    cmd.append('--system-site-packages')
 74
+                check_call(cmd)
 75
+            os.environ['PATH'] = ':'.join([vbin, os.environ['PATH']])
 76
+            pip = vpip
 77
+        else:
 78
+            pip = 'pip3'
 79
+            # save a copy of system pip to prevent `pip3 install -U pip`
 80
+            # from changing it
 81
+            if os.path.exists('/usr/bin/pip'):
 82
+                shutil.copy2('/usr/bin/pip', '/usr/bin/pip.save')
 83
+        # need newer pip, to fix spurious Double Requirement error:
 84
+        # https://github.com/pypa/pip/issues/56
 85
+        check_call([pip, 'install', '-U', '--no-index', '-f', 'wheelhouse',
 86
+                    'pip'])
 87
+        # install the rest of the wheelhouse deps
 88
+        check_call([pip, 'install', '-U', '--no-index', '-f', 'wheelhouse'] +
 89
+                   glob('wheelhouse/*'))
 90
+        if not cfg.get('use_venv'):
 91
+            # restore system pip to prevent `pip3 install -U pip`
 92
+            # from changing it
 93
+            if os.path.exists('/usr/bin/pip.save'):
 94
+                shutil.copy2('/usr/bin/pip.save', '/usr/bin/pip')
 95
+                os.remove('/usr/bin/pip.save')
 96
+        os.remove('/root/.pydistutils.cfg')
 97
+        # flag us as having already bootstrapped so we don't do it again
 98
+        open('wheelhouse/.bootstrapped', 'w').close()
 99
+        # Ensure that the newly bootstrapped libs are available.
100
+        # Note: this only seems to be an issue with namespace packages.
101
+        # Non-namespace-package libs (e.g., charmhelpers) are available
102
+        # without having to reload the interpreter. :/
103
+        reload_interpreter(vpy if cfg.get('use_venv') else sys.argv[0])
104
+
105
+
106
+def activate_venv():
107
+    """
108
+    Activate the venv if enabled in ``layer.yaml``.
109
+
110
+    This is handled automatically for normal hooks, but actions might
111
+    need to invoke this manually, using something like:
112
+
113
+        # Load modules from $CHARM_DIR/lib
114
+        import sys
115
+        sys.path.append('lib')
116
+
117
+        from charms.layer.basic import activate_venv
118
+        activate_venv()
119
+
120
+    This will ensure that modules installed in the charm's
121
+    virtual environment are available to the action.
122
+    """
123
+    venv = os.path.abspath('../.venv')
124
+    vbin = os.path.join(venv, 'bin')
125
+    vpy = os.path.join(vbin, 'python')
126
+    from charms import layer
127
+    cfg = layer.options('basic')
128
+    if cfg.get('use_venv') and '.venv' not in sys.executable:
129
+        # activate the venv
130
+        os.environ['PATH'] = ':'.join([vbin, os.environ['PATH']])
131
+        reload_interpreter(vpy)
132
+
133
+
134
+def reload_interpreter(python):
135
+    """
136
+    Reload the python interpreter to ensure that all deps are available.
137
+
138
+    Newly installed modules in namespace packages sometimes seem to
139
+    not be picked up by Python 3.
140
+    """
141
+    os.execle(python, python, sys.argv[0], os.environ)
142
+
143
+
144
+def apt_install(packages):
145
+    """
146
+    Install apt packages.
147
+
148
+    This ensures a consistent set of options that are often missed but
149
+    should really be set.
150
+    """
151
+    if isinstance(packages, (str, bytes)):
152
+        packages = [packages]
153
+
154
+    env = os.environ.copy()
155
+
156
+    if 'DEBIAN_FRONTEND' not in env:
157
+        env['DEBIAN_FRONTEND'] = 'noninteractive'
158
+
159
+    cmd = ['apt-get',
160
+           '--option=Dpkg::Options::=--force-confold',
161
+           '--assume-yes',
162
+           'install']
163
+    check_call(cmd + packages, env=env)
164
+
165
+
166
+def init_config_states():
167
+    import yaml
168
+    from charmhelpers.core import hookenv
169
+    from charms.reactive import set_state
170
+    from charms.reactive import toggle_state
171
+    config = hookenv.config()
172
+    config_defaults = {}
173
+    config_defs = {}
174
+    config_yaml = os.path.join(hookenv.charm_dir(), 'config.yaml')
175
+    if os.path.exists(config_yaml):
176
+        with open(config_yaml) as fp:
177
+            config_defs = yaml.safe_load(fp).get('options', {})
178
+            config_defaults = {key: value.get('default')
179
+                               for key, value in config_defs.items()}
180
+    for opt in config_defs.keys():
181
+        if config.changed(opt):
182
+            set_state('config.changed')
183
+            set_state('config.changed.{}'.format(opt))
184
+        toggle_state('config.set.{}'.format(opt), config.get(opt))
185
+        toggle_state('config.default.{}'.format(opt),
186
+                     config.get(opt) == config_defaults[opt])
187
+    hookenv.atexit(clear_config_states)
188
+
189
+
190
+def clear_config_states():
191
+    from charmhelpers.core import hookenv, unitdata
192
+    from charms.reactive import remove_state
193
+    config = hookenv.config()
194
+    remove_state('config.changed')
195
+    for opt in config.keys():
196
+        remove_state('config.changed.{}'.format(opt))
197
+        remove_state('config.set.{}'.format(opt))
198
+        remove_state('config.default.{}'.format(opt))
199
+    unitdata.kv().flush()
Back to file index
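
The config state machinery above (init_config_states/clear_config_states) raises config.changed, config.changed.<option>, config.set.<option> and config.default.<option> at the start of every hook and clears them again at exit. A minimal sketch of a reactive handler gating on these states (the option name data-dir is hypothetical, not one of this charm's options):

    # minimal sketch, assuming a hypothetical 'data-dir' config option
    from charms.reactive import when
    from charmhelpers.core import hookenv


    @when('config.changed.data-dir')
    def handle_data_dir_change():
        # Runs only in hooks where the operator changed 'data-dir';
        # clear_config_states() removes the state again at hook exit.
        hookenv.log('data-dir is now %s' % hookenv.config('data-dir'))
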

lib/charms/layer/execd.py

  1
--- 
  2
+++ lib/charms/layer/execd.py
  3
@@ -0,0 +1,138 @@
  4
+# Copyright 2014-2016 Canonical Limited.
  5
+#
  6
+# This file is part of layer-basic, the reactive base layer for Juju.
  7
+#
  8
+# charm-helpers is free software: you can redistribute it and/or modify
  9
+# it under the terms of the GNU Lesser General Public License version 3 as
 10
+# published by the Free Software Foundation.
 11
+#
 12
+# charm-helpers is distributed in the hope that it will be useful,
 13
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
 14
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 15
+# GNU Lesser General Public License for more details.
 16
+#
 17
+# You should have received a copy of the GNU Lesser General Public License
 18
+# along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.
 19
+
 20
+# This module may only import from the Python standard library.
 21
+import os
 22
+import sys
 23
+import subprocess
 24
+import time
 25
+
 26
+'''
 27
+execd/preinstall
 28
+
 29
+It is often necessary to configure and reconfigure machines
 30
+after provisioning, but before attempting to run the charm.
 31
+Common examples are specialized network configuration, enabling
 32
+of custom hardware, non-standard disk partitioning and filesystems,
 33
+adding secrets and keys required for using a secured network.
 34
+
 35
+The reactive framework's base layer invokes this mechanism as
 36
+early as possible, before any network access is made or dependencies
 37
+unpacked or non-standard modules imported (including the charms.reactive
 38
+framework itself).
 39
+
 40
+Operators needing to use this functionality may branch a charm and
 41
+create an exec.d directory in it. The exec.d directory in turn contains
 42
+one or more subdirectories, each of which contains an executable called
 43
+charm-pre-install and any other required resources. The charm-pre-install
 44
+executables are run, and if successful, state saved so they will not be
 45
+run again.
 46
+
 47
+    $CHARM_DIR/exec.d/mynamespace/charm-pre-install
 48
+
 49
+An alternative to branching a charm is to compose a new charm that contains
 50
+the exec.d directory, using the original charm as a layer,
 51
+
 52
+A charm author could also abuse this mechanism to modify the charm
 53
+environment in unusual ways, but for most purposes it is saner to use
 54
+charmhelpers.core.hookenv.atstart().
 55
+'''
 56
+
 57
+
 58
+def default_execd_dir():
 59
+    return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
 60
+
 61
+
 62
+def execd_module_paths(execd_dir=None):
 63
+    """Generate a list of full paths to modules within execd_dir."""
 64
+    if not execd_dir:
 65
+        execd_dir = default_execd_dir()
 66
+
 67
+    if not os.path.exists(execd_dir):
 68
+        return
 69
+
 70
+    for subpath in os.listdir(execd_dir):
 71
+        module = os.path.join(execd_dir, subpath)
 72
+        if os.path.isdir(module):
 73
+            yield module
 74
+
 75
+
 76
+def execd_submodule_paths(command, execd_dir=None):
 77
+    """Generate a list of full paths to the specified command within exec_dir.
 78
+    """
 79
+    for module_path in execd_module_paths(execd_dir):
 80
+        path = os.path.join(module_path, command)
 81
+        if os.access(path, os.X_OK) and os.path.isfile(path):
 82
+            yield path
 83
+
 84
+
 85
+def execd_sentinel_path(submodule_path):
 86
+    module_path = os.path.dirname(submodule_path)
 87
+    execd_path = os.path.dirname(module_path)
 88
+    module_name = os.path.basename(module_path)
 89
+    submodule_name = os.path.basename(submodule_path)
 90
+    return os.path.join(execd_path,
 91
+                        '.{}_{}.done'.format(module_name, submodule_name))
 92
+
 93
+
 94
+def execd_run(command, execd_dir=None, stop_on_error=True, stderr=None):
 95
+    """Run command for each module within execd_dir which defines it."""
 96
+    if stderr is None:
 97
+        stderr = sys.stdout
 98
+    for submodule_path in execd_submodule_paths(command, execd_dir):
 99
+        # Only run each execd once. We cannot simply run them in the
100
+        # install hook, as potentially storage hooks are run before that.
101
+        # We cannot rely on them being idempotent.
102
+        sentinel = execd_sentinel_path(submodule_path)
103
+        if os.path.exists(sentinel):
104
+            continue
105
+
106
+        try:
107
+            subprocess.check_call([submodule_path], stderr=stderr,
108
+                                  universal_newlines=True)
109
+            with open(sentinel, 'w') as f:
110
+                f.write('{} ran successfully {}\n'.format(submodule_path,
111
+                                                          time.ctime()))
112
+                f.write('Removing this file will cause it to be run again\n')
113
+        except subprocess.CalledProcessError as e:
114
+            # Logs get the details. We can't use juju-log, as the
115
+            # output may be substantial and exceed command line
116
+            # length limits.
117
+            print("ERROR ({}) running {}".format(e.returncode, e.cmd),
118
+                  file=stderr)
119
+            print("STDOUT<<EOM", file=stderr)
120
+            print(e.output, file=stderr)
121
+            print("EOM", file=stderr)
122
+
123
+            # Unit workload status gets a shorter fail message.
124
+            short_path = os.path.relpath(submodule_path)
125
+            block_msg = "Error ({}) running {}".format(e.returncode,
126
+                                                       short_path)
127
+            try:
128
+                subprocess.check_call(['status-set', 'blocked', block_msg],
129
+                                      universal_newlines=True)
130
+                if stop_on_error:
131
+                    sys.exit(0)  # Leave unit in blocked state.
132
+            except Exception:
133
+                pass  # We care about the exec.d/* failure, not status-set.
134
+
135
+            if stop_on_error:
136
+                sys.exit(e.returncode or 1)  # Error state for pre-1.24 Juju
137
+
138
+
139
+def execd_preinstall(execd_dir=None):
140
+    """Run charm-pre-install for each module within execd_dir."""
141
+    execd_run('charm-pre-install', execd_dir=execd_dir)
Back to file index
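
As the module docstring above explains, operators drop an executable named charm-pre-install under $CHARM_DIR/exec.d/<namespace>/ and the base layer runs each one exactly once, recording a sentinel file. A minimal sketch exercising execd_run() against a throwaway exec.d tree (the directory name site-tweaks and the script body are hypothetical):

    # minimal sketch, not part of the charm source
    import os
    import stat
    import tempfile

    from charms.layer.execd import execd_run

    execd_dir = tempfile.mkdtemp()                   # stands in for $CHARM_DIR/exec.d
    module = os.path.join(execd_dir, 'site-tweaks')  # hypothetical namespace
    os.makedirs(module)
    script = os.path.join(module, 'charm-pre-install')
    with open(script, 'w') as f:
        f.write('#!/bin/sh\necho bespoke pre-install step\n')
    os.chmod(script, os.stat(script).st_mode | stat.S_IEXEC)

    # The first call runs the script and writes a '.<module>_<command>.done'
    # sentinel in execd_dir; the second call is a no-op because the sentinel
    # already exists.
    execd_run('charm-pre-install', execd_dir=execd_dir)
    execd_run('charm-pre-install', execd_dir=execd_dir)
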

metadata.yaml

 1
--- 
 2
+++ metadata.yaml
 3
@@ -0,0 +1,40 @@
 4
+"name": "ibm-spectrum-scale-manager"
 5
+"summary": "IBM SPECTRUM SCALE MANAGER"
 6
+"maintainer": "IBM Juju Support Team <jujusupp@us.ibm.com>"
 7
+"description": |
 8
+  IBM Spectrum Scale is a flexible software-defined storage that can be deployed as high performance file storage
 9
+  or a cost optimized large-scale content repository. IBM Spectrum Scale, previously known as IBM General Parallel
10
+  File System (GPFS), is built from the ground up to scale performance and capacity with no bottlenecks.
11
+  A manager node is any server that has the Spectrum Scale product installed with direct storage access or network
12
+  access to another node
13
+"tags":
14
+- "ibm"
15
+- "storage"
16
+- "gpfs"
17
+- "filesystem"
18
+"provides":
19
+  "gpfsmanager":
20
+    "interface": "gpfs"
21
+"peers":
22
+  "quorum":
23
+    "interface": "gpfs"
24
+"resources":
25
+  "ibm_spectrum_scale_installer_manager":
26
+    "type": "file"
27
+    "filename": "ibm_spectrum_scale_installer.tar.gz"
28
+    "description": "IBM Spectrum Scale install archive"
29
+  "ibm_spectrum_scale_manager_fixpack":
30
+    "type": "file"
31
+    "filename": "Spectrum_Scale_Standard_Fixpack.tar.gz"
32
+    "description": "IBM Spectrum Scale fixpack install archive"
33
+"series":
34
+- "trusty"
35
+- "xenial"
36
+"storage":
37
+  "disks":
38
+    "type": "block"
39
+    "multiple":
40
+      "range": "0-"
41
+"subordinate": !!bool "false"
42
+"terms":
43
+- "ibm-spectrum-scale/1"
Back to file index
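
Given the resources, storage, peers and terms declared above, deploying the charm would look something like the following (the charm URL, local archive paths, unit count and storage size are placeholders; the terms must be accepted before deploy):

    juju agree ibm-spectrum-scale/1
    juju deploy cs:~ibmcharmers/ibm-spectrum-scale-manager -n 2 \
        --resource ibm_spectrum_scale_installer_manager=./ibm_spectrum_scale_installer.tar.gz \
        --resource ibm_spectrum_scale_manager_fixpack=./Spectrum_Scale_Standard_Fixpack.tar.gz \
        --storage disks=10G,1

The -n 2 reflects that the reactive layer below blocks until at least two manager units are related over the quorum peer interface.
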

reactive/ibm-spectrum-scale-manager.py

   1
--- 
   2
+++ reactive/ibm-spectrum-scale-manager.py
   3
@@ -0,0 +1,1388 @@
   4
+from charms.reactive import when
   5
+from charms.reactive import hook
   6
+from charms.reactive import when_any
   7
+from charms.reactive import when_not
   8
+from charms.reactive import is_state
   9
+from charms.reactive import set_state
  10
+from charms.reactive import remove_state
  11
+from charmhelpers.core import hookenv
  12
+import platform
  13
+from charmhelpers import fetch
  14
+from charmhelpers.payload import (
  15
+    archive,
  16
+)
  17
+from charmhelpers.core.hookenv import (
  18
+    storage_get,
  19
+    storage_list,
  20
+    is_leader
  21
+)
  22
+import tempfile
  23
+import sys
  24
+import time
  25
+import os
  26
+from subprocess import (
  27
+    call,
  28
+    check_call,
  29
+    check_output,
  30
+    Popen,
  31
+    CalledProcessError,
  32
+    PIPE,
  33
+    STDOUT
  34
+)
  35
+import shutil
  36
+import socket
  37
+import glob
  38
+from shlex import split
  39
+import re
  40
+
  41
+
  42
+charm_dir = os.environ['CHARM_DIR']
  43
+GPFS_FILES_PATH = charm_dir + '/gpfs_files'
  44
+MANAGER_IP_ADDRESS = hookenv.unit_get('private-address')
  45
+MANAGER_HOSTNAME = socket.gethostname()
  46
+DEVICES_LIST_MANAGER = []
  47
+NSDVAL = 'nsd'
  48
+NSDVALMGR = 'nsd'
  49
+SERVER = 'server'
  50
+DEVICE_LIST2 = []
  51
+list_del_nodes = []
  52
+FILEPATH_NSD = GPFS_FILES_PATH+"/gpfs-diskdesc.txt"
  53
+FILEPATH_GPFSNODES = GPFS_FILES_PATH+"/gpfs-nodes.list"
  54
+FILEPATH_HOSTFILE = "/etc/hosts"
  55
+PEER_GPFS_READY = "Nil"
  56
+SPECTRUM_SCALE_INSTALL_PATH = '/usr/lpp/mmfs'
  57
+CMD_DEB_INSTALL = ('dpkg -i /usr/lpp/mmfs/4.2.*.*/gpfs_rpms/gpfs.base*deb'
  58
+                   ' /usr/lpp/mmfs/4.2.*.*/gpfs_rpms/gpfs.gpl*deb'
  59
+                   ' /usr/lpp/mmfs/4.2.*.*/gpfs_rpms/gpfs.gskit*deb'
  60
+                   ' /usr/lpp/mmfs/4.2.*.*/gpfs_rpms/gpfs.msg*deb'
  61
+                   ' /usr/lpp/mmfs/4.2.*.*/gpfs_rpms/gpfs.ext*deb')
  62
+TEMP_FILE_DEL = SPECTRUM_SCALE_INSTALL_PATH + '/node_temp_file_deletion'
  63
+
  64
+# development packages needed to build kernel modules for GPFS cluster
  65
+PREREQS = ["ksh", "binutils", "m4", "libaio1", "g++",
  66
+           "cpp", "make", "gcc", "expect"]
  67
+config = hookenv.config()
  68
+
  69
+
  70
+def add_to_path(p, new):
  71
+    return p if new in p.split(':') else p + ':' + new
  72
+
  73
+
  74
+os.environ['PATH'] = add_to_path(os.environ['PATH'], '/usr/lpp/mmfs/bin')
  75
+
  76
+
  77
+def check_platform_architecture():
  78
+    """
  79
+    Function to check the platform architecture
  80
+    :returns: string
  81
+    """
  82
+    return platform.processor()
  83
+
  84
+
  85
+def build_modules():
  86
+    """
  87
+    Function to build binary gpfs modules after Spectrum Scale is installed.
  88
+    :param: None
  89
+    :returns: None
  90
+    """
  91
+
  92
+    try:
  93
+        check_call(["mmbuildgpl"])
  94
+    except CalledProcessError:
+        hookenv.log('IBM SPECTRUM SCALE : mmbuildgpl was not '
+                    'executed', level=hookenv.WARNING)
+        raise
+    except OSError:
+        hookenv.log('IBM SPECTRUM SCALE : mmbuildgpl not found/installed')
+        raise
  99
+
 100
+
 101
+def get_kernel_version():
 102
+    """
 103
+    Function to get the kernel version
 104
+    :param: None
 105
+    :returns: string
 106
+    """
 107
+    return check_output(['uname', '-r']).strip()
 108
+
 109
+
 110
+def setadd_hostname(MANAGER_HOSTNAME, MANAGER_IP_ADDRESS):
 111
+    """
 112
+    Function for adding hostname details in /etc/hosts file.
 113
+    :param  MANAGER_HOSTNAME:string - Hostname of the manager
 114
+    :param  MANAGER_IP_ADDRESS:string - IP Address of the manager
 115
+    """
 116
+    ip = MANAGER_IP_ADDRESS
 117
+    hostname = MANAGER_HOSTNAME
 118
+    try:
+        socket.gethostbyname(hostname)
+    except socket.gaierror:
+        hookenv.log("IBM SPECTRUM SCALE : Hostname not resolving, adding"
+                    " to /etc/hosts")
+    try:
+        with open("/etc/hosts", "a") as hostfile:
+            hostfile.write("%s %s\n" % (ip, hostname))
+    except FileNotFoundError:
+        hookenv.log("IBM SPECTRUM SCALE : File does not exist.")
+        raise
 128
+
 129
+
 130
+def create_ssh_keys():
 131
+    """
 132
+    Function to create the ssh keys.
 133
+    :returns: None
 134
+    """
 135
+
 136
+    # Generate ssh keys if needed
 137
+    hookenv.log("IBM SPECTRUM SCALE : Creating SSH keys")
 138
+    if not os.path.isfile("/root/.ssh/id_rsa"):
 139
+        call(split('ssh-keygen -q -N "" -f /root/.ssh/id_rsa'))
 140
+        # Ensure permissions are good
 141
+        check_call(['chmod', '0600', '/root/.ssh/id_rsa.pub'])
 142
+        check_call(['chmod', '0600', '/root/.ssh/id_rsa'])
 143
+        with open("/root/.ssh/id_rsa.pub", "r") as idfile:
 144
+            pubkey = idfile.read()
 145
+        with open("/root/.ssh/authorized_keys", "w+") as idfile:
 146
+            idfile.write(pubkey)
 147
+
 148
+
 149
+def get_ssh_keys():
 150
+    """
 151
+    Function to get the public and private ssh keys.
 152
+    :returns: list
 153
+    """
 154
+
 155
+    with open("/root/.ssh/id_rsa.pub", "r") as idfile:
 156
+        pubkey = idfile.read()
 157
+    with open("/root/.ssh/id_rsa", "r") as idfile:
 158
+        privkey = idfile.read()
 159
+    return [privkey, pubkey]
 160
+
 161
+
 162
+def configure_ssh():
 163
+    """
 164
+    Configuring the ssh settings.
 165
+    :returns: None
 166
+    """
 167
+
 168
+    # Configure sshd_config file to allow root
 169
+    sshconf = open("/etc/ssh/sshd_config", 'r')
 170
+    tf = tempfile.NamedTemporaryFile(mode='w+t', delete=False)
 171
+    tfn = tf.name
 172
+    for line in sshconf:
 173
+        if not line.startswith("#"):
 174
+            if "PermitRootLogin" in line:
 175
+                tf.write("# Updated by Spectrum Scale charm: ")
 176
+                tf.write(line)
 177
+            else:
 178
+                tf.write(line)
 179
+        else:
 180
+            tf.write(line)
 181
+    tf.write("# added by Spectrum Scale charm:\n")
 182
+    tf.write("PermitRootLogin without-password\n")
 183
+    sshconf.close()
 184
+    tf.close()
 185
+    shutil.copy(tfn, '/etc/ssh/sshd_config')
 186
+    call(split('service ssh reload'))
 187
+
 188
+    # Avoid the host key confirmation
 189
+    with open("/root/.ssh/config", "w+") as idfile:
 190
+        idfile.write("StrictHostKeyChecking no\n")
 191
+
 192
+
 193
+def get_devices():
 194
+    '''Get a list of storage devices.'''
 195
+    devices = []
 196
+    storage_ids = storage_list()
 197
+    for sid in storage_ids:
 198
+        storage = storage_get('location', sid)
 199
+        devices.append(storage)
 200
+    return devices
 201
+
 202
+
 203
+def add_ssh_key(key):
 204
+    """
 205
+    Adding the ssh keys of mgr peer units and client, so that ssh
 206
+    communication can happen between them.
 207
+    :param key:str - Public ssh key to add.
 208
+    """
 209
+
 210
+    key_list = [key]
 211
+    if key_list:
 212
+        filepath = "/root/.ssh/authorized_keys"
 213
+        with open(filepath, "r") as myfile:
 214
+            lines = myfile.readlines()
 215
+        if (set(key_list) & set(lines)):
 216
+            hookenv.log("IBM SPECTRUM SCALE : SSH key already exists")
 217
+        else:
 218
+            with open("/root/.ssh/authorized_keys", "a+") as idfile:
 219
+                idfile.write(key)
 220
+
 221
+
 222
+def cluster_exists():
 223
+    """
 224
+    To check whether Spectrum Scale Cluster exists or not
 225
+    Return True if the cluster exists otherwise False.
 226
+    :returns: Boolean
 227
+    """
 228
+
 229
+    try:
 230
+        with open(os.devnull, 'w') as FNULL:
 231
+            return True if call('mmlscluster', stdout=FNULL,
 232
+                                stderr=STDOUT) == 0 else False
 233
+    except CalledProcessError:
 234
+        hookenv.log("IBM SPECTRUM SCALE : May be cluster is down or"
 235
+                    " the cluster does not exist. Please check the logs")
 236
+    except FileNotFoundError:
 237
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed "
 238
+                    "yet.")
 239
+
 240
+
 241
+def check_designation(nodename):
 242
+    """
 243
+    To check whether a Spectrum Scale node's designation is quorum-manager.
 244
+    :param nodename:string - GPFS nodename
 245
+    :returns: Boolean
 246
+    """
 247
+
 248
+    try:
 249
+        output = check_output(split('mmgetstate'))
 250
+        output = output.decode("utf-8")
 251
+        node_status = "quorum-manager"
 252
+        s = re.search(nodename, output)
 253
+        n = re.search(node_status, output)
 254
+        if s and n:
 255
+            return True
 256
+        else:
 257
+            return False
 258
+    except CalledProcessError:
 259
+        hookenv.log("IBM SPECTRUM SCALE : Check cluster is up and running")
 260
+    except FileNotFoundError:
 261
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed "
 262
+                    "yet.")
 263
+
 264
+
 265
+def gpfs_filesystem_exists():
 266
+    """
 267
+    To check Spectrum Scale FileSystem exists or not
 268
+    Return True if the FileSystem exists otherwise False.
 269
+    :returns: Boolean
 270
+    """
 271
+
 272
+    try:
 273
+        with open(os.devnull, 'w') as FNULL:
 274
+            return True if check_call(split('mmlsfs all'), stdout=FNULL,
 275
+                                      stderr=STDOUT) == 0 else False
 276
+    except CalledProcessError:
 277
+        hookenv.log("IBM SPECTRUM SCALE : May be cluster is down or the file"
 278
+                    " system is not created yet. Please check the logs")
 279
+    except FileNotFoundError:
 280
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed "
 281
+                    "yet.")
 282
+
 283
+
 284
+def add_node(nodename, q_designation):
 285
+    """
 286
+    To add a node to an existing Spectrum Scale cluster.
+    :param nodename:string - Nodename of the node
+    :param q_designation:string - Node designation (quorum, non-quorum)
 289
+    """
 290
+
 291
+    try:
 292
+        hookenv.log(check_output(split('mmaddnode -N %s:%s' %
 293
+                                       (nodename, q_designation))))
 294
+    except CalledProcessError as e:
+        hookenv.log("IBM SPECTRUM SCALE : Node %s could not be added. Error"
+                    " occurred." % nodename)
+        hookenv.log(e.output)
+        raise
+    except FileNotFoundError:
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed "
+                    "yet.")
+        raise
+    except TypeError:
+        hookenv.log("IBM SPECTRUM SCALE : Issue while adding node, some "
+                    "mismatch occurred.")
+        raise
 304
+
 305
+
 306
+def node_exists(nodename):
 307
+    """
 308
+    Check if node has already been added to cluster.
 309
+    :param nodename:string - Nodename of the node
 310
+    :returns: Boolean
 311
+    """
 312
+
 313
+    # Check if node has already been added to cluster
 314
+    try:
 315
+        lscluster = check_output('mmlscluster')
 316
+        lscluster = lscluster.decode('utf-8')
 317
+        node = re.search(r'^ *\d+.*%s.*$' % nodename, lscluster, re.M)
 318
+        return False if node is None else True
 319
+    except CalledProcessError:
 320
+        hookenv.log("IBM SPECTRUM SCALE : Check cluster is up an running on"
 321
+                    " the node")
 322
+    except FileNotFoundError:
 323
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed "
 324
+                    "yet.")
 325
+
 326
+
 327
+def check_nsd_node(nodename):
 328
+    """
 329
+    To check the list of NSDs, and whether each one is used for a GPFS
+    filesystem or is a free disk.
 331
+    :returns: Boolean
 332
+    """
 333
+
 334
+    try:
 335
+        nsd_list = check_output(split('mmlsnsd'))
 336
+        nsd_list = nsd_list.decode('utf-8')
 337
+        with open('nsd_temp_file', 'w+') as idfile:
 338
+            idfile.write(nsd_list)
 339
+        with open("nsd_temp_file", "r") as searchfile_nsd:
 340
+            for line in searchfile_nsd:
 341
+                if nodename in line and 'free disk' not in line:
 342
+                    return True
 343
+                elif nodename in line and 'free disk' in line:
 344
+                    return True
 345
+                elif nodename in line:
 346
+                    return False
 347
+    except CalledProcessError:
 348
+        hookenv.log("IBM SPECTRUM SCALE : No NSD disks available")
 349
+    except FileNotFoundError:
 350
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed "
 351
+                    "yet.")
 352
+
 353
+
 354
+def check_node(nodename):
 355
+    """
 356
+    To check status of a spectrum scale node, whether active or down.
 357
+    :param nodename:string - GPFS nodename
 358
+    :returns: Boolean
 359
+    """
 360
+
 361
+    try:
 362
+        output = check_output(split('mmgetstate'))
 363
+        output = output.decode("utf-8")
 364
+        node_status = "active"
 365
+        s = re.search(nodename, output)
 366
+        n = re.search(node_status, output)
 367
+        if s and n:
 368
+            return True
 369
+        else:
 370
+            return False
 371
+    except CalledProcessError:
 372
+        hookenv.log("IBM SPECTRUM SCALE : Check cluster is up and running")
 373
+    except FileNotFoundError:
 374
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed "
 375
+                    "yet.")
 376
+
 377
+
 378
+def check_num_quorum():
 379
+    """
 380
+    This function returns the number of active quorum nodes in cluster.
 381
+    :returns: int
 382
+    """
 383
+
 384
+    quorum_val = 0
 385
+    try:
 386
+        node_list = check_output(split('mmgetstate -s'))
 387
+        node_list = node_list.decode('utf-8')
 388
+        with open('node_temp_file', 'w+') as idfile:
 389
+            idfile.write(node_list)
 390
+        with open("node_temp_file", "r") as searchfile_node:
 391
+            for line in searchfile_node:
 392
+                if "Number of quorum nodes active in the cluster" in line:
 393
+                    quorum_val = str(line.split(":")[1])
 394
+                    quorum_val = int(quorum_val.split()[0])
 395
+    except CalledProcessError:
 396
+        hookenv.log("IBM SPECTRUM SCALE : Cluster might be down on this"
 397
+                    " node, please check the logs")
 398
+    except FileNotFoundError:
 399
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed "
 400
+                    "yet.")
 401
+    return quorum_val
 402
+
 403
+
 404
+def del_nodes_cluster():
 405
+    """
 406
+    This function checks the number of quorum nodes. If more than 2 quorum
+    nodes are active, the node marked for deletion will be shut down. If the
+    quorum is 2 or fewer, the Spectrum Scale cluster will no longer exist,
+    provided there is no filesystem. If a filesystem exists, user intervention
+    is required and the charm will error out.
 411
+    :returns: None
 412
+    """
 413
+    quorum_val = check_num_quorum()
 414
+    if not gpfs_filesystem_exists() or not check_nsd_node(MANAGER_HOSTNAME):
 415
+        try:
 416
+            if quorum_val > 2:
 417
+                hookenv.log(check_output(split('mmshutdown -N %s'
 418
+                            % MANAGER_HOSTNAME)))
 419
+                remove_state('ibm-spectrum-scale-manager.node.ready')
 420
+            elif quorum_val <= 2:
 421
+                # Check if less than 2 nodes but filesystem exists, then
 422
+                # error out, so that user can first remove the FS, then
 423
+                # cluster should be removed.
 424
+                if gpfs_filesystem_exists():
 425
+                    hookenv.status_set('blocked', 'cluster has FS, cannot'
 426
+                                       'be removed')
 427
+                    hookenv.log("IBM SPECTRUM SCALE : Error !! The cluster "
 428
+                                "has active file system, until it is removed "
 429
+                                "manually by the admin the charm will be in "
 430
+                                "error state. Once cluster does not have any "
 431
+                                "FS, all nodes will be removed & gpfs cluster"
 432
+                                " will no longer exist !!!!!")
 433
+                    sys.exit(1)
 434
+                hookenv.log(check_output(split('mmshutdown -a ')))
 435
+                hookenv.log(check_output(split('mmdelnode -a')))
 436
+                if is_state('ibm-spectrum-scale-manager.node.ready'):
 437
+                    remove_state('ibm-spectrum-scale-manager.node.ready')
 438
+                elif is_state('ibm-spectrum-scale-manager.cluster.ready'):
 439
+                    remove_state('ibm-spectrum-scale-manager.cluster.ready')
 440
+        except CalledProcessError:
 441
+            hookenv.log("IBM SPECTRUM SCALE : Cluster might be down on this"
 442
+                        " node, please check the logs")
 443
+        except FileNotFoundError:
 444
+            hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale not installed.")
 445
+
 446
+
 447
+def add_disk_file_system(NSDVAL):
 448
+    """
 449
+    To create additional nsd's and add them to existing filesystem whenever
 450
+    additional nodes/units are added.(only when user makes use of juju
 451
+    storage feature, he is not creating FS manually).
 452
+    Return True if the cluster exists otherwise False.
 453
+    :returns: None
 454
+    """
 455
+
 456
+    if cluster_exists() and gpfs_filesystem_exists():
 457
+        exists4 = False
 458
+        try:
 459
+            output = check_output(split('mmlsmgr -c'))
 460
+            output = output.decode("utf-8")
 461
+            s = re.search(MANAGER_HOSTNAME, output)
 462
+            if s:
 463
+                nsd_list = check_output(split('mmlsnsd'))
 464
+                nsd_list = nsd_list.decode('utf-8')
 465
+                with open('nsd_temp_file', 'w+') as idfile:
 466
+                    idfile.write(nsd_list)
 467
+                with open("nsd_temp_file", "r") as searchfile_nsd:
 468
+                    for line in searchfile_nsd:
 469
+                        if NSDVAL in line:
 470
+                            exists4 = True
 471
+                            break
 472
+                if exists4 is False:
 473
+                    try:
 474
+                        hookenv.log(check_output(split('mmcrnsd -F %s'
 475
+                                    % FILEPATH_NSD)))
 476
+                        hookenv.log(check_output(split('mmadddisk fs1 -F %s'
 477
+                                    % FILEPATH_NSD)))
 478
+                    except CalledProcessError as e:
 479
+                        hookenv.log("IBM SPECTRUM SCALE : Issue "
 480
+                                    "while processing the NSD's")
 481
+                        hookenv.log(e.output)
 482
+                        return
 483
+        except CalledProcessError:
 484
+            hookenv.log("IBM SPECTRUM SCALE : Issue while issuing master "
 485
+                        "command, may be gpfs cluster is not active yet")
 486
+
 487
+
 488
+def create_file_system():
 489
+    """
 490
+    To create the Spectrum Scale filesystem in case the user does not want to
+    create the FS manually and makes use of the juju storage feature; in that
+    case the charm will create NSD disks and a default FS 'fs1'.
 493
+    :returns: None
 494
+    """
 495
+
 496
+    if cluster_exists() and not gpfs_filesystem_exists():
 497
+        # check whether the current node is master or not
 498
+        # Issue mmlsmgr -c command
 499
+        try:
 500
+            output = check_output(split('mmlsmgr -c'))
 501
+            output = output.decode("utf-8")
 502
+            s = re.search(MANAGER_HOSTNAME, output)
 503
+            # if match is found.
 504
+            if s:
 505
+                try:
 506
+                    hookenv.log("IBM SPECTRUM SCALE : Creation of Spectrum "
 507
+                                "Scale File System in progress")
 508
+                    hookenv.status_set('maintenance', 'File system'
 509
+                                       ' creation in Progress')
 510
+                    # Creating the NSD's using the mmcrnsd command
 511
+                    hookenv.log(check_output(split('mmcrnsd -F %s'
 512
+                                                   % FILEPATH_NSD)))
 513
+                    # Create the file system using the mmcrfs command
 514
+                    hookenv.log(check_output(split('mmcrfs /gpfs fs1 ' '-F %s'
 515
+                                             % FILEPATH_NSD +
 516
+                                             ' -A yes -B 256K')))
 517
+                    hookenv.log(check_output(split('mmmount all -a')))
 518
+                    if check_call(split('mmdf fs1')) == 0:
 519
+                        hookenv.log("IBM SPECTRUM SCALE : File System(fs1) "
 520
+                                    "created and mounted at(/gpfs)")
 521
+                        hookenv.status_set('active', 'File System '
 522
+                                           'mounted at /gpfs')
 523
+                    else:
 524
+                        hookenv.log("IBM SPECTRUM SCALE : Issue while "
 525
+                                    "creating the File System")
 526
+                        hookenv.status_set('blocked', 'File System '
 527
+                                           'Creation failed')
 528
+                        return
 529
+                except CalledProcessError as e:
 530
+                    hookenv.log(e.output)
 531
+                    return
 532
+                except FileNotFoundError:
 533
+                    hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not "
 534
+                                " installed yet.")
 535
+                    hookenv.status_set('blocked', 'Error while creating file '
 536
+                                       'system')
 537
+                    return
 538
+        except CalledProcessError:
 539
+            hookenv.log("IBM SPECTRUM SCALE : May be gpfs cluster is down")
 540
+        except FileNotFoundError:
 541
+            hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed "
 542
+                        "yet.")
 543
+
 544
+
 545
+def create_gpfs_cluster():
 546
+    """
 547
+    To create spectrum scale cluster.
 548
+    :returns: None
 549
+    """
 550
+
 551
+    hookenv.log("IBM SPECTRUM SCALE : Creation of Spectrum Scale"
 552
+                " Cluster in progress ........ ")
 553
+    hookenv.status_set('maintenance', 'Cluster creation in Progress')
 554
+    try:
 555
+        hookenv.log(check_output(split('mmcrcluster -C spectrum_scale_cluster '
 556
+                                       '-N %s' % FILEPATH_GPFSNODES +
 557
+                                       ' --ccr-enable -r /usr/bin/ssh'
 558
+                                       ' -R /usr/bin/scp')))
 559
+        check_output(split('mmchlicense server --accept -N all'))
 560
+        # Start gpfs on all nodes
 561
+        hookenv.log(check_output(split('mmstartup -a')))
 562
+        hookenv.log("IBM SPECTRUM SCALE : SPECTRUM SCALE Cluster is created")
 563
+        set_state('ibm-spectrum-scale-manager.cluster.ready')
 564
+        hookenv.status_set('active', 'Manager node is ready')
 565
+    except CalledProcessError as e:
 566
+        hookenv.log("IBM SPECTRUM SCALE : Issue while creating the"
 567
+                    " cluster, please check the logs")
 568
+        hookenv.log(e.output)
 569
+        return
 570
+    except FileNotFoundError:
 571
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed.")
 572
+        hookenv.status_set('blocked', 'Error while creating the Cluster')
 573
+        return
 574
+
 575
+
 576
+def upgrade_spectrumscale(cfg_ibm_spectrum_scale_fixpack):
 577
+    """
 578
+    Function for upgrading the Spectrum Scale packages/fix packs.
 579
+    :param cfg_ibm_spectrum_scale_fixpack:string - fix pack resource path
 581
+    """
 582
+
 583
+    # Before upgrade check that Spectrum Scale cluster exists.
 584
+    Cluster_status_flag = "Started"
 585
+    if cluster_exists():
 586
+        if gpfs_filesystem_exists():
 587
+            hookenv.log(check_output(split('mmumount all -N %s'
 588
+                        % MANAGER_HOSTNAME)))
 589
+            Cluster_status_flag = "Unmounted"
 590
+        hookenv.log(check_output(split('mmshutdown -N'
 591
+                                 ' %s' % MANAGER_HOSTNAME)))
 592
+        Cluster_status_flag = "Stopped"
 593
+    fixpack_downloadpath = os.path.dirname(cfg_ibm_spectrum_scale_fixpack)
 594
+    os.chdir(fixpack_downloadpath)
 595
+    archivelist = glob.glob("*.tar.gz")
 596
+    if archivelist:
 597
+        archive.extract(str(archivelist[0]), fixpack_downloadpath)
 598
+        hookenv.log("IBM SPECTRUM SCALE : Extraction of Fix pack is "
 599
+                    "successfull")
 600
+        fixpackinstall_filename = glob.glob("Spectrum_*Linux-install")
 601
+        if fixpackinstall_filename:
 602
+            # Give permissions
 603
+            check_call(['chmod', '0755', fixpack_downloadpath + "/" +
 604
+                        str(fixpackinstall_filename[0])])
 605
+            install_cmd = ([fixpack_downloadpath + "/" +
 606
+                           str(fixpackinstall_filename[0]),
 607
+                           '--text-only', '--silent'])
 608
+            check_call(install_cmd, shell=False)
 609
+            check_call('cd /usr/lpp/mmfs/4.2.*', shell=True)
 610
+            try:
 611
+                check_call(CMD_DEB_INSTALL, shell=True)
 612
+                # To build GPFS portability layer.
 613
+                build_modules()
 614
+                os.chdir(SPECTRUM_SCALE_INSTALL_PATH)
 615
+                gpfs_fixpackinstall_folder = glob.glob("4.2.*")
 616
+                for val in gpfs_fixpackinstall_folder:
 617
+                    shutil.rmtree(val)
 618
+                if Cluster_status_flag == "Stopped":
 619
+                    hookenv.log(check_output(split('mmstartup -N %s'
 620
+                                % MANAGER_HOSTNAME)))
 621
+                    if Cluster_status_flag == "Unmounted":
 622
+                        time.sleep(50)
 623
+                        mount_cmd = ('mmmount all -N %s' % MANAGER_HOSTNAME)
 624
+                        hookenv.log(check_output(split(mount_cmd)))
 625
+                hookenv.status_set('active', "SPECTRUM SCALE "
 626
+                                   "is updated successfully")
 627
+                set_state('ibm-spectrum-scale-manager.updated')
 628
+            except CalledProcessError as e:
 629
+                hookenv.log(e.output)
 630
+                hookenv.log("IBM SPECTRUM SCALE : There might be issues "
 631
+                            "while applying fix pack, please check logs")
 632
+                hookenv.status_set('blocked', "Error while updating")
 633
+                return
 634
+
 635
+
 636
+def check_down_nodes():
 637
+    """
 638
+    Checks the cluster and returns a list of gpfs nodes whose status
+    is 'down'. This list is used for deleting the nodes which have been
+    marked for deletion.
 641
+    :returns: list
 642
+    """
 643
+
 644
+    down_nodes = []
 645
+    try:
 646
+        node_list = check_output(split('mmgetstate -a'))
 647
+        node_list = node_list.decode('utf-8')
 648
+        with open('/usr/lpp/mmfs/temp_file_del', 'w') as idfile:
 649
+            idfile.write(node_list)
 650
+        with open('/usr/lpp/mmfs/temp_file_del', "r") as searchfile_node:
 651
+            for line in searchfile_node:
 652
+                if (("down" in line) or ("unknown" in line)):
 653
+                    nodename = line.split(" ")
 654
+                    nodename = [i for i in nodename if i != '']
 655
+                    nodename = nodename[1]
 656
+                    down_nodes.append(nodename)
 657
+    except CalledProcessError:
 658
+        hookenv.log("IBM SPECTRUM SCALE : Cluster might be down on this"
 659
+                    " node, please check the logs")
 660
+    except FileNotFoundError:
 661
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed "
 662
+                    "yet.")
 663
+    return down_nodes
 664
+
 665
+
 666
+@when_not('ibm-spectrum-scale-manager.prereqs.installed')
 667
+def spectrum_scale_prereq():
 668
+    """
 669
+    To install the pre-reqs and perform initial configuration before
+    installation begins: clear out previously set states and create the temp
+    files used during installation and configuration.
 672
+    """
 673
+
 674
+    ARCHITECTURE = check_platform_architecture()
 675
+    if (str(ARCHITECTURE) != "x86_64") and (str(ARCHITECTURE) != "ppc64le"):
 676
+        hookenv.log("IBM SPECTRUM SCALE: Unsupported platform. IBM  SPECTRUM"
 677
+                    " SCALE installed with this Charm supports only the"
 678
+                    " x86_64 platform and POWER LE (ppc64le) platforms.")
 679
+        hookenv.status_set('blocked', 'Unsupported Platform')
 680
+        return
 681
+    else:
 682
+        hookenv.log("IBM SPECTRUM SCALE : Pre-reqs will be installed")
 683
+        # install kernel prereq and other prereqs
 684
+        linux_headers = get_kernel_version()
 685
+        linux_headers_val = "linux-headers-"+linux_headers.decode('ascii')
 686
+        fetch.apt_install(PREREQS)
 687
+        fetch.apt_install(linux_headers_val)
 688
+        # Add hostname, generate ssh keys and configure ssh
 689
+        # as part of pre-reqs before installing Spectrum Scale.
 690
+        setadd_hostname(MANAGER_HOSTNAME, MANAGER_IP_ADDRESS)
 691
+        create_ssh_keys()
 692
+        configure_ssh()
 693
+        # Creating node descriptor and nsd disk descriptor empty files
 694
+        try:
 695
+            os.makedirs(GPFS_FILES_PATH)
 696
+        except OSError:
 697
+            pass
 698
+        os.chdir(GPFS_FILES_PATH)
 699
+        try:
 700
+            open('gpfs-nodes.list', 'w').close()
 701
+            open('gpfs-diskdesc.txt', 'w').close()
 702
+        except OSError:
 703
+            pass
 704
+        set_state('ibm-spectrum-scale-manager.prereqs.installed')
 705
+        remove_state('ibm-spectrum-scale-manager.node.ready')
 706
+        remove_state('ibm-spectrum-scale-manager.cluster.ready')
 707
+        remove_state('ibm-spectrum-scale-manager.updated')
 708
+
 709
+
 710
+@hook('disks-storage-attached')
 711
+def storage_attached():
 712
+    '''Run every time storage is attached to the charm.'''
 713
+    # Emit a state that this hook has been called.
 714
+    set_state('ibm-spectrum-scale-manager.disks-storage-attached')
 715
+
 716
+
 717
+@when('ibm-spectrum-scale-manager.prereqs.installed')
 718
+@when_not('ibm-spectrum-scale-manager.installed')
 719
+def install_spectrum_scale():
 720
+    """
 721
+    Installing Spectrum Scale 4.2.2. Check that valid packages are present,
+    and only then proceed with the installation.
 723
+    """
 724
+
 725
+    hookenv.log('IBM SPECTRUM SCALE : Fetching the '
 726
+                'ibm_spectrum_scale_installer_manager resource', 'INFO')
 727
+    hookenv.status_set('active', 'fetching the '
 728
+                       'ibm_spectrum_scale_installer_manager resource')
 729
+    cfg_spectrum_scale_installer = (
 730
+        hookenv.resource_get('ibm_spectrum_scale_installer_manager'))
 731
+    hookenv.status_set('active', 'Fetched '
 732
+                       'ibm_spectrum_scale_installer_manager resource')
 733
+
 734
+    # If we don't have a package, report blocked status; we can't proceed.
 735
+    if (cfg_spectrum_scale_installer is False):
 736
+        hookenv.log('IBM SPECTRUM SCALE : Missing IBM Spectrum Scale'
 737
+                    ' required resources', 'INFO')
 738
+        hookenv.status_set('blocked', 'SPECTRUM SCALE required'
 739
+                           ' packages are missing')
 740
+        return
 741
+
 742
+    chk_empty_pkg = ["file", cfg_spectrum_scale_installer]
 743
+    p = Popen(chk_empty_pkg, stdout=PIPE, stderr=PIPE, shell=False)
 744
+    output, err = p.communicate()
 745
+    spectrumscale_installer_msg = str(output)
 746
+    if ("empty" in spectrumscale_installer_msg):
 747
+        hookenv.log('IBM SPECTRUM SCALE : The required'
 748
+                    ' ibm_spectrum_scale_installer resource is'
 749
+                    ' corrupt.', 'INFO')
 750
+        hookenv.status_set('blocked', 'SPECTRUM SCALE required'
 751
+                           ' package is not correct/empty')
 752
+        return
 753
+    else:
 754
+        gpfs_downloadpath = os.path.dirname(cfg_spectrum_scale_installer)
 755
+        # Extract the installer contents if the Spectrum Scale installer
 756
+        # is present
 757
+        os.chdir(gpfs_downloadpath)
 758
+        archivelist = glob.glob("*.tar.gz")
 759
+        if archivelist:
 760
+            archive.extract(str(archivelist[0]), gpfs_downloadpath)
 761
+            hookenv.log("IBM SPECTRUM SCALE : Extraction of IBM"
 762
+                        " Spectrum Scale packages is successfull")
 763
+            gpfs_install_filename = glob.glob("Spectrum_*Linux-install")
 764
+            if gpfs_install_filename:
 765
+                check_call(['chmod', '0755', gpfs_downloadpath +
 766
+                            "/" + str(gpfs_install_filename[0])])
 767
+                install_cmd = ([gpfs_downloadpath + "/" +
 768
+                               str(gpfs_install_filename[0]),
 769
+                               '--text-only', '--silent'])
 770
+                try:
 771
+                    check_call(install_cmd, shell=False)
 772
+                    check_call('cd /usr/lpp/mmfs/4.2.*', shell=True)
 773
+                    check_call(CMD_DEB_INSTALL, shell=True)
 774
+                    # To build GPFS portability layer.
 775
+                    build_modules()
 776
+                    # Delete the install folder after install
 777
+                    os.chdir(SPECTRUM_SCALE_INSTALL_PATH)
 778
+                    gpfs_install_folder = glob.glob("4.2.*")
 779
+                    for val in gpfs_install_folder:
 780
+                        shutil.rmtree(val)
 781
+                    hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is"
 782
+                                " installed successfully")
 783
+                    set_state('ibm-spectrum-scale-manager.installed')
 784
+                    hookenv.status_set('active', 'SPECTRUM SCALE is '
 785
+                                       'installed')
 786
+                except CalledProcessError as e:
 787
+                    hookenv.log("IBM SPECTRUM SCALE : Error while installing "
 788
+                                "Spectrum Scale, check the logs.")
 789
+                    hookenv.log(e.output)
 790
+                    hookenv.status_set('blocked', "IBM SPECTRUM SCALE : Error"
 791
+                                       " while installing SPECTRUM SCALE")
 792
+                    return
 793
+            else:
 794
+                hookenv.log("IBM SPECTRUM SCALE: Unable to extract the"
 795
+                            " SPECTRUM SCALE package content."
 796
+                            " Verify whether the package is corrupt or not")
 797
+                hookenv.status_set('blocked', 'IBM SPECTRUM SCALE package'
 798
+                                   ' is corrupt')
 799
+                return
 800
+
 801
+
 802
+@when('ibm-spectrum-scale-manager.installed')
 803
+@when_not('ibm-spectrum-scale-manager.updated')
 804
+def install_spectrum_scale_fixpack():
 805
+    """
 806
+    Installing the Spectrum Scale 4.2.2 fixpack. Check that a valid fixpack
+    is present, and only then proceed with installing it.
 808
+    """
 809
+
 810
+    hookenv.log('IBM SPECTRUM SCALE : Fetching the '
 811
+                'ibm_spectrum_scale_fixpack resource', 'INFO')
 812
+    hookenv.status_set('active', 'fetching the ibm_spectrum_scale_fixpack'
 813
+                       ' resource')
 814
+    cfg_ibm_spectrum_scale_fixpack = (hookenv.resource_get(
 815
+                                      'ibm_spectrum_scale_manager_fixpack'))
 816
+    hookenv.status_set('active', 'fetched ibm_spectrum_scale_fixpack resource')
 817
+    # If we don't have a fixpack, just exit successfully; there's nothing
+    # to do.
 819
+    if cfg_ibm_spectrum_scale_fixpack is False:
 820
+        hookenv.log('IBM SPECTRUM SCALE : No IBM Spectrum Scale fixpack'
 821
+                    ' to install', 'INFO')
 822
+        if not cluster_exists():
 823
+            hookenv.status_set('active', 'SPECTRUM SCALE is installed')
 824
+        elif cluster_exists():
 825
+            hookenv.status_set('active', 'Manager node is ready')
 826
+    else:
 827
+        chk_empty_fixpack = ["file", cfg_ibm_spectrum_scale_fixpack]
 828
+        p = Popen(chk_empty_fixpack, stdout=PIPE, stderr=PIPE, shell=False)
 829
+        output, err = p.communicate()
 830
+        spectrumscale_fixpack_msg = str(output)
 831
+        if ("empty" in spectrumscale_fixpack_msg):
 832
+            hookenv.log('IBM SPECTRUM SCALE : The required '
 833
+                        'ibm_spectrum_scale_fixpack resource is'
 834
+                        ' corrupt.', 'INFO')
 835
+            if not cluster_exists():
 836
+                hookenv.status_set('active', 'SPECTRUM SCALE is installed')
 837
+                return
 838
+            elif cluster_exists():
 839
+                hookenv.status_set('active', 'Manager node is ready')
 840
+                return
 841
+        else:
 842
+            upgrade_spectrumscale(cfg_ibm_spectrum_scale_fixpack)
 843
+
 844
+
 845
+@hook('upgrade-charm')
 846
+def check_fixpack():
 847
+    """
 848
+    The upgrade-charm hook will fire when a new resource is pushed for this
 849
+    charm. This is a good time to determine if we need to deal with a new
 850
+    fixpack.
 851
+    """
 852
+
 853
+    if not is_state('ibm-spectrum-scale-manager.updated'):
 854
+        hookenv.log("IBM SPECTRUM SCALE : No fixpack has been installed; "
 855
+                    "nothing to upgrade.")
 856
+        return
 857
+    else:
 858
+        hookenv.log("IBM SPECTRUM SCALE: scanning for new fixpacks to install"
 859
+                    )
 860
+        fixpack_dir = (charm_dir +
+                       "/../resources/ibm_spectrum_scale_manager_fixpack/"
+                       "Spectrum_Scale_Standard_Fixpack.tar.gz")
 863
+        if os.path.exists(fixpack_dir):
 864
+            mdsum = ["md5sum", fixpack_dir]
 865
+            p = Popen(mdsum, stdout=PIPE, stderr=PIPE, shell=False)
 866
+            output, err = p.communicate()
 867
+            value = output.split()
 868
+            CUR_FP1_MD5 = str(value[0])
 869
+            # Calling resource-get here will fetch the fixpack resource.
 870
+            new_fixpack = hookenv.resource_get(
+                'ibm_spectrum_scale_manager_fixpack')
 871
+            if new_fixpack is False:
 872
+                hookenv.log("IBM SPECTRUM SCALE : No new fixpack to install")
 873
+            else:
 874
+                mdsum_new = ["md5sum", new_fixpack]
 875
+                p1 = Popen(mdsum_new, stdout=PIPE, stderr=PIPE, shell=False)
 876
+                output1, err = p1.communicate()
 877
+                value1 = output1.split()
 878
+                NEW_FP1_MD5 = str(value1[0])
 879
+                # If sums don't match, we have a new fp. Configure states so
 880
+                # we re-run install_ibm_spectrum_scale_fixpack().
 881
+                if CUR_FP1_MD5 != NEW_FP1_MD5:
 882
+                    hookenv.log("IBM SPECTRUM SCALE : new fixpack detected")
 883
+                    remove_state('ibm-spectrum-scale-manager.updated')
 884
+                else:
 885
+                    hookenv.log("IBM SPECTRUM SCALE : no new fixpack to "
 886
+                                "install")
 887
+        else:
 888
+            hookenv.log("IBM SPECTRUM SCALE :  no new fixpack to install")
 889
+
 890
+
 891
+@when_not('quorum.connected')
 892
+@when('ibm-spectrum-scale-manager.installed')
 893
+def notify_user_gpfscluster_notready():
 894
+    """
 895
+    Notify the user that if the number of units is < 2, the cluster will not
+    be created. To create a Spectrum Scale cluster, a minimum of 2 units is
+    required.
 897
+    """
 898
+
 899
+    if not cluster_exists():
 900
+        # Minimum two units are required to create a Spectrum Scale cluster.
 901
+        hookenv.log("IBM SPECTRUM SCALE : Waiting to be joined to a peer unit"
 902
+                    " to form/add to a Spectrum Scale Cluster. To create a"
 903
+                    " cluster, minimum 2 units are required")
 904
+        hookenv.status_set('blocked', 'Waiting to be joined to a peer '
 905
+                           'Spectrum Scale manager unit')
 906
+
 907
+
 908
+@when('gpfsmanager.connected')
 909
+@when('ibm-spectrum-scale-manager.installed')
 910
+def send_details_client(gpfsmanager):
 911
+    """
 912
+    Forward the connection info details (ssh keys and host info) to the
+    client unit.
 914
+    """
 915
+
 916
+    privkey, pubkey = get_ssh_keys()
 917
+    # Send details to the gpfs client
 918
+    gpfsmanager.set_hostname(MANAGER_HOSTNAME)
 919
+    gpfsmanager.set_ssh_key(privkey, pubkey)
 920
+
 921
+
 922
+@when('gpfsmanager.ready')
+@when_any('ibm-spectrum-scale-manager.node.ready',
+          'ibm-spectrum-scale-manager.cluster.ready')
+def get_details_client(gpfsmanager):
+    """
+    Get the connection info (hostname/ip and ssh keys) from the client
+    units. This info is used to add the client node to the existing
+    cluster. Until the cluster is created, this handler will not be called.
+    """
+
+    hook_compare_del = 'relation-departed'
+    hook_compare_stor = 'storage-detaching'
+    current_hook = hookenv.hook_name()
+    if hook_compare_del in current_hook or hook_compare_stor in current_hook:
+        return
+
+    hostname_clients = gpfsmanager.get_hostnames()
+    ip_clients = gpfsmanager.get_ips()
+    pub_key_clients = gpfsmanager.get_pubclient_keys()
+    for host_client, ip_address_client, pubclient in zip(
+            hostname_clients, ip_clients, pub_key_clients):
+        exists = False
+        host_client = str(host_client)
+        ip_address_client = str(ip_address_client)
+        pubclient = str(pubclient)
+        add_ssh_key(pubclient)
+        hookenv.log("IBM SPECTRUM SCALE : Client Hostname getting connected "
+                    "is : %s" % host_client)
+        hookenv.log("IBM SPECTRUM SCALE : Client IP getting connected "
+                    "is : %s" % ip_address_client)
+
+        # Check whether the client host details already exist; if they
+        # do not, add them
+        searchtext = str(ip_address_client)+" "+str(host_client)
+        with open(FILEPATH_HOSTFILE, "r") as searchfile:
+            for line in searchfile:
+                if searchtext in line:
+                    exists = True
+        if exists is False:
+            setadd_hostname(host_client, ip_address_client)
+
+        # Add the client node to the GPFS cluster
+        # check whether the current node is the cluster manager or not
+        try:
+            if check_node(MANAGER_HOSTNAME):
+                output = check_output(split('mmlsmgr -c'))
+                output = output.decode("utf-8")
+                s = re.search(MANAGER_HOSTNAME, output)
+                # Add the client node. Check that the cluster exists and
+                # the node is not part of the cluster yet
+                if cluster_exists() and not node_exists(host_client) and s:
+                    notify_client = "No"
+                    hookenv.log("IBM SPECTRUM SCALE : Adding Spectrum Scale "
+                                "client node to the cluster")
+                    try:
+                        add_node(host_client, q_designation='nonquorum')
+                        check_output(split('mmchlicense client --accept -N %s'
+                                     % host_client))
+                        hookenv.log(check_output(split('mmstartup -N %s'
+                                    % host_client)))
+                        notify_client = "Yes"
+                        gpfsmanager.set_notify_client(notify_client)
+                    except CalledProcessError as e:
+                        hookenv.log("IBM SPECTRUM SCALE : Issue while "
+                                    "adding client node/starting the node")
+                        hookenv.log(e.output)
+                    except FileNotFoundError:
+                        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is "
+                                    "not installed.")
+                        return
+        except CalledProcessError:
+            hookenv.log("IBM SPECTRUM SCALE : The cluster may be down.")
+
+
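The /etc/hosts lookup in get_details_client above is repeated almost verbatim for peer units in exchange_data_peers further down. A sketch of a shared helper could factor it out; the name host_entry_exists is hypothetical and not defined anywhere in the charm:

def host_entry_exists(ip_address, hostname, hosts_file="/etc/hosts"):
    """Return True if an 'ip hostname' pair is already present."""
    entry = "{} {}".format(ip_address, hostname)
    with open(hosts_file, "r") as hosts:
        return any(entry in line for line in hosts)

Both handlers could then guard the call with a single check, e.g. `if not host_entry_exists(ip_address_client, host_client): setadd_hostname(host_client, ip_address_client)`.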
+@when('gpfsmanager.departing')
+def get_details_departedclient(gpfsmanager):
+    """
+    When the relation between client and manager is removed, this handler
+    is called to mark the client nodes for deletion.
+    """
+
+    hostname_clients = gpfsmanager.get_hostnames()
+    del_nodename = str(hostname_clients[0])
+    if node_exists(del_nodename) and not check_designation(del_nodename):
+        with open(TEMP_FILE_DEL, 'a') as idfile:
+            idfile.write(del_nodename + "\n")
+    try:
+        gpfsmanager.dismiss()
+    except AttributeError:
+        hookenv.log("IBM SPECTRUM SCALE : No client units to depart")
+
+
+@when('quorum.connected')
+@when('ibm-spectrum-scale-manager.installed')
+def send_details_peer(quorum):
+    """
+    If we have multiple manager units, each manager should be able to
+    communicate with the others, so pass along the connection info details.
+    """
+
+    hookenv.log("IBM SPECTRUM SCALE : Sending host details to peer unit")
+    # send host details
+    quorum.set_hostname_peer(MANAGER_HOSTNAME)
+    # send the ssh details
+    privkey, pubkey = get_ssh_keys()
+    # Send details to the peer node
+    quorum.set_ssh_key(pubkey)
+    # For sending the attached devices list
+    if is_state('ibm-spectrum-scale-manager.disks-storage-attached'):
+        devices_list = get_devices()
+        quorum.set_storagedisk_peer(devices_list)
+        hookenv.log("IBM SPECTRUM SCALE : Sending storage disks details"
+                    " to peer unit")
+
+
+@when('quorum.available')
+@when('ibm-spectrum-scale-manager.installed')
+@when_not('quorum.departing')
+def exchange_data_peers(quorum):
+    """
+    Get the connection info details for each connected peer unit. Add the
+    hostname/ip and public ssh key info. Create the node description/nsd disk
+    files. Each manager peer node will be added to the existing cluster.
+    """
+
+    hook_compare_del = 'relation-departed'
+    hook_compare_stor = 'storage-detaching'
+    current_hook = hookenv.hook_name()
+    if hook_compare_del in current_hook or hook_compare_stor in current_hook:
+        return
+
+    peer_hostnames = quorum.get_hostname_peers()
+    peer_ips = quorum.get_unitips()
+    # If no cluster exists yet, start with fresh node and nsd descriptor files
+    if not cluster_exists():
+        try:
+            os.chdir(GPFS_FILES_PATH)
+            open('gpfs-nodes.list', 'w').close()
+            open('gpfs-diskdesc.txt', 'w').close()
+        except OSError:
+            pass
+    with open(FILEPATH_GPFSNODES, 'a') as idfile:
+        if os.stat(FILEPATH_GPFSNODES).st_size == 0:
+            idfile.write(MANAGER_HOSTNAME+':quorum-manager:'+"\n")
+
+    # Get the device location list for the peers getting connected
+    if (is_state('ibm-spectrum-scale-manager.disks-storage-attached') and
+            not gpfs_filesystem_exists()):
+        DEVICES_LIST_MANAGER = get_devices()
+        exists_flag = False
+        for storage_location in DEVICES_LIST_MANAGER:
+            storage_location = str(storage_location)
+            NSDVALMGR = ("nsd_" + storage_location + "_" + MANAGER_HOSTNAME)
+            NSDVALMGR = re.sub(r'-', "", NSDVALMGR)
+            NSDVALMGR = re.sub(r'/', "", NSDVALMGR)
+            SERVER = MANAGER_HOSTNAME
+            with open(FILEPATH_NSD, "r") as searchfile:
+                for line in searchfile:
+                    if NSDVALMGR in line:
+                        exists_flag = True
+            if exists_flag is False:
+                with open(FILEPATH_NSD, 'a') as idfile:
+                    idfile.write("%nsd:" + "\n" + " " + "device=%s"
+                                 % storage_location + "\n" + " nsd=%s"
+                                 % NSDVALMGR + "\n" + " servers=%s"
+                                 % SERVER + "\n" +
+                                 " usage=dataAndMetadata" + "\n")
+    # Get the ssh key details, so that each peer can ssh to the others
+    pubkeys = quorum.get_pub_keys()
+    for public_key in pubkeys:
+        public_key = str(public_key)
+        add_ssh_key(public_key)
+    for host_peer, ip_address_peer in zip(peer_hostnames, peer_ips):
+        exists = False
+        exists1 = False
+        exists2 = False
+        PEER_GPFS_READY = 'No'
+        host_peer = str(host_peer)
+        ip_address_peer = str(ip_address_peer)
+        # Check whether the peer unit host details already exist;
+        # if they do not, add them
+        FILEPATH_HOSTFILE = "/etc/hosts"
+        searchtext = str(ip_address_peer)+" "+str(host_peer)
+        with open(FILEPATH_HOSTFILE, "r") as searchfile:
+            for line in searchfile:
+                if searchtext in line:
+                    exists = True
+        if exists is False:
+            setadd_hostname(host_peer, ip_address_peer)
+
+        # Adding peer node information in the node descriptor file
+        with open(FILEPATH_GPFSNODES, "r") as searchfile:
+            for line in searchfile:
+                if host_peer in line:
+                    exists1 = True
+        if exists1 is False:
+            with open(FILEPATH_GPFSNODES, 'a') as idfile:
+                idfile.write(host_peer+':quorum-manager:'+"\n")
+    # Create the spectrum scale cluster
+    if is_leader() and not cluster_exists():
+        # Call create_gpfs_cluster function
+        create_gpfs_cluster()
+        PEER_GPFS_READY = "ClusterReady"
+        quorum.notify_peerready(PEER_GPFS_READY)
+    elif cluster_exists() and not node_exists(host_peer):
+        # check whether the current node is the cluster manager or not
+        # Issue mmlsmgr -c command
+        time.sleep(120)
+        try:
+            output = check_output(split('mmlsmgr -c'))
+            output = output.decode("utf-8")
+            s = re.search(MANAGER_HOSTNAME, output)
+            # if a match is found.
+            if s:
+                hookenv.log("IBM SPECTRUM SCALE : Adding additional"
+                            " manager nodes in progress............")
+                add_node(host_peer, q_designation='quorum-manager')
+                try:
+                    check_output(split('mmchlicense server --accept -N %s'
+                                 % host_peer))
+                    hookenv.log(check_output(split('mmstartup -N %s'
+                                % host_peer)))
+                except CalledProcessError as e:
+                    hookenv.log("IBM SPECTRUM SCALE : Issue while"
+                                " applying the license or starting up "
+                                "the manager unit.")
+                    hookenv.log(e.output)
+                    return
+                except FileNotFoundError:
+                    hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is "
+                                "not installed yet.")
+                    return
+        except CalledProcessError:
+            hookenv.log("IBM SPECTRUM SCALE : The gpfs cluster may not be "
+                        "active yet.")
+        except FileNotFoundError:
+            hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not "
+                        "installed.")
+    else:
+        hookenv.log("IBM SPECTRUM SCALE : Manager node already exists")
+    # Create the nsd disk descriptor file if one or more storage disks are
+    # attached
+    peer_device_lists = quorum.get_storagedisks_peers()
+    for host_peer, device_list in zip(peer_hostnames, peer_device_lists):
+        host_peer = str(host_peer)
+        device_list = str(device_list)
+        if str(device_list) != "None":
+            for val in '[\']':
+                device_list = device_list.replace(val, '')
+            for val in device_list.split(','):
+                DEVICE_LIST2.append(val)
+        # Adding peer node nsd disks information
+        for storage_location_peer in DEVICE_LIST2:
+            exists2 = False
+            storage_location_peer = storage_location_peer.strip()
+            NSDVAL = "nsd_" + storage_location_peer + "_" + host_peer
+            NSDVAL = re.sub(r'-', "", NSDVAL)
+            NSDVAL = re.sub(r'/', "", NSDVAL)
+            SERVER = host_peer
+            with open(FILEPATH_NSD, "r") as searchfile:
+                for line in searchfile:
+                    if NSDVAL in line:
+                        exists2 = True
+            if exists2 is False:
+                with open(FILEPATH_NSD, 'a') as idfile:
+                    idfile.write("%nsd:"+"\n" + " "+"device=%s"
+                                 % storage_location_peer + "\n" + " nsd=%s"
+                                 % NSDVAL + "\n" + " servers=%s" % SERVER +
+                                 "\n" + " usage=dataAndMetadata" + "\n")
+            # Call function for adding disks to the existing filesystem
+            add_disk_file_system(NSDVAL)
+        DEVICE_LIST2.clear()
+    if os.stat(FILEPATH_NSD).st_size != 0:
+        # Create the spectrum scale file system
+        # Call create_file_system to create the gpfs filesystem
+        create_file_system()
+
+
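For reference, each entry exchange_data_peers appends to the disk descriptor file is a plain GPFS NSD stanza. A minimal sketch of the same string building follows; the helper name format_nsd_stanza and the device/hostname values are hypothetical, not charm code:

def format_nsd_stanza(device, hostname):
    # Mirrors the stanza written by the handler above; illustrative only.
    nsd_name = "nsd_{}_{}".format(device, hostname)
    nsd_name = nsd_name.replace("-", "").replace("/", "")
    return ("%nsd:\n"
            " device={}\n"
            " nsd={}\n"
            " servers={}\n"
            " usage=dataAndMetadata\n".format(device, nsd_name, hostname))


print(format_nsd_stanza("/dev/xvdb", "mgr-node-1"))
# %nsd:
#  device=/dev/xvdb
#  nsd=nsd_devxvdb_mgrnode1
#  servers=mgr-node-1
#  usage=dataAndMetadata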
+@when_not('ibm-spectrum-scale-manager.node.ready')
+@when('quorum.cluster.ready')
+def cluster_ready(quorum):
+    """
+    Check whether the current node is part of the cluster and active; if it
+    is, set the state.
+    """
+
+    if check_node(MANAGER_HOSTNAME):
+        set_state('ibm-spectrum-scale-manager.node.ready')
+        hookenv.log("IBM SPECTRUM SCALE : Manager node is active and"
+                    " successfully added to the cluster")
+        hookenv.status_set('active', 'Manager node is ready')
+    else:
+        hookenv.log("IBM SPECTRUM SCALE : Manager node is not active, some"
+                    " issue might have occurred, please check the logs")
+
+
+@when('quorum.departing')
+def get_details_departedmgrpeer(quorum):
+    """
+    When a peer manager unit departs, this handler will be called to
+    mark that node for deletion.
+    """
+    peer_hostnames = quorum.get_hostname_peers()
+    del_peer_node = str(peer_hostnames[0])
+    with open(TEMP_FILE_DEL, 'a') as idfile:
+        idfile.write(del_peer_node + "\n")
+    try:
+        quorum.dismiss_departed()
+    except AttributeError:
+        hookenv.log("IBM SPECTRUM SCALE : No peer manager units to depart")
+
+
+@when_any('ibm-spectrum-scale-manager.node.ready',
+          'ibm-spectrum-scale-manager.cluster.ready')
+def check_cluster_for_unknown_nodes():
+    """
+    This handler checks for nodes that are down and also marked for
+    deletion; when both conditions match, the node is removed from
+    the cluster.
+    """
+
+    quorum_val = check_num_quorum()
+    if quorum_val < 2:
+        return
+    list_del_nodes = ['none']
+    try:
+        output = check_output(split('mmlsmgr -c'))
+        output = output.decode("utf-8")
+        s = re.search(MANAGER_HOSTNAME, output)
+        # if a match is found.
+        if s:
+            del_down_nodes = check_down_nodes()
+            if not del_down_nodes:
+                return
+            try:
+                with open(TEMP_FILE_DEL, 'r') as idfile:
+                    list_del_nodes = idfile.readlines()
+            except FileNotFoundError:
+                return
+            for node in del_down_nodes:
+                if any(node in x for x in list_del_nodes):
+                    hookenv.log("IBM SPECTRUM SCALE : Node going to be "
+                                "deleted is : %s" % node)
+                    try:
+                        hookenv.log(check_output(split('mmdelnode -N %s'
+                                    % node)))
+                        f = open(TEMP_FILE_DEL, 'r+')
+                        d = f.readlines()
+                        f.seek(0)
+                        for i in d:
+                            if node not in i:
+                                f.write(i)
+                        f.truncate()
+                        f.close()
+                    except CalledProcessError:
+                        hookenv.log("IBM SPECTRUM SCALE : Issue with node "
+                                    "deletion, check the logs")
+        elif not s and cluster_exists():
+            # for nodes which are not the leader, update the node temp file
+            # based upon the nodes that are part of the cluster; if some
+            # node is in the temp_del file but not part of the cluster,
+            # remove that entry.
+            try:
+                with open(TEMP_FILE_DEL, 'r') as idfile:
+                    list_del_nodes = idfile.readlines()
+            except FileNotFoundError:
+                return
+            if not list_del_nodes:
+                return
+            for node in list_del_nodes:
+                if not node_exists(node) and not node:
+                    f = open(TEMP_FILE_DEL, 'r+')
+                    d = f.readlines()
+                    f.seek(0)
+                    for i in d:
+                        if node not in i:
+                            f.write(i)
+                    f.truncate()
+                    f.close()
+    except CalledProcessError:
+        hookenv.log("IBM SPECTRUM SCALE : The gpfs cluster may not be "
+                    "active yet.")
+    except FileNotFoundError:
+        hookenv.log("IBM SPECTRUM SCALE : Spectrum Scale is not installed.")
+
+    if cluster_exists() is False:
+        try:
+            os.chdir(GPFS_FILES_PATH)
+            hookenv.log("IBM SPECTRUM SCALE : Cleaning out the node and"
+                        " nsd descriptor files if the cluster is removed.")
+            open('gpfs-nodes.list', 'w').close()
+            open('gpfs-diskdesc.txt', 'w').close()
+            open('/usr/lpp/mmfs/temp_file_del', 'w').close()
+            open(TEMP_FILE_DEL, 'w').close()
+            remove_state('ibm-spectrum-scale-manager.cluster.ready')
+            remove_state('ibm-spectrum-scale-manager.node.ready')
+        except OSError:
+            pass
+
+
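The "rewrite the deletion file without a given node" sequence appears twice in check_cluster_for_unknown_nodes. A sketch of a shared helper could look like the following; drop_node_entry is an illustrative name, not something the charm defines:

def drop_node_entry(nodename, del_file):
    """Rewrite del_file, keeping only lines that do not mention nodename."""
    with open(del_file, 'r+') as f:
        lines = f.readlines()
        f.seek(0)
        for line in lines:
            if nodename not in line:
                f.write(line)
        f.truncate()

Both branches could then call drop_node_entry(node, TEMP_FILE_DEL) instead of repeating the open/seek/truncate sequence.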
+@hook('disks-storage-detaching')
+def stop_disk_dettach():
+    """
+    If a manager unit is being removed, check whether it has attached disks.
+    If it does, it cannot be removed, because there is data on the disks
+    that is used by the Spectrum Scale cluster. Manual intervention is
+    required, so the charm errors out so that the user can perform the
+    proper steps for deletion and no data is lost.
+    """
+
+    if check_nsd_node(MANAGER_HOSTNAME):
+        # Filesystem exists and this node has disks attached or is nsd server
+        hookenv.status_set('blocked', 'This node has disk')
+        hookenv.log("IBM SPECTRUM SCALE : Error !! This node is an NSD server"
+                    " and has a disk attached, so the disk cannot be detached"
+                    " now. The storage admin has to perform the steps for"
+                    " removing this node as an NSD server. Once it is"
+                    " unmarked as an NSD server the error will be resolved"
+                    " and the disk will be detached")
+        sys.exit(1)
+
+
+@hook('stop')
+def remove_unit_fromcluster():
+    """
+    Check whether the node going to be removed is an nsd server or not.
+    If it is, the machine should not be released; error out so that the
+    storage admin can manually perform the steps for node deletion
+    without the data getting affected.
+    """
+
+    server_mmfs_filepath = '/var/mmfs/gen/nodeFiles/mmfsserverlicense.rel'
+    d = ['none']
+    nodefile = glob.glob(server_mmfs_filepath)
+    if check_nsd_node(MANAGER_HOSTNAME):
+        hookenv.status_set('blocked', 'node has disk/nsd server')
+        hookenv.log("IBM SPECTRUM SCALE : Error !! This node is an NSD "
+                    "server for a disk, so the node cannot be deleted; the "
+                    "admin has to perform the proper steps for removing this "
+                    "NSD server manually. Once this node is no longer an NSD "
+                    "server, the node will be removed and the machine will "
+                    "be released !!")
+        sys.exit(1)
+    # Node does not have any disks attached and is not an nsd server:
+    # delete those nodes from the cluster and release the machine
+    elif not check_nsd_node(MANAGER_HOSTNAME) and cluster_exists():
+        hookenv.log("IBM SPECTRUM SCALE : Node deletion is going "
+                    "on, please wait until this unit is removed")
+        del_nodes_cluster()
+        if nodefile:
+            try:
+                with open(server_mmfs_filepath, "r+") as f:
+                    d = f.readlines()
+            except FileNotFoundError:
+                pass
+            for i in d:
+                wait = 0
+                while MANAGER_HOSTNAME in i:
+                    # Wait until this node gets removed from the cluster
+                    time.sleep(40)
+                    wait = wait + 1
+                    nodefile = (glob.glob(
+                        "/var/mmfs/gen/nodeFiles/mmfsserverlicense.rel"))
+                    if not nodefile:
+                        break
+                    elif wait > 100:
+                        break
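The polling loop at the end of remove_unit_fromcluster is effectively a bounded wait on the server license file. A sketch with the same 40-second interval and roughly the same upper bound is shown below; the helper name and its boolean return value are assumptions, not charm code:

import glob
import time


def wait_for_license_release(hostname, interval=40, max_checks=100):
    """Poll until hostname is gone from the server license file, or the
    file itself disappears; give up after max_checks polls."""
    license_file = '/var/mmfs/gen/nodeFiles/mmfsserverlicense.rel'
    for _ in range(max_checks):
        matches = glob.glob(license_file)
        if not matches:
            return True
        with open(license_file) as f:
            if hostname not in f.read():
                return True
        time.sleep(interval)
    return False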

requirements.txt

---
+++ requirements.txt
@@ -0,0 +1,2 @@
+flake8
+pytest

revision

---
+++ revision
@@ -0,0 +1 @@
+0

tests/01-deploy.py

---
+++ tests/01-deploy.py
@@ -0,0 +1,46 @@
+#!/usr/bin/env python3
+
+import unittest
+import amulet
+
+seconds_to_wait = 20000
+
+
+class BundleTest(unittest.TestCase):
+    """ Create a class for testing the charm in the unit test framework. """
+    @classmethod
+    def setUpClass(self):
+        """
+        Deployment test for IBM Spectrum Scale Manager.
+        This will test the creation of a spectrum scale cluster
+        and adding a client node to the cluster.
+        """
+        self.d = amulet.Deployment(series='xenial')
+        self.d.add('ibm-spectrum-scale-manager',
+                   'cs:~ibmcharmers/ibm-spectrum-scale-manager',
+                   units=2)
+        self.d.add('ibm-spectrum-scale-client',
+                   'cs:~ibmcharmers/ibm-spectrum-scale-client')
+        self.d.add('ubuntu')
+        self.d.relate('ubuntu:juju-info',
+                      'ibm-spectrum-scale-client:juju-info')
+        self.d.relate('ibm-spectrum-scale-manager:gpfsmanager',
+                      'ibm-spectrum-scale-client:gpfsmanager')
+        self.d.setup(seconds_to_wait)
+        self.d.sentry.wait(seconds_to_wait)
+
+    def test_unit_deployed(self):
+        # verify unit
+        self.assertTrue(self.d.deployed)
+        unit_manager_0 = self.d.sentry['ibm-spectrum-scale-manager'][0]
+        cmd1, code = unit_manager_0.run("/usr/lpp/mmfs/bin/mmlscluster")
+        if code != 0:
+            message = ('mmlscluster command failed to run; the'
+                       ' cluster may be down.')
+            amulet.raise_status(amulet.FAIL, msg=message)
+        print('The output of running the mmlscluster command is \n')
+        print(str(cmd1))
+
+
+if __name__ == '__main__':
+    unittest.main()
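The amulet test above only asserts that mmlscluster runs on the first manager unit. A follow-on check in the same style could confirm the client hostname actually shows up in the cluster listing; the test name and assertions below are illustrative only and are not part of the submitted test:

    def test_client_node_added(self):
        # Illustrative check: the client unit's hostname should appear in
        # the cluster listing once the gpfsmanager relation has settled.
        unit_manager_0 = self.d.sentry['ibm-spectrum-scale-manager'][0]
        unit_client_0 = self.d.sentry['ibm-spectrum-scale-client'][0]
        client_hostname, code = unit_client_0.run("hostname")
        if code != 0:
            amulet.raise_status(amulet.FAIL,
                                msg='Could not read the client hostname.')
        output, code = unit_manager_0.run("/usr/lpp/mmfs/bin/mmlscluster")
        if code != 0 or client_hostname.strip() not in output:
            amulet.raise_status(amulet.FAIL,
                                msg='Client node was not added to the '
                                    'cluster.')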

tests/tests.yaml

---
+++ tests/tests.yaml
@@ -0,0 +1,4 @@
+packages:
+  - amulet
+  - python3
+  - tar

tox.ini

---
+++ tox.ini
@@ -0,0 +1,12 @@
+[tox]
+skipsdist=True
+envlist = py34, py35
+skip_missing_interpreters = True
+
+[testenv]
+commands = py.test -v
+deps =
+    -r{toxinidir}/requirements.txt
+
+[flake8]
+exclude=docs