~ibmcharmers/trusty/ibm-cinder-spectrumscale

Owner: shilkaul
Status: Needs Review
Vote: +0 (+2 needed for approval)

CPP?: No
OIL?: No

This charm is for IBM Cinder-SpectrumScale. It provides a Spectrum Scale storage backend for Cinder.

The code can be found in the following repository:
https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-cinder-spectrumscale/trunk


Tests

Substrate   Status   Last Updated
lxc         RETRY    19 days ago
gce         RETRY    19 days ago
aws         RETRY    19 days ago



Policy Checklist


General

Must verify that any software installed or utilized is verified as coming from the intended source.
  • Any software installed from the Ubuntu or CentOS default archives satisfies this due to the apt and yum sources including cryptographic signing information.
  • Third party repositories must be listed as a configuration option that can be overridden by the user and not hard coded in the charm itself.
  • Launchpad PPAs are acceptable as the add-apt-repository command retrieves the keys securely.
  • Other third party repositories are acceptable if the signing key is embedded in the charm.
Must provide a means to protect users from known security vulnerabilities in a way consistent with best practices as defined by either operating system policies or upstream documentation.
  • In practice, this means there must be instructions on how to apply updates if you use software not from distribution channels.
Must have hooks that are idempotent.
Should be built using charm layers.
Should use Juju Resources to deliver required payloads.
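
The idempotency item above is the one most often missed in practice: a hook must be safe to re-run, leaving the unit in the same state the second time as the first. A minimal illustration of the pattern (the helper name and file content are hypothetical, not taken from this charm):

```python
import os


def ensure_line(path, line):
    """Append `line` to `path` only if it is not already present.

    Running this twice leaves the file unchanged after the first run,
    which is exactly the property an idempotent hook body needs: the
    hook can be invoked repeatedly by Juju without duplicating state.
    """
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line not in existing:
        with open(path, 'a') as f:
            f.write(line + '\n')
```

A config-changed hook built out of such check-then-act steps can be re-fired by Juju at any time without side effects.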

Testing and Quality

charm proof must pass without errors or warnings. petevg
Must include passing unit, functional, or integration tests.
Tests must exercise all relations.
Tests must exercise config.
  • At a minimum, set-config, unset-config, and re-set must be tested.
Must not use anything infrastructure-provider specific (i.e. querying EC2 metadata service).
Must be self contained unless the charm is a proxy for an existing cloud service, e.g. ec2-elb charm.
Must not use symlinks. petevg
Bundles must only use promulgated charms, they cannot reference charms in personal namespaces. petevg
Must call Juju hook tools (relation-*, unit-*, config-*, etc) without a hard coded path. petevg
Should include a tests.yaml for all integration tests.
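
The hook-tool item flagged by petevg is about invocation style: during hook execution Juju puts tools such as relation-get and config-get on PATH, so hooks should build the command line without an absolute tools directory. A hedged sketch of the pattern (helper name is illustrative, not from this charm):

```python
def hook_tool_argv(tool, *args):
    """Build an argv for a Juju hook tool (relation-get, config-get, ...).

    Relies on the PATH that Juju sets up for hook execution instead of a
    hard coded location such as /var/lib/juju/tools/..., which breaks
    across Juju versions and providers.
    """
    if '/' in tool:
        raise ValueError('hook tools must not be called via a hard coded path')
    return [tool] + [str(a) for a in args]
```

A hook would then run, for example, `subprocess.check_output(hook_tool_argv('config-get', 'gpfs_mount_point_base'))`.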

Metadata

Must include a full description of what the software does. petevg
Must include a maintainer email address for a team or individual who will be responsive to contact. petevg
Must include a license. Call the file 'copyright' and make sure all files' licenses are specified clearly. petevg
Must be under a Free license. petevg
Must have a well documented and valid README.md. petevg
Must describe the service. petevg
Must describe how it interacts with other services, if applicable. petevg
Must document the interfaces. petevg
Must show how to deploy the charm. petevg
Must define external dependencies, if applicable. petevg
Should link to a recommended production usage bundle and recommended configuration if this differs from the default.
Should reference and link to upstream documentation and best practices.

Security

Must not run any network services using default passwords.
Must verify and validate any external payload.
  • Known and understood packaging systems that verify packages like apt, pip, and yum are ok.
  • wget | sh style is not ok.
Should make use of whatever Mandatory Access Control system is provided by the distribution.
Should avoid running services as root.
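
For the payload-verification items, anything fetched outside a signing packaging system (apt, pip, yum) should be checked against a known digest before use. A minimal sketch; in a real charm the expected digest would come from charm config or be embedded in the charm, an assumption here:

```python
import hashlib


def payload_ok(data, expected_sha256):
    """Return True only if the payload bytes match the expected SHA-256
    digest. A charm should refuse to install the payload otherwise,
    rather than piping an unverified download into a shell."""
    return hashlib.sha256(data).hexdigest() == expected_sha256.lower()
```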


Source Diff

Files changed: 118

Inline diff comments: 0


LICENSE

--- 
+++ LICENSE
@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   [standard Apache License, Version 2.0 terms and conditions]
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

Makefile

--- 
+++ Makefile
@@ -0,0 +1,26 @@
+#!/usr/bin/make
+PYTHON := /usr/bin/env python
+
+lint:
+	@tox -e pep8
+
+test:
+	@echo Starting unit tests...
+	@tox -e py27
+
+functional_test:
+	@echo Starting Amulet tests...
+	@tox -e func27
+
+bin/charm_helpers_sync.py:
+	@mkdir -p bin
+	@bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
+        > bin/charm_helpers_sync.py
+
+sync: bin/charm_helpers_sync.py
+	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-hooks.yaml
+	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers-tests.yaml
+
+publish: lint test
+	bzr push lp:charms/ibm-cinder-spectrumscale
+	bzr push lp:ibmcharmers/trusty/ibm-cinder-spectrumscale

README.md

--- 
+++ README.md
@@ -0,0 +1,30 @@
+Spectrum Scale Storage Backend for Cinder
+-----------------------------------------
+
+Overview
+========
+
+This charm provides a Spectrum Scale storage backend for use with the Cinder
+charm; this allows a single Spectrum Scale storage cluster to be associated
+with a single Cinder deployment, potentially alongside other storage
+backends from other vendors.
+
+To use:
+
+    juju deploy cinder
+    juju deploy ibm-spectrum-scale-client
+    juju deploy ibm-cinder-spectrumscale
+    juju add-relation ibm-cinder-spectrumscale ibm-spectrum-scale-client
+    juju add-relation ibm-cinder-spectrumscale cinder
+
+The charm would be deployed and will wait for the GPFS mount point value from the user.
+To set this run the following command:
+
+    juju config ibm-cinder-spectrumscale gpfs_mount_point_base="path/to/the/mount/point"
+
+Configuration
+=============
+All the values in config.yaml file can be configured as shown below:
+
+    juju config ibm-cinder-spectrumscale gpfs_user_login="loginname"
+    juju config ibm-cinder-spectrumscale gpfs_images_dir="gpfsdirpath"

charm-helpers-hooks.yaml

--- 
+++ charm-helpers-hooks.yaml
@@ -0,0 +1,14 @@
+branch: lp:~openstack-charmers/charm-helpers/stable
+destination: hooks/charmhelpers
+include:
+    - core
+    - osplatform
+    - cli
+    - fetch
+    - contrib.openstack|inc=*
+    - contrib.openstack.utils
+    - contrib.storage
+    - contrib.hahelpers
+    - contrib.network.ip
+    - contrib.python.packages
+    - payload.execd

charm-helpers-tests.yaml

--- 
+++ charm-helpers-tests.yaml
@@ -0,0 +1,5 @@
+branch: lp:~openstack-charmers/charm-helpers/stable
+destination: tests/charmhelpers
+include:
+    - contrib.amulet
+    - contrib.openstack.amulet

config.yaml

--- 
+++ config.yaml
@@ -0,0 +1,60 @@
+options:
+  volume-driver:
+    type: string
+    default: cinder.volume.drivers.ibm.gpfs.GPFSRemoteDriver
+    description: |
+      This value denotes the volume driver value.
+  gpfs_mount_point_base:
+    type: string
+    default:
+    description: |
+      Specifies the path of the GPFS directory where Block
+      Storage volume and snapshot files are stored.
+  gpfs_sparse_volumes:
+    type: boolean
+    default: True
+    description: |
+      Specifies that volumes are created as sparse files
+      which initially consume no space. If set to False, the
+      volume is created as a fully allocated file, in which
+      case, creation may take a significantly longer time.
+  gpfs_storage_pool:
+    type: string
+    default: system
+    description: |
+      Specifies the storage pool that volumes are assigned
+      to. By default, the system storage pool is used.
+  gpfs_user_login:
+    type: string
+    default: root
+    description: |
+      Username for GPFS nodes
+  gpfs_private_key:
+    type: string
+    default: /var/lib/cinder/id_rsa
+    description: |
+       Filename of private key to use for SSH authentication
+  gpfs_ssh_port:
+    type: int
+    default: 22
+    description: |
+      SSH port to use
+  gpfs_images_dir:
+    type: string
+    default:
+    description: |
+       Specifies the path of the Image service repository in
+       GPFS.  Leave undefined if not storing images in GPFS.
+  gpfs_images_share_mode:
+    type: string
+    default:
+    description: |
+       Specifies the type of image copy to be used.  Set this
+       when the Image service repository also uses GPFS so
+       that image files can be transferred efficiently from
+       the Image service to the Block Storage service. There
+       are two valid values: "copy" specifies that a full copy
+       of the image is made; "copy_on_write" specifies that
+       copy-on-write optimization strategy is used and
+       unmodified blocks of the image file are shared
+       efficiently.

copyright

--- 
+++ copyright
@@ -0,0 +1,16 @@
+Format: http://dep.debian.net/deps/dep5/
+
+Files: *
+Copyright: Copyright 2012, Canonical Ltd., All Rights Reserved.
+License: Apache-2.0
+ Licensed under the Apache License, Version 2.0 (the "License"); you may
+ not use this file except in compliance with the License. You may obtain
+ a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ License for the specific language governing permissions and limitations
+ under the License.

hooks/__init__.py

--- 
+++ hooks/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2016 Canonical Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

hooks/charmhelpers/__init__.py

--- 
+++ hooks/charmhelpers/__init__.py
@@ -0,0 +1,36 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Bootstrap charm-helpers, installing its dependencies if necessary using
+# only standard libraries.
+import subprocess
+import sys
+
+try:
+    import six  # flake8: noqa
+except ImportError:
+    if sys.version_info.major == 2:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python-six'])
+    else:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python3-six'])
+    import six  # flake8: noqa
+
+try:
+    import yaml  # flake8: noqa
+except ImportError:
+    if sys.version_info.major == 2:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python-yaml'])
+    else:
+        subprocess.check_call(['apt-get', 'install', '-y', 'python3-yaml'])
+    import yaml  # flake8: noqa

hooks/charmhelpers/cli/__init__.py

  1
--- 
  2
+++ hooks/charmhelpers/cli/__init__.py
  3
@@ -0,0 +1,189 @@
  4
+# Copyright 2014-2015 Canonical Limited.
  5
+#
  6
+# Licensed under the Apache License, Version 2.0 (the "License");
  7
+# you may not use this file except in compliance with the License.
  8
+# You may obtain a copy of the License at
  9
+#
 10
+#  http://www.apache.org/licenses/LICENSE-2.0
 11
+#
 12
+# Unless required by applicable law or agreed to in writing, software
 13
+# distributed under the License is distributed on an "AS IS" BASIS,
 14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 15
+# See the License for the specific language governing permissions and
 16
+# limitations under the License.
 17
+
 18
+import inspect
 19
+import argparse
 20
+import sys
 21
+
 22
+from six.moves import zip
 23
+
 24
+import charmhelpers.core.unitdata
 25
+
 26
+
 27
+class OutputFormatter(object):
 28
+    def __init__(self, outfile=sys.stdout):
 29
+        self.formats = (
 30
+            "raw",
 31
+            "json",
 32
+            "py",
 33
+            "yaml",
 34
+            "csv",
 35
+            "tab",
 36
+        )
 37
+        self.outfile = outfile
 38
+
 39
+    def add_arguments(self, argument_parser):
 40
+        formatgroup = argument_parser.add_mutually_exclusive_group()
 41
+        choices = self.supported_formats
 42
+        formatgroup.add_argument("--format", metavar='FMT',
 43
+                                 help="Select output format for returned data, "
 44
+                                      "where FMT is one of: {}".format(choices),
 45
+                                 choices=choices, default='raw')
 46
+        for fmt in self.formats:
 47
+            fmtfunc = getattr(self, fmt)
 48
+            formatgroup.add_argument("-{}".format(fmt[0]),
 49
+                                     "--{}".format(fmt), action='store_const',
 50
+                                     const=fmt, dest='format',
 51
+                                     help=fmtfunc.__doc__)
 52
+
 53
+    @property
 54
+    def supported_formats(self):
 55
+        return self.formats
 56
+
 57
+    def raw(self, output):
 58
+        """Output data as raw string (default)"""
 59
+        if isinstance(output, (list, tuple)):
 60
+            output = '\n'.join(map(str, output))
 61
+        self.outfile.write(str(output))
 62
+
 63
+    def py(self, output):
 64
+        """Output data as a nicely-formatted python data structure"""
 65
+        import pprint
 66
+        pprint.pprint(output, stream=self.outfile)
 67
+
 68
+    def json(self, output):
 69
+        """Output data in JSON format"""
 70
+        import json
 71
+        json.dump(output, self.outfile)
 72
+
 73
+    def yaml(self, output):
 74
+        """Output data in YAML format"""
 75
+        import yaml
 76
+        yaml.safe_dump(output, self.outfile)
 77
+
 78
+    def csv(self, output):
 79
+        """Output data as excel-compatible CSV"""
 80
+        import csv
 81
+        csvwriter = csv.writer(self.outfile)
 82
+        csvwriter.writerows(output)
 83
+
 84
+    def tab(self, output):
 85
+        """Output data in excel-compatible tab-delimited format"""
 86
+        import csv
 87
+        csvwriter = csv.writer(self.outfile, dialect=csv.excel_tab)
 88
+        csvwriter.writerows(output)
 89
+
 90
+    def format_output(self, output, fmt='raw'):
 91
+        fmtfunc = getattr(self, fmt)
 92
+        fmtfunc(output)
 93
+
 94
+
 95
+class CommandLine(object):
 96
+    argument_parser = None
 97
+    subparsers = None
 98
+    formatter = None
 99
+    exit_code = 0
100
+
101
+    def __init__(self):
102
+        if not self.argument_parser:
103
+            self.argument_parser = argparse.ArgumentParser(description='Perform common charm tasks')
104
+        if not self.formatter:
105
+            self.formatter = OutputFormatter()
106
+            self.formatter.add_arguments(self.argument_parser)
107
+        if not self.subparsers:
108
+            self.subparsers = self.argument_parser.add_subparsers(help='Commands')
109
+
110
+    def subcommand(self, command_name=None):
111
+        """
112
+        Decorate a function as a subcommand. Use its arguments as the
113
+        command-line arguments"""
114
+        def wrapper(decorated):
115
+            cmd_name = command_name or decorated.__name__
116
+            subparser = self.subparsers.add_parser(cmd_name,
117
+                                                   description=decorated.__doc__)
118
+            for args, kwargs in describe_arguments(decorated):
119
+                subparser.add_argument(*args, **kwargs)
120
+            subparser.set_defaults(func=decorated)
121
+            return decorated
122
+        return wrapper
123
+
124
+    def test_command(self, decorated):
125
+        """
126
+        Subcommand is a boolean test function, so bool return values should be
127
+        converted to a 0/1 exit code.
128
+        """
129
+        decorated._cli_test_command = True
130
+        return decorated
131
+
132
+    def no_output(self, decorated):
133
+        """
134
+        Subcommand is not expected to return a value, so don't print a spurious None.
135
+        """
136
+        decorated._cli_no_output = True
137
+        return decorated
138
+
139
+    def subcommand_builder(self, command_name, description=None):
140
+        """
141
+        Decorate a function that builds a subcommand. Builders should accept a
142
+        single argument (the subparser instance) and return the function to be
143
+        run as the command."""
144
+        def wrapper(decorated):
+            subparser = self.subparsers.add_parser(command_name)
+            func = decorated(subparser)
+            subparser.set_defaults(func=func)
+            subparser.description = description or func.__doc__
+        return wrapper
+
+    def run(self):
+        "Run cli, processing arguments and executing subcommands."
+        arguments = self.argument_parser.parse_args()
+        argspec = inspect.getargspec(arguments.func)
+        vargs = []
+        for arg in argspec.args:
+            vargs.append(getattr(arguments, arg))
+        if argspec.varargs:
+            vargs.extend(getattr(arguments, argspec.varargs))
+        output = arguments.func(*vargs)
+        if getattr(arguments.func, '_cli_test_command', False):
+            self.exit_code = 0 if output else 1
+            output = ''
+        if getattr(arguments.func, '_cli_no_output', False):
+            output = ''
+        self.formatter.format_output(output, arguments.format)
+        if charmhelpers.core.unitdata._KV:
+            charmhelpers.core.unitdata._KV.flush()
+
+
+cmdline = CommandLine()
+
+
+def describe_arguments(func):
+    """
+    Analyze a function's signature and return a data structure suitable for
+    passing in as arguments to an argparse parser's add_argument() method."""
+
+    argspec = inspect.getargspec(func)
+    # we should probably raise an exception somewhere if func includes **kwargs
+    if argspec.defaults:
+        positional_args = argspec.args[:-len(argspec.defaults)]
+        keyword_names = argspec.args[-len(argspec.defaults):]
+        for arg, default in zip(keyword_names, argspec.defaults):
+            yield ('--{}'.format(arg),), {'default': default}
+    else:
+        positional_args = argspec.args
+
+    for arg in positional_args:
+        yield (arg,), {}
+    if argspec.varargs:
+        yield (argspec.varargs,), {'nargs': '*'}
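The `describe_arguments` helper above maps a Python function signature onto argparse `add_argument()` specs: keyword arguments become `--flags` carrying their defaults, positionals map straight through, and `*varargs` become `nargs='*'`. A minimal standalone sketch of the same mapping, using `inspect.getfullargspec` (the `getargspec` call used in the diff was removed in newer Python 3 releases); the `deploy` function is a made-up example, not part of charmhelpers:

```python
import inspect


def describe_args(func):
    """Sketch of charmhelpers' describe_arguments(), ported to
    inspect.getfullargspec so it runs on modern Python 3."""
    spec = inspect.getfullargspec(func)
    if spec.defaults:
        positional = spec.args[:-len(spec.defaults)]
        keywords = spec.args[-len(spec.defaults):]
        # keyword arguments become optional --flags carrying their default
        for arg, default in zip(keywords, spec.defaults):
            yield ('--{}'.format(arg),), {'default': default}
    else:
        positional = spec.args
    # plain arguments map straight to required argparse positionals
    for arg in positional:
        yield (arg,), {}
    # *varargs collect any remaining values via nargs='*'
    if spec.varargs:
        yield (spec.varargs,), {'nargs': '*'}


def deploy(service, count=1, *extras):
    pass


specs = list(describe_args(deploy))
print(specs)
```

Feeding each yielded `(args, kwargs)` pair into `parser.add_argument(*args, **kwargs)` reproduces the CLI surface of the wrapped function.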

hooks/charmhelpers/cli/benchmark.py

--- 
+++ hooks/charmhelpers/cli/benchmark.py
@@ -0,0 +1,34 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import cmdline
+from charmhelpers.contrib.benchmark import Benchmark
+
+
+@cmdline.subcommand(command_name='benchmark-start')
+def start():
+    Benchmark.start()
+
+
+@cmdline.subcommand(command_name='benchmark-finish')
+def finish():
+    Benchmark.finish()
+
+
+@cmdline.subcommand_builder('benchmark-composite', description="Set the benchmark composite score")
+def service(subparser):
+    subparser.add_argument("value", help="The composite score.")
+    subparser.add_argument("units", help="The units the composite score represents, i.e., 'reads/sec'.")
+    subparser.add_argument("direction", help="'asc' if a lower score is better, 'desc' if a higher score is better.")
+    return Benchmark.set_composite_score

hooks/charmhelpers/cli/commands.py

--- 
+++ hooks/charmhelpers/cli/commands.py
@@ -0,0 +1,30 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+This module loads sub-modules into the python runtime so they can be
+discovered via the inspect module. In order to prevent flake8 from (rightfully)
+telling us these are unused modules, throw a ' # noqa' at the end of each import
+so that the warning is suppressed.
+"""
+
+from . import CommandLine  # noqa
+
+"""
+Import the sub-modules which have decorated subcommands to register with chlp.
+"""
+from . import host  # noqa
+from . import benchmark  # noqa
+from . import unitdata  # noqa
+from . import hookenv  # noqa

hooks/charmhelpers/cli/hookenv.py

--- 
+++ hooks/charmhelpers/cli/hookenv.py
@@ -0,0 +1,21 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import cmdline
+from charmhelpers.core import hookenv
+
+
+cmdline.subcommand('relation-id')(hookenv.relation_id._wrapped)
+cmdline.subcommand('service-name')(hookenv.service_name)
+cmdline.subcommand('remote-service-name')(hookenv.remote_service_name._wrapped)

hooks/charmhelpers/cli/host.py

--- 
+++ hooks/charmhelpers/cli/host.py
@@ -0,0 +1,29 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import cmdline
+from charmhelpers.core import host
+
+
+@cmdline.subcommand()
+def mounts():
+    "List mounts"
+    return host.mounts()
+
+
+@cmdline.subcommand_builder('service', description="Control system services")
+def service(subparser):
+    subparser.add_argument("action", help="The action to perform (start, stop, etc...)")
+    subparser.add_argument("service_name", help="Name of the service to control")
+    return host.service

hooks/charmhelpers/cli/unitdata.py

--- 
+++ hooks/charmhelpers/cli/unitdata.py
@@ -0,0 +1,37 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from . import cmdline
+from charmhelpers.core import unitdata
+
+
+@cmdline.subcommand_builder('unitdata', description="Store and retrieve data")
+def unitdata_cmd(subparser):
+    nested = subparser.add_subparsers()
+    get_cmd = nested.add_parser('get', help='Retrieve data')
+    get_cmd.add_argument('key', help='Key to retrieve the value of')
+    get_cmd.set_defaults(action='get', value=None)
+    set_cmd = nested.add_parser('set', help='Store data')
+    set_cmd.add_argument('key', help='Key to set')
+    set_cmd.add_argument('value', help='Value to store')
+    set_cmd.set_defaults(action='set')
+
+    def _unitdata_cmd(action, key, value):
+        if action == 'get':
+            return unitdata.kv().get(key)
+        elif action == 'set':
+            unitdata.kv().set(key, value)
+            unitdata.kv().flush()
+            return ''
+    return _unitdata_cmd
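The `unitdata` subcommand above nests a second level of argparse subparsers (`get`/`set`) inside the top-level command, using `set_defaults` to record which action was chosen. A minimal self-contained sketch of that pattern, with a plain dict standing in for `unitdata.kv()` (the real helper persists to SQLite) and made-up key/value data:

```python
import argparse

store = {}  # stand-in for unitdata.kv()

parser = argparse.ArgumentParser(prog='chlp')
top = parser.add_subparsers(dest='command')
unitdata_p = top.add_parser('unitdata')
nested = unitdata_p.add_subparsers(dest='subcommand')

# 'get' takes a key; set_defaults records the action and a dummy value
get_cmd = nested.add_parser('get', help='Retrieve data')
get_cmd.add_argument('key')
get_cmd.set_defaults(action='get', value=None)

# 'set' takes key and value
set_cmd = nested.add_parser('set', help='Store data')
set_cmd.add_argument('key')
set_cmd.add_argument('value')
set_cmd.set_defaults(action='set')


def run(argv):
    args = parser.parse_args(argv)
    if args.action == 'set':
        store[args.key] = args.value
        return ''
    return store.get(args.key)


run(['unitdata', 'set', 'pool', 'spectrumscale'])
result = run(['unitdata', 'get', 'pool'])
print(result)
```

Because both subparsers funnel into one handler keyed on `args.action`, a single function can serve the whole command family, mirroring `_unitdata_cmd` above.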

hooks/charmhelpers/contrib/__init__.py

--- 
+++ hooks/charmhelpers/contrib/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

hooks/charmhelpers/contrib/hahelpers/__init__.py

--- 
+++ hooks/charmhelpers/contrib/hahelpers/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

hooks/charmhelpers/contrib/hahelpers/apache.py

--- 
+++ hooks/charmhelpers/contrib/hahelpers/apache.py
@@ -0,0 +1,95 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Copyright 2012 Canonical Ltd.
+#
+# This file is sourced from lp:openstack-charm-helpers
+#
+# Authors:
+#  James Page <james.page@ubuntu.com>
+#  Adam Gandelman <adamg@ubuntu.com>
+#
+
+import os
+import subprocess
+
+from charmhelpers.core.hookenv import (
+    config as config_get,
+    relation_get,
+    relation_ids,
+    related_units as relation_list,
+    log,
+    INFO,
+)
+
+
+def get_cert(cn=None):
+    # TODO: deal with multiple https endpoints via charm config
+    cert = config_get('ssl_cert')
+    key = config_get('ssl_key')
+    if not (cert and key):
+        log("Inspecting identity-service relations for SSL certificate.",
+            level=INFO)
+        cert = key = None
+        if cn:
+            ssl_cert_attr = 'ssl_cert_{}'.format(cn)
+            ssl_key_attr = 'ssl_key_{}'.format(cn)
+        else:
+            ssl_cert_attr = 'ssl_cert'
+            ssl_key_attr = 'ssl_key'
+        for r_id in relation_ids('identity-service'):
+            for unit in relation_list(r_id):
+                if not cert:
+                    cert = relation_get(ssl_cert_attr,
+                                        rid=r_id, unit=unit)
+                if not key:
+                    key = relation_get(ssl_key_attr,
+                                       rid=r_id, unit=unit)
+    return (cert, key)
+
+
+def get_ca_cert():
+    ca_cert = config_get('ssl_ca')
+    if ca_cert is None:
+        log("Inspecting identity-service relations for CA SSL certificate.",
+            level=INFO)
+        for r_id in relation_ids('identity-service'):
+            for unit in relation_list(r_id):
+                if ca_cert is None:
+                    ca_cert = relation_get('ca_cert',
+                                           rid=r_id, unit=unit)
+    return ca_cert
+
+
+def retrieve_ca_cert(cert_file):
+    cert = None
+    if os.path.isfile(cert_file):
+        with open(cert_file, 'r') as crt:
+            cert = crt.read()
+    return cert
+
+
+def install_ca_cert(ca_cert):
+    if ca_cert:
+        cert_file = ('/usr/local/share/ca-certificates/'
+                     'keystone_juju_ca_cert.crt')
+        old_cert = retrieve_ca_cert(cert_file)
+        if old_cert and old_cert == ca_cert:
+            log("CA cert is the same as installed version", level=INFO)
+        else:
+            log("Installing new CA cert", level=INFO)
+            with open(cert_file, 'w') as crt:
+                crt.write(ca_cert)
+            subprocess.check_call(['update-ca-certificates', '--fresh'])
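`install_ca_cert` above is a read-compare-write pattern: it only rewrites the certificate file and re-runs the expensive `update-ca-certificates --fresh` when the content actually changed, which is what keeps the hook idempotent. A standalone sketch of the same pattern, with the file path made a parameter and a `refresh` callback standing in for the subprocess call (both are illustrative names, not the charmhelpers API):

```python
import os
import tempfile


def install_cert(ca_cert, cert_file, refresh=lambda: None):
    """Write ca_cert to cert_file only when it differs from what is
    already installed; return True when a write (and refresh) happened."""
    old_cert = None
    if os.path.isfile(cert_file):
        with open(cert_file, 'r') as crt:
            old_cert = crt.read()
    if old_cert == ca_cert:
        return False  # unchanged: skip the write and the refresh
    with open(cert_file, 'w') as crt:
        crt.write(ca_cert)
    refresh()  # stands in for update-ca-certificates --fresh
    return True


path = os.path.join(tempfile.mkdtemp(), 'ca.crt')
first = install_cert('PEM DATA', path)   # writes the file
second = install_cert('PEM DATA', path)  # no-op: same content
print(first, second)
```

Running the function twice with identical input performs the system-level refresh only once, which is exactly the behavior a re-executed Juju hook relies on.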

hooks/charmhelpers/contrib/hahelpers/cluster.py

--- 
+++ hooks/charmhelpers/contrib/hahelpers/cluster.py
@@ -0,0 +1,363 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Copyright 2012 Canonical Ltd.
+#
+# Authors:
+#  James Page <james.page@ubuntu.com>
+#  Adam Gandelman <adamg@ubuntu.com>
+#
+
+"""
+Helpers for clustering and determining "cluster leadership" and other
+clustering-related helpers.
+"""
+
+import subprocess
+import os
+
+from socket import gethostname as get_unit_hostname
+
+import six
+
+from charmhelpers.core.hookenv import (
+    log,
+    relation_ids,
+    related_units as relation_list,
+    relation_get,
+    config as config_get,
+    INFO,
+    DEBUG,
+    WARNING,
+    unit_get,
+    is_leader as juju_is_leader,
+    status_set,
+)
+from charmhelpers.core.decorators import (
+    retry_on_exception,
+)
+from charmhelpers.core.strutils import (
+    bool_from_string,
+)
+
+DC_RESOURCE_NAME = 'DC'
+
+
+class HAIncompleteConfig(Exception):
+    pass
+
+
+class HAIncorrectConfig(Exception):
+    pass
+
+
+class CRMResourceNotFound(Exception):
+    pass
+
+
+class CRMDCNotFound(Exception):
+    pass
+
+
+def is_elected_leader(resource):
+    """
+    Returns True if the charm executing this is the elected cluster leader.
+
+    It relies on the following mechanisms, in order, to determine leadership:
+        1. If juju is sufficiently new and leadership election is supported,
+        the is_leader command will be used.
+        2. If the charm is part of a corosync cluster, call corosync to
+        determine leadership.
+        3. If the charm is not part of a corosync cluster, the leader is
+        determined as being "the alive unit with the lowest unit number". In
+        other words, the oldest surviving unit.
+    """
+    try:
+        return juju_is_leader()
+    except NotImplementedError:
+        log('Juju leadership election feature not enabled'
+            ', using fallback support',
+            level=WARNING)
+
+    if is_clustered():
+        if not is_crm_leader(resource):
+            log('Deferring action to CRM leader.', level=INFO)
+            return False
+    else:
+        peers = peer_units()
+        if peers and not oldest_peer(peers):
+            log('Deferring action to oldest service unit.', level=INFO)
+            return False
+    return True
+
+
+def is_clustered():
+    for r_id in (relation_ids('ha') or []):
+        for unit in (relation_list(r_id) or []):
+            clustered = relation_get('clustered',
+                                     rid=r_id,
+                                     unit=unit)
+            if clustered:
+                return True
+    return False
+
+
+def is_crm_dc():
+    """
+    Determine leadership by querying the pacemaker Designated Controller
+    """
+    cmd = ['crm', 'status']
+    try:
+        status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
+        if not isinstance(status, six.text_type):
+            status = six.text_type(status, "utf-8")
+    except subprocess.CalledProcessError as ex:
+        raise CRMDCNotFound(str(ex))
+
+    current_dc = ''
+    for line in status.split('\n'):
+        if line.startswith('Current DC'):
+            # Current DC: juju-lytrusty-machine-2 (168108163) - partition with quorum
+            current_dc = line.split(':')[1].split()[0]
+    if current_dc == get_unit_hostname():
+        return True
+    elif current_dc == 'NONE':
+        raise CRMDCNotFound('Current DC: NONE')
+
+    return False
+
+
+@retry_on_exception(5, base_delay=2,
+                    exc_type=(CRMResourceNotFound, CRMDCNotFound))
+def is_crm_leader(resource, retry=False):
+    """
+    Returns True if the charm calling this is the elected corosync leader,
+    as returned by calling the external "crm" command.
+
+    We allow this operation to be retried to avoid the possibility of getting a
+    false negative. See LP #1396246 for more info.
+    """
+    if resource == DC_RESOURCE_NAME:
+        return is_crm_dc()
+    cmd = ['crm', 'resource', 'show', resource]
+    try:
+        status = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
+        if not isinstance(status, six.text_type):
+            status = six.text_type(status, "utf-8")
+    except subprocess.CalledProcessError:
+        status = None
+
+    if status and get_unit_hostname() in status:
+        return True
+
+    if status and "resource %s is NOT running" % (resource) in status:
+        raise CRMResourceNotFound("CRM resource %s not found" % (resource))
+
+    return False
+
+
+def is_leader(resource):
+    log("is_leader is deprecated. Please consider using is_crm_leader "
+        "instead.", level=WARNING)
+    return is_crm_leader(resource)
+
+
+def peer_units(peer_relation="cluster"):
+    peers = []
+    for r_id in (relation_ids(peer_relation) or []):
+        for unit in (relation_list(r_id) or []):
+            peers.append(unit)
+    return peers
+
+
+def peer_ips(peer_relation='cluster', addr_key='private-address'):
+    '''Return a dict of peers and their private-address'''
+    peers = {}
+    for r_id in relation_ids(peer_relation):
+        for unit in relation_list(r_id):
+            peers[unit] = relation_get(addr_key, rid=r_id, unit=unit)
+    return peers
+
+
+def oldest_peer(peers):
+    """Determines who the oldest peer is by comparing unit numbers."""
+    local_unit_no = int(os.getenv('JUJU_UNIT_NAME').split('/')[1])
+    for peer in peers:
+        remote_unit_no = int(peer.split('/')[1])
+        if remote_unit_no < local_unit_no:
+            return False
+    return True
+
+
+def eligible_leader(resource):
+    log("eligible_leader is deprecated. Please consider using "
+        "is_elected_leader instead.", level=WARNING)
+    return is_elected_leader(resource)
+
+
+def https():
+    '''
+    Determines whether enough data has been provided in configuration
+    or relation data to configure HTTPS.
+
+    returns: boolean
+    '''
+    use_https = config_get('use-https')
+    if use_https and bool_from_string(use_https):
+        return True
+    if config_get('ssl_cert') and config_get('ssl_key'):
+        return True
+    for r_id in relation_ids('identity-service'):
+        for unit in relation_list(r_id):
+            # TODO - needs fixing for new helper as ssl_cert/key suffixes with CN
+            rel_state = [
+                relation_get('https_keystone', rid=r_id, unit=unit),
+                relation_get('ca_cert', rid=r_id, unit=unit),
+            ]
+            # NOTE: works around (LP: #1203241)
+            if (None not in rel_state) and ('' not in rel_state):
+                return True
+    return False
+
+
+def determine_api_port(public_port, singlenode_mode=False):
+    '''
+    Determine correct API server listening port based on
+    existence of HTTPS reverse proxy and/or haproxy.
+
+    public_port: int: standard public port for given service
+
+    singlenode_mode: boolean: Shuffle ports when only a single unit is present
+
+    returns: int: the correct listening port for the API service
+    '''
+    i = 0
+    if singlenode_mode:
+        i += 1
+    elif len(peer_units()) > 0 or is_clustered():
+        i += 1
+    if https():
+        i += 1
+    return public_port - (i * 10)
+
+
+def determine_apache_port(public_port, singlenode_mode=False):
+    '''
+    Description: Determine correct apache listening port based on public IP +
+    state of the cluster.
+
+    public_port: int: standard public port for given service
+
+    singlenode_mode: boolean: Shuffle ports when only a single unit is present
+
+    returns: int: the correct listening port for the HAProxy service
+    '''
+    i = 0
+    if singlenode_mode:
+        i += 1
+    elif len(peer_units()) > 0 or is_clustered():
+        i += 1
+    return public_port - (i * 10)
+
+
+def get_hacluster_config(exclude_keys=None):
+    '''
+    Obtains all relevant configuration from charm configuration required
+    for initiating a relation to hacluster:
+
+        ha-bindiface, ha-mcastport, vip, os-internal-hostname,
+        os-admin-hostname, os-public-hostname, os-access-hostname
+
+    param: exclude_keys: list of setting key(s) to be excluded.
+    returns: dict: A dict containing settings keyed by setting name.
+    raises: HAIncorrectConfig if settings are missing or incorrect.
+    '''
+    settings = ['ha-bindiface', 'ha-mcastport', 'vip', 'os-internal-hostname',
+                'os-admin-hostname', 'os-public-hostname', 'os-access-hostname']
+    conf = {}
+    for setting in settings:
+        if exclude_keys and setting in exclude_keys:
+            continue
+
+        conf[setting] = config_get(setting)
+
+    if not valid_hacluster_config():
+        raise HAIncorrectConfig('Insufficient or incorrect config data to '
+                                'configure hacluster.')
+    return conf
+
+
+def valid_hacluster_config():
+    '''
+    Check that either vip or dns-ha is set. If dns-ha then one of os-*-hostname
+    must be set.
+
+    Note: ha-bindiface and ha-mcastport both have defaults and will always
+    be set. We only care that either vip or dns-ha is set.
+
+    :returns: boolean: valid config returns true.
+    raises: HAIncorrectConfig if settings conflict.
+    raises: HAIncompleteConfig if settings are missing.
+    '''
+    vip = config_get('vip')
+    dns = config_get('dns-ha')
+    if not(bool(vip) ^ bool(dns)):
+        msg = ('HA: Either vip or dns-ha must be set but not both in order to '
+               'use high availability')
+        status_set('blocked', msg)
+        raise HAIncorrectConfig(msg)
+
+    # If dns-ha then one of os-*-hostname must be set
+    if dns:
+        dns_settings = ['os-internal-hostname', 'os-admin-hostname',
+                        'os-public-hostname', 'os-access-hostname']
+        # At this point it is unknown if one or all of the possible
+        # network spaces are in HA. Validate at least one is set which is
+        # the minimum required.
+        for setting in dns_settings:
+            if config_get(setting):
+                log('DNS HA: At least one hostname is set {}: {}'
+                    ''.format(setting, config_get(setting)),
+                    level=DEBUG)
+                return True
+
+        msg = ('DNS HA: At least one os-*-hostname(s) must be set to use '
+               'DNS HA')
+        status_set('blocked', msg)
+        raise HAIncompleteConfig(msg)
+
+    log('VIP HA: VIP is set {}'.format(vip), level=DEBUG)
+    return True
+
+
+def canonical_url(configs, vip_setting='vip'):
+    '''
+    Returns the correct HTTP URL to this host given the state of HTTPS
+    configuration and hacluster.
+
+    :configs    : OSTemplateRenderer: A config templating object to inspect for
+                                      a complete https context.
+
+    :vip_setting:                str: Setting in charm config that specifies
+                                      VIP address.
+    '''
+    scheme = 'http'
+    if 'https' in configs.complete_contexts():
+        scheme = 'https'
+    if is_clustered():
+        addr = config_get(vip_setting)
+    else:
+        addr = unit_get('private-address')
+    return '%s://%s' % (scheme, addr)
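The port helpers in cluster.py encode a simple convention: each frontend layered in front of the API (haproxy when the service is clustered, apache when TLS is terminated) claims the public port, and the backend steps its listen port down by 10 per layer. A standalone sketch of that arithmetic, with `clustered` and `https` passed explicitly since the real helpers read relation and config state; Cinder's public API port 8776 is used as the worked example:

```python
def determine_api_port(public_port, clustered=False, https=False):
    """Step the backend listen port down by 10 for each frontend in
    play: haproxy when clustered, apache when TLS is terminated."""
    i = 0
    if clustered:
        i += 1  # haproxy sits on the public port
    if https:
        i += 1  # apache terminates TLS in front of haproxy
    return public_port - (i * 10)


# Cinder's public API port is 8776:
ports = [determine_api_port(8776),                              # bare service
         determine_api_port(8776, clustered=True),              # + haproxy
         determine_api_port(8776, clustered=True, https=True)]  # + apache
print(ports)  # [8776, 8766, 8756]
```

The fixed -10 offsets mean each layer knows where to find the one behind it without any extra coordination, which is why the same formula appears in both `determine_api_port` and `determine_apache_port` above.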

hooks/charmhelpers/contrib/network/__init__.py

--- 
+++ hooks/charmhelpers/contrib/network/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

hooks/charmhelpers/contrib/network/ip.py

--- 
+++ hooks/charmhelpers/contrib/network/ip.py
@@ -0,0 +1,497 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import glob
+import re
+import subprocess
+import six
+import socket
+
+from functools import partial
+
+from charmhelpers.core.hookenv import unit_get
+from charmhelpers.fetch import apt_install, apt_update
+from charmhelpers.core.hookenv import (
+    log,
+    WARNING,
+)
+
+try:
+    import netifaces
+except ImportError:
+    apt_update(fatal=True)
+    apt_install('python-netifaces', fatal=True)
+    import netifaces
+
+try:
+    import netaddr
+except ImportError:
+    apt_update(fatal=True)
+    apt_install('python-netaddr', fatal=True)
+    import netaddr
+
+
+def _validate_cidr(network):
+    try:
+        netaddr.IPNetwork(network)
+    except (netaddr.core.AddrFormatError, ValueError):
+        raise ValueError("Network (%s) is not in CIDR presentation format" %
+                         network)
+
+
+def no_ip_found_error_out(network):
+    errmsg = ("No IP address found in network(s): %s" % network)
+    raise ValueError(errmsg)
+
+
+def get_address_in_network(network, fallback=None, fatal=False):
+    """Get an IPv4 or IPv6 address within the network from the host.
+
+    :param network (str): CIDR presentation format. For example,
+        '192.168.1.0/24'. Supports multiple networks as a space-delimited list.
+    :param fallback (str): If no address is found, return fallback.
+    :param fatal (boolean): If no address is found, fallback is not
+        set and fatal is True then exit(1).
+    """
+    if network is None:
+        if fallback is not None:
+            return fallback
+
+        if fatal:
+            no_ip_found_error_out(network)
+        else:
+            return None
+
+    networks = network.split() or [network]
+    for network in networks:
+        _validate_cidr(network)
+        network = netaddr.IPNetwork(network)
+        for iface in netifaces.interfaces():
+            addresses = netifaces.ifaddresses(iface)
+            if network.version == 4 and netifaces.AF_INET in addresses:
+                addr = addresses[netifaces.AF_INET][0]['addr']
+                netmask = addresses[netifaces.AF_INET][0]['netmask']
+                cidr = netaddr.IPNetwork("%s/%s" % (addr, netmask))
+                if cidr in network:
+                    return str(cidr.ip)
+
+            if network.version == 6 and netifaces.AF_INET6 in addresses:
+                for addr in addresses[netifaces.AF_INET6]:
+                    if not addr['addr'].startswith('fe80'):
+                        cidr = netaddr.IPNetwork("%s/%s" % (addr['addr'],
+                                                            addr['netmask']))
+                        if cidr in network:
+                            return str(cidr.ip)
+
+    if fallback is not None:
+        return fallback
+
+    if fatal:
+        no_ip_found_error_out(network)
+
+    return None
+
+
+def is_ipv6(address):
+    """Determine whether provided address is IPv6 or not."""
+    try:
+        address = netaddr.IPAddress(address)
+    except netaddr.AddrFormatError:
+        # probably a hostname - so not an address at all!
+        return False
+
+    return address.version == 6
+
+
+def is_address_in_network(network, address):
+    """
+    Determine whether the provided address is within a network range.
+
+    :param network (str): CIDR presentation format. For example,
+        '192.168.1.0/24'.
+    :param address: An individual IPv4 or IPv6 address without a net
+        mask or subnet prefix. For example, '192.168.1.1'.
+    :returns boolean: Flag indicating whether address is in network.
+    """
+    try:
+        network = netaddr.IPNetwork(network)
+    except (netaddr.core.AddrFormatError, ValueError):
+        raise ValueError("Network (%s) is not in CIDR presentation format" %
+                         network)
+
+    try:
+        address = netaddr.IPAddress(address)
+    except (netaddr.core.AddrFormatError, ValueError):
+        raise ValueError("Address (%s) is not in correct presentation format" %
+                         address)
+
+    if address in network:
+        return True
+    else:
+        return False
+
+
+def _get_for_address(address, key):
+    """Retrieve an attribute of or the physical interface that
+    the IP address provided could be bound to.
+
+    :param address (str): An individual IPv4 or IPv6 address without a net
+        mask or subnet prefix. For example, '192.168.1.1'.
+    :param key: 'iface' for the physical interface name or an attribute
+        of the configured interface, for example 'netmask'.
+    :returns str: Requested attribute or None if address is not bindable.
+    """
+    address = netaddr.IPAddress(address)
+    for iface in netifaces.interfaces():
+        addresses = netifaces.ifaddresses(iface)
+        if address.version == 4 and netifaces.AF_INET in addresses:
+            addr = addresses[netifaces.AF_INET][0]['addr']
+            netmask = addresses[netifaces.AF_INET][0]['netmask']
+            network = netaddr.IPNetwork("%s/%s" % (addr, netmask))
+            cidr = network.cidr
+            if address in cidr:
+                if key == 'iface':
+                    return iface
+                else:
+                    return addresses[netifaces.AF_INET][0][key]
+
+        if address.version == 6 and netifaces.AF_INET6 in addresses:
+            for addr in addresses[netifaces.AF_INET6]:
+                if not addr['addr'].startswith('fe80'):
+                    network = netaddr.IPNetwork("%s/%s" % (addr['addr'],
+                                                           addr['netmask']))
+                    cidr = network.cidr
+                    if address in cidr:
+                        if key == 'iface':
+                            return iface
+                        elif key == 'netmask' and cidr:
+                            return str(cidr).split('/')[1]
+                        else:
+                            return addr[key]
+
+    return None
+
+
+get_iface_for_address = partial(_get_for_address, key='iface')
+
+
+get_netmask_for_address = partial(_get_for_address, key='netmask')
+
+
+def resolve_network_cidr(ip_address):
+    '''
+    Resolves the full address cidr of an ip_address based on
+    configured network interfaces
+    '''
+    netmask = get_netmask_for_address(ip_address)
201
+    return str(netaddr.IPNetwork("%s/%s" % (ip_address, netmask)).cidr)
202
+
203
+
204
+def format_ipv6_addr(address):
205
+    """If address is IPv6, wrap it in '[]' otherwise return None.
206
+
207
+    This is required by most configuration files when specifying IPv6
208
+    addresses.
209
+    """
210
+    if is_ipv6(address):
211
+        return "[%s]" % address
212
+
213
+    return None
214
+
215
+
216
+def get_iface_addr(iface='eth0', inet_type='AF_INET', inc_aliases=False,
217
+                   fatal=True, exc_list=None):
218
+    """Return the assigned IP address for a given interface, if any.
219
+
220
+    :param iface: network interface on which address(es) are expected to
221
+                  be found.
222
+    :param inet_type: inet address family
223
+    :param inc_aliases: include alias interfaces in search
224
+    :param fatal: if True, raise exception if address not found
225
+    :param exc_list: list of addresses to ignore
226
+    :return: list of ip addresses
227
+    """
228
+    # Extract nic if passed /dev/ethX
229
+    if '/' in iface:
230
+        iface = iface.split('/')[-1]
231
+
232
+    if not exc_list:
233
+        exc_list = []
234
+
235
+    try:
236
+        inet_num = getattr(netifaces, inet_type)
237
+    except AttributeError:
238
+        raise Exception("Unknown inet type '%s'" % str(inet_type))
239
+
240
+    interfaces = netifaces.interfaces()
241
+    if inc_aliases:
242
+        ifaces = []
243
+        for _iface in interfaces:
244
+            if iface == _iface or _iface.split(':')[0] == iface:
245
+                ifaces.append(_iface)
246
+
247
+        if fatal and not ifaces:
248
+            raise Exception("Invalid interface '%s'" % iface)
249
+
250
+        ifaces.sort()
251
+    else:
252
+        if iface not in interfaces:
253
+            if fatal:
254
+                raise Exception("Interface '%s' not found " % (iface))
255
+            else:
256
+                return []
257
+
258
+        else:
259
+            ifaces = [iface]
260
+
261
+    addresses = []
262
+    for netiface in ifaces:
263
+        net_info = netifaces.ifaddresses(netiface)
264
+        if inet_num in net_info:
265
+            for entry in net_info[inet_num]:
266
+                if 'addr' in entry and entry['addr'] not in exc_list:
267
+                    addresses.append(entry['addr'])
268
+
269
+    if fatal and not addresses:
270
+        raise Exception("Interface '%s' doesn't have any %s addresses." %
271
+                        (iface, inet_type))
272
+
273
+    return sorted(addresses)
274
+
275
+
276
+get_ipv4_addr = partial(get_iface_addr, inet_type='AF_INET')
277
+
278
+
279
+def get_iface_from_addr(addr):
280
+    """Work out on which interface the provided address is configured."""
281
+    for iface in netifaces.interfaces():
282
+        addresses = netifaces.ifaddresses(iface)
283
+        for inet_type in addresses:
284
+            for _addr in addresses[inet_type]:
285
+                _addr = _addr['addr']
286
+                # link local
287
+                ll_key = re.compile("(.+)%.*")
288
+                raw = re.match(ll_key, _addr)
289
+                if raw:
290
+                    _addr = raw.group(1)
291
+
292
+                if _addr == addr:
293
+                    log("Address '%s' is configured on iface '%s'" %
294
+                        (addr, iface))
295
+                    return iface
296
+
297
+    msg = "Unable to infer net iface on which '%s' is configured" % (addr)
298
+    raise Exception(msg)
299
+
300
+
301
+def sniff_iface(f):
302
+    """Ensure decorated function is called with a value for iface.
303
+
304
+    If no iface provided, inject net iface inferred from unit private address.
305
+    """
306
+    def iface_sniffer(*args, **kwargs):
307
+        if not kwargs.get('iface', None):
308
+            kwargs['iface'] = get_iface_from_addr(unit_get('private-address'))
309
+
310
+        return f(*args, **kwargs)
311
+
312
+    return iface_sniffer
313
+
314
+
315
+@sniff_iface
316
+def get_ipv6_addr(iface=None, inc_aliases=False, fatal=True, exc_list=None,
317
+                  dynamic_only=True):
318
+    """Get assigned IPv6 address for a given interface.
319
+
320
+    Returns list of addresses found. If no address found, returns empty list.
321
+
322
+    If iface is None, we infer the current primary interface by doing a reverse
323
+    lookup on the unit private-address.
324
+
325
+    We currently only support scope global IPv6 addresses i.e. non-temporary
326
+    addresses. If no global IPv6 address is found, return the first one found
327
+    in the ipv6 address list.
328
+
329
+    :param iface: network interface on which ipv6 address(es) are expected to
330
+                  be found.
331
+    :param inc_aliases: include alias interfaces in search
332
+    :param fatal: if True, raise exception if address not found
333
+    :param exc_list: list of addresses to ignore
334
+    :param dynamic_only: only recognise dynamic addresses
335
+    :return: list of ipv6 addresses
336
+    """
337
+    addresses = get_iface_addr(iface=iface, inet_type='AF_INET6',
338
+                               inc_aliases=inc_aliases, fatal=fatal,
339
+                               exc_list=exc_list)
340
+
341
+    if addresses:
342
+        global_addrs = []
343
+        for addr in addresses:
344
+            key_scope_link_local = re.compile("^fe80::..(.+)%(.+)")
345
+            m = re.match(key_scope_link_local, addr)
346
+            if m:
347
+                eui_64_mac = m.group(1)
348
+                iface = m.group(2)
349
+            else:
350
+                global_addrs.append(addr)
351
+
352
+        if global_addrs:
353
+            # Make sure any found global addresses are not temporary
354
+            cmd = ['ip', 'addr', 'show', iface]
355
+            out = subprocess.check_output(cmd).decode('UTF-8')
356
+            if dynamic_only:
357
+                key = re.compile("inet6 (.+)/[0-9]+ scope global.* dynamic.*")
358
+            else:
359
+                key = re.compile("inet6 (.+)/[0-9]+ scope global.*")
360
+
361
+            addrs = []
362
+            for line in out.split('\n'):
363
+                line = line.strip()
364
+                m = re.match(key, line)
365
+                if m and 'temporary' not in line:
366
+                    # Return the first valid address we find
367
+                    for addr in global_addrs:
368
+                        if m.group(1) == addr:
369
+                            if not dynamic_only or \
370
+                                    m.group(1).endswith(eui_64_mac):
371
+                                addrs.append(addr)
372
+
373
+            if addrs:
374
+                return addrs
375
+
376
+    if fatal:
377
+        raise Exception("Interface '%s' does not have a scope global "
378
+                        "non-temporary ipv6 address." % iface)
379
+
380
+    return []
381
+
382
+
383
+def get_bridges(vnic_dir='/sys/devices/virtual/net'):
384
+    """Return a list of bridges on the system."""
385
+    b_regex = "%s/*/bridge" % vnic_dir
386
+    return [x.replace(vnic_dir, '').split('/')[1] for x in glob.glob(b_regex)]
387
+
388
+
389
+def get_bridge_nics(bridge, vnic_dir='/sys/devices/virtual/net'):
390
+    """Return a list of nics comprising a given bridge on the system."""
391
+    brif_regex = "%s/%s/brif/*" % (vnic_dir, bridge)
392
+    return [x.split('/')[-1] for x in glob.glob(brif_regex)]
393
+
394
+
395
+def is_bridge_member(nic):
396
+    """Check if a given nic is a member of a bridge."""
397
+    for bridge in get_bridges():
398
+        if nic in get_bridge_nics(bridge):
399
+            return True
400
+
401
+    return False
402
+
403
+
404
+def is_ip(address):
405
+    """
406
+    Returns True if address is a valid IP address.
407
+    """
408
+    try:
409
+        # Test to see if already an IPv4/IPv6 address
410
+        address = netaddr.IPAddress(address)
411
+        return True
412
+    except (netaddr.AddrFormatError, ValueError):
413
+        return False
414
+
415
+
416
+def ns_query(address):
417
+    try:
418
+        import dns.resolver
419
+    except ImportError:
420
+        apt_install('python-dnspython', fatal=True)
421
+        import dns.resolver
422
+
423
+    if isinstance(address, dns.name.Name):
424
+        rtype = 'PTR'
425
+    elif isinstance(address, six.string_types):
426
+        rtype = 'A'
427
+    else:
428
+        return None
429
+
430
+    answers = dns.resolver.query(address, rtype)
431
+    if answers:
432
+        return str(answers[0])
433
+    return None
434
+
435
+
436
+def get_host_ip(hostname, fallback=None):
437
+    """
438
+    Resolves the IP for a given hostname, or returns
439
+    the input if it is already an IP.
440
+    """
441
+    if is_ip(hostname):
442
+        return hostname
443
+
444
+    ip_addr = ns_query(hostname)
445
+    if not ip_addr:
446
+        try:
447
+            ip_addr = socket.gethostbyname(hostname)
448
+        except Exception:
+            log("Failed to resolve hostname '%s'" % (hostname),
+                level=WARNING)
+            return fallback
+    return ip_addr
+
+
+def get_hostname(address, fqdn=True):
+    """
+    Resolves hostname for given IP, or returns the input
+    if it is already a hostname.
+    """
+    if is_ip(address):
+        try:
+            import dns.reversename
+        except ImportError:
+            apt_install("python-dnspython", fatal=True)
+            import dns.reversename
+
+        rev = dns.reversename.from_address(address)
+        result = ns_query(rev)
+
+        if not result:
+            try:
+                result = socket.gethostbyaddr(address)[0]
+            except Exception:
+                return None
+    else:
+        result = address
+
+    if fqdn:
+        # strip trailing .
+        if result.endswith('.'):
+            return result[:-1]
+        else:
+            return result
+    else:
+        return result.split('.')[0]
+
+
+def port_has_listener(address, port):
+    """
+    Returns True if the address:port is open and being listened to,
+    else False.
+
+    @param address: an IP address or hostname
+    @param port: integer port
+
+    Note calls 'nc' via a subprocess shell
+    """
+    cmd = ['nc', '-z', address, str(port)]
+    result = subprocess.call(cmd)
+    return not(bool(result))
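The address helpers above are built on the third-party netaddr library. As a quick illustration of the semantics they implement, here is a minimal sketch using only Python's stdlib ipaddress module (an approximation for illustration, not the charm's code, which keeps netaddr for Python 2 compatibility):

```python
# Stdlib sketch mirroring the behaviour of is_ipv6() and
# is_address_in_network() from the diff above, using ipaddress
# instead of netaddr.
import ipaddress


def is_ipv6(address):
    """True if address parses as an IPv6 address; hostnames return False."""
    try:
        return ipaddress.ip_address(address).version == 6
    except ValueError:
        # probably a hostname - so not an address at all
        return False


def is_address_in_network(network, address):
    """True if address falls inside the CIDR-notation network."""
    return ipaddress.ip_address(address) in ipaddress.ip_network(network)


print(is_ipv6('2001:db8::1'))                                  # True
print(is_address_in_network('192.168.1.0/24', '192.168.1.1'))  # True
```

Unlike the charm helper, ipaddress raises ValueError rather than netaddr.AddrFormatError on malformed input, so callers porting between the two need to adjust their except clauses.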
Back to file index

hooks/charmhelpers/contrib/openstack/__init__.py

--- 
+++ hooks/charmhelpers/contrib/openstack/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
Back to file index

hooks/charmhelpers/contrib/openstack/alternatives.py

--- 
+++ hooks/charmhelpers/contrib/openstack/alternatives.py
@@ -0,0 +1,31 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+''' Helper for managing alternatives for file conflict resolution '''
+
+import subprocess
+import shutil
+import os
+
+
+def install_alternative(name, target, source, priority=50):
+    ''' Install alternative configuration '''
+    if (os.path.exists(target) and not os.path.islink(target)):
+        # Move existing file/directory away before installing
+        shutil.move(target, '{}.bak'.format(target))
+    cmd = [
+        'update-alternatives', '--force', '--install',
+        target, name, source, str(priority)
+    ]
+    subprocess.check_call(cmd)
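Note the argument order install_alternative() builds: update-alternatives expects `--install <link> <name> <path> <priority>`, so the charm's `target` maps to the symlink and `source` to the real file. A sketch that only builds the command (the function name and paths here are illustrative, not from the charm):

```python
# Sketch: build (but do not run) the update-alternatives command that
# install_alternative() above would execute via subprocess.check_call.
def build_alternative_cmd(name, target, source, priority=50):
    # target is the managed symlink, source the concrete config file
    return ['update-alternatives', '--force', '--install',
            target, name, source, str(priority)]


cmd = build_alternative_cmd('cinder.conf',
                            '/etc/cinder/cinder.conf',
                            '/etc/cinder/cinder-alt.conf')
print(' '.join(cmd))
```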
Back to file index

hooks/charmhelpers/contrib/openstack/amulet/__init__.py

--- 
+++ hooks/charmhelpers/contrib/openstack/amulet/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
Back to file index

hooks/charmhelpers/contrib/openstack/amulet/deployment.py

--- 
+++ hooks/charmhelpers/contrib/openstack/amulet/deployment.py
@@ -0,0 +1,345 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import logging
+import re
+import sys
+import six
+from collections import OrderedDict
+from charmhelpers.contrib.amulet.deployment import (
+    AmuletDeployment
+)
+
+DEBUG = logging.DEBUG
+ERROR = logging.ERROR
+
+
+class OpenStackAmuletDeployment(AmuletDeployment):
+    """OpenStack amulet deployment.
+
+       This class inherits from AmuletDeployment and has additional support
+       that is specifically for use by OpenStack charms.
+       """
+
+    def __init__(self, series=None, openstack=None, source=None,
+                 stable=True, log_level=DEBUG):
+        """Initialize the deployment environment."""
+        super(OpenStackAmuletDeployment, self).__init__(series)
+        self.log = self.get_logger(level=log_level)
+        self.log.info('OpenStackAmuletDeployment:  init')
+        self.openstack = openstack
+        self.source = source
+        self.stable = stable
+
+    def get_logger(self, name="deployment-logger", level=logging.DEBUG):
+        """Get a logger object that will log to stdout."""
+        log = logging
+        logger = log.getLogger(name)
+        fmt = log.Formatter("%(asctime)s %(funcName)s "
+                            "%(levelname)s: %(message)s")
+
+        handler = log.StreamHandler(stream=sys.stdout)
+        handler.setLevel(level)
+        handler.setFormatter(fmt)
+
+        logger.addHandler(handler)
+        logger.setLevel(level)
+
+        return logger
+
+    def _determine_branch_locations(self, other_services):
+        """Determine the branch locations for the other services.
+
+           Determine if the local branch being tested is derived from its
+           stable or next (dev) branch, and based on this, use the corresponding
+           stable or next branches for the other_services."""
+
+        self.log.info('OpenStackAmuletDeployment:  determine branch locations')
+
+        # Charms outside the ~openstack-charmers
+        base_charms = {
+            'mysql': ['precise', 'trusty'],
+            'mongodb': ['precise', 'trusty'],
+            'nrpe': ['precise', 'trusty', 'wily', 'xenial'],
+        }
+
+        for svc in other_services:
+            # If a location has been explicitly set, use it
+            if svc.get('location'):
+                continue
+            if svc['name'] in base_charms:
+                # NOTE: not all charms have support for all series we
+                #       want/need to test against, so fix to most recent
+                #       that each base charm supports
+                target_series = self.series
+                if self.series not in base_charms[svc['name']]:
+                    target_series = base_charms[svc['name']][-1]
+                svc['location'] = 'cs:{}/{}'.format(target_series,
+                                                    svc['name'])
+            elif self.stable:
+                svc['location'] = 'cs:{}/{}'.format(self.series,
+                                                    svc['name'])
+            else:
+                svc['location'] = 'cs:~openstack-charmers-next/{}/{}'.format(
+                    self.series,
+                    svc['name']
+                )
+
+        return other_services
+
+    def _add_services(self, this_service, other_services, use_source=None,
+                      no_origin=None):
+        """Add services to the deployment and optionally set
+        openstack-origin/source.
+
+        :param this_service dict: Service dictionary describing the service
+                                  whose amulet tests are being run
+        :param other_services dict: List of service dictionaries describing
+                                    the services needed to support the target
+                                    service
+        :param use_source list: List of services which use the 'source' config
+                                option rather than 'openstack-origin'
+        :param no_origin list: List of services which do not support setting
+                               the Cloud Archive.
+        Service Dict:
+            {
+                'name': str charm-name,
+                'units': int number of units,
+                'constraints': dict of juju constraints,
+                'location': str location of charm,
+            }
+        eg
+        this_service = {
+            'name': 'openvswitch-odl',
+            'constraints': {'mem': '8G'},
+        }
+        other_services = [
+            {
+                'name': 'nova-compute',
+                'units': 2,
+                'constraints': {'mem': '4G'},
+                'location': cs:~bob/xenial/nova-compute
+            },
+            {
+                'name': 'mysql',
+                'constraints': {'mem': '2G'},
+            },
+            {'neutron-api-odl'}]
+        use_source = ['mysql']
+        no_origin = ['neutron-api-odl']
+        """
+        self.log.info('OpenStackAmuletDeployment:  adding services')
+
+        other_services = self._determine_branch_locations(other_services)
+
+        super(OpenStackAmuletDeployment, self)._add_services(this_service,
+                                                             other_services)
+
+        services = other_services
+        services.append(this_service)
+
+        use_source = use_source or []
+        no_origin = no_origin or []
+
+        # Charms which should use the source config option
+        use_source = list(set(
+            use_source + ['mysql', 'mongodb', 'rabbitmq-server', 'ceph',
+                          'ceph-osd', 'ceph-radosgw', 'ceph-mon',
+                          'ceph-proxy', 'percona-cluster', 'lxd']))
+
+        # Charms which can not use openstack-origin, ie. many subordinates
+        no_origin = list(set(
+            no_origin + ['cinder-ceph', 'hacluster', 'neutron-openvswitch',
+                         'nrpe', 'openvswitch-odl', 'neutron-api-odl',
+                         'odl-controller', 'cinder-backup', 'nexentaedge-data',
+                         'nexentaedge-iscsi-gw', 'nexentaedge-swift-gw',
+                         'cinder-nexentaedge', 'nexentaedge-mgmt']))
+
+        if self.openstack:
+            for svc in services:
+                if svc['name'] not in use_source + no_origin:
+                    config = {'openstack-origin': self.openstack}
+                    self.d.configure(svc['name'], config)
+
+        if self.source:
+            for svc in services:
+                if svc['name'] in use_source and svc['name'] not in no_origin:
+                    config = {'source': self.source}
+                    self.d.configure(svc['name'], config)
+
+    def _configure_services(self, configs):
+        """Configure all of the services."""
+        self.log.info('OpenStackAmuletDeployment:  configure services')
+        for service, config in six.iteritems(configs):
+            self.d.configure(service, config)
+
+    def _auto_wait_for_status(self, message=None, exclude_services=None,
+                              include_only=None, timeout=1800):
+        """Wait for all units to have a specific extended status, except
+        for any defined as excluded.  Unless specified via message, any
+        status containing any case of 'ready' will be considered a match.
+
+        Examples of message usage:
+
+          Wait for all unit status to CONTAIN any case of 'ready' or 'ok':
+              message = re.compile('.*ready.*|.*ok.*', re.IGNORECASE)
+
+          Wait for all units to reach this status (exact match):
+              message = re.compile('^Unit is ready and clustered$')
+
+          Wait for all units to reach any one of these (exact match):
+              message = re.compile('Unit is ready|OK|Ready')
+
+          Wait for at least one unit to reach this status (exact match):
+              message = {'ready'}
+
+        See Amulet's sentry.wait_for_messages() for message usage detail.
+        https://github.com/juju/amulet/blob/master/amulet/sentry.py
+
+        :param message: Expected status match
+        :param exclude_services: List of juju service names to ignore,
+            not to be used in conjunction with include_only.
+        :param include_only: List of juju service names to exclusively check,
+            not to be used in conjunction with exclude_services.
+        :param timeout: Maximum time in seconds to wait for status match
+        :returns: None.  Raises if timeout is hit.
+        """
+        self.log.info('Waiting for extended status on units...')
+
+        all_services = self.d.services.keys()
+
+        if exclude_services and include_only:
+            raise ValueError('exclude_services can not be used '
+                             'with include_only')
+
+        if message:
+            if isinstance(message, re._pattern_type):
+                match = message.pattern
+            else:
+                match = message
+
+            self.log.debug('Custom extended status wait match: '
+                           '{}'.format(match))
+        else:
+            self.log.debug('Default extended status wait match:  contains '
+                           'READY (case-insensitive)')
+            message = re.compile('.*ready.*', re.IGNORECASE)
+
+        if exclude_services:
+            self.log.debug('Excluding services from extended status match: '
+                           '{}'.format(exclude_services))
+        else:
+            exclude_services = []
+
+        if include_only:
+            services = include_only
+        else:
+            services = list(set(all_services) - set(exclude_services))
+
+        self.log.debug('Waiting up to {}s for extended status on services: '
+                       '{}'.format(timeout, services))
+        service_messages = {service: message for service in services}
+        self.d.sentry.wait_for_messages(service_messages, timeout=timeout)
+        self.log.info('OK')
+
+    def _get_openstack_release(self):
+        """Get openstack release.
+
+           Return an integer representing the enum value of the openstack
+           release.
+           """
+        # Must be ordered by OpenStack release (not by Ubuntu release):
+        (self.precise_essex, self.precise_folsom, self.precise_grizzly,
+         self.precise_havana, self.precise_icehouse,
+         self.trusty_icehouse, self.trusty_juno, self.utopic_juno,
+         self.trusty_kilo, self.vivid_kilo, self.trusty_liberty,
+         self.wily_liberty, self.trusty_mitaka,
+         self.xenial_mitaka, self.xenial_newton,
+         self.yakkety_newton) = range(16)
+
+        releases = {
+            ('precise', None): self.precise_essex,
+            ('precise', 'cloud:precise-folsom'): self.precise_folsom,
+            ('precise', 'cloud:precise-grizzly'): self.precise_grizzly,
+            ('precise', 'cloud:precise-havana'): self.precise_havana,
+            ('precise', 'cloud:precise-icehouse'): self.precise_icehouse,
+            ('trusty', None): self.trusty_icehouse,
+            ('trusty', 'cloud:trusty-juno'): self.trusty_juno,
+            ('trusty', 'cloud:trusty-kilo'): self.trusty_kilo,
+            ('trusty', 'cloud:trusty-liberty'): self.trusty_liberty,
+            ('trusty', 'cloud:trusty-mitaka'): self.trusty_mitaka,
+            ('utopic', None): self.utopic_juno,
+            ('vivid', None): self.vivid_kilo,
+            ('wily', None): self.wily_liberty,
+            ('xenial', None): self.xenial_mitaka,
+            ('xenial', 'cloud:xenial-newton'): self.xenial_newton,
+            ('yakkety', None): self.yakkety_newton,
+        }
+        return releases[(self.series, self.openstack)]
+
+    def _get_openstack_release_string(self):
+        """Get openstack release string.
+
+           Return a string representing the openstack release.
+           """
+        releases = OrderedDict([
+            ('precise', 'essex'),
+            ('quantal', 'folsom'),
+            ('raring', 'grizzly'),
+            ('saucy', 'havana'),
+            ('trusty', 'icehouse'),
+            ('utopic', 'juno'),
+            ('vivid', 'kilo'),
+            ('wily', 'liberty'),
+            ('xenial', 'mitaka'),
+            ('yakkety', 'newton'),
+        ])
+        if self.openstack:
+            os_origin = self.openstack.split(':')[1]
+            return os_origin.split('%s-' % self.series)[1].split('/')[0]
+        else:
+            return releases[self.series]
+
+    def get_ceph_expected_pools(self, radosgw=False):
+        """Return a list of expected ceph pools in a ceph + cinder + glance
+        test scenario, based on OpenStack release and whether ceph radosgw
+        is flagged as present or not."""
+
+        if self._get_openstack_release() >= self.trusty_kilo:
+            # Kilo or later
+            pools = [
+                'rbd',
+                'cinder',
+                'glance'
+            ]
+        else:
+            # Juno or earlier
+            pools = [
+                'data',
+                'metadata',
+                'rbd',
+                'cinder',
+                'glance'
+            ]
+
+        if radosgw:
+            pools.extend([
+                '.rgw.root',
+                '.rgw.control',
+                '.rgw',
+                '.rgw.gc',
+                '.users.uid'
+            ])
+
+        return pools
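The release-string parsing in _get_openstack_release_string() turns a cloud-archive origin such as 'cloud:trusty-mitaka' (or 'cloud:trusty-mitaka/proposed') into the bare release name, falling back to a series-to-release map when no origin is set. A standalone sketch of that logic (release_string and the trimmed RELEASES map are illustrative, not the charm's names):

```python
# Sketch of the parsing done by _get_openstack_release_string() above.
# Only a subset of the series map is reproduced here for illustration.
RELEASES = {'trusty': 'icehouse', 'xenial': 'mitaka', 'yakkety': 'newton'}


def release_string(series, openstack=None):
    if openstack:
        # 'cloud:trusty-mitaka/proposed' -> 'trusty-mitaka/proposed'
        os_origin = openstack.split(':')[1]
        # strip the '<series>-' prefix and any '/pocket' suffix
        return os_origin.split('%s-' % series)[1].split('/')[0]
    return RELEASES[series]


print(release_string('trusty', 'cloud:trusty-mitaka'))  # mitaka
print(release_string('xenial'))                         # mitaka
```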
Back to file index

hooks/charmhelpers/contrib/openstack/amulet/utils.py

   1
--- 
   2
+++ hooks/charmhelpers/contrib/openstack/amulet/utils.py
   3
@@ -0,0 +1,1124 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import amulet
+import json
+import logging
+import os
+import re
+import six
+import time
+import urllib
+
+import cinderclient.v1.client as cinder_client
+import glanceclient.v1.client as glance_client
+import heatclient.v1.client as heat_client
+import keystoneclient.v2_0 as keystone_client
+from keystoneclient.auth.identity import v3 as keystone_id_v3
+from keystoneclient import session as keystone_session
+from keystoneclient.v3 import client as keystone_client_v3
+
+import novaclient.client as nova_client
+import pika
+import swiftclient
+
+from charmhelpers.contrib.amulet.utils import (
+    AmuletUtils
+)
+
+DEBUG = logging.DEBUG
+ERROR = logging.ERROR
+
+NOVA_CLIENT_VERSION = "2"
+
+
+class OpenStackAmuletUtils(AmuletUtils):
+    """OpenStack amulet utilities.
+
+       This class inherits from AmuletUtils and has additional support
+       that is specifically for use by OpenStack charm tests.
+       """
+
+    def __init__(self, log_level=ERROR):
+        """Initialize the deployment environment."""
+        super(OpenStackAmuletUtils, self).__init__(log_level)
+
+    def validate_endpoint_data(self, endpoints, admin_port, internal_port,
+                               public_port, expected):
+        """Validate endpoint data.
+
+           Validate actual endpoint data vs expected endpoint data. The ports
+           are used to find the matching endpoint.
+           """
+        self.log.debug('Validating endpoint data...')
+        self.log.debug('actual: {}'.format(repr(endpoints)))
+        found = False
+        for ep in endpoints:
+            self.log.debug('endpoint: {}'.format(repr(ep)))
+            if (admin_port in ep.adminurl and
+                    internal_port in ep.internalurl and
+                    public_port in ep.publicurl):
+                found = True
+                actual = {'id': ep.id,
+                          'region': ep.region,
+                          'adminurl': ep.adminurl,
+                          'internalurl': ep.internalurl,
+                          'publicurl': ep.publicurl,
+                          'service_id': ep.service_id}
+                ret = self._validate_dict_data(expected, actual)
+                if ret:
+                    return 'unexpected endpoint data - {}'.format(ret)
+
+        if not found:
+            return 'endpoint not found'
+
+    def validate_v3_endpoint_data(self, endpoints, admin_port, internal_port,
+                                  public_port, expected):
+        """Validate keystone v3 endpoint data.
+
+        Validate the v3 endpoint data which has changed from v2.  The
+        ports are used to find the matching endpoint.
+
+        The new v3 endpoint data looks like:
+
+        [<Endpoint enabled=True,
+                   id=0432655fc2f74d1e9fa17bdaa6f6e60b,
+                   interface=admin,
+                   links={u'self': u'<RESTful URL of this endpoint>'},
+                   region=RegionOne,
+                   region_id=RegionOne,
+                   service_id=17f842a0dc084b928e476fafe67e4095,
+                   url=http://10.5.6.5:9312>,
+         <Endpoint enabled=True,
+                   id=6536cb6cb92f4f41bf22b079935c7707,
+                   interface=admin,
+                   links={u'self': u'<RESTful url of this endpoint>'},
+                   region=RegionOne,
+                   region_id=RegionOne,
+                   service_id=72fc8736fb41435e8b3584205bb2cfa3,
+                   url=http://10.5.6.6:35357/v3>,
+                   ... ]
+        """
+        self.log.debug('Validating v3 endpoint data...')
+        self.log.debug('actual: {}'.format(repr(endpoints)))
+        found = []
+        for ep in endpoints:
+            self.log.debug('endpoint: {}'.format(repr(ep)))
+            if ((admin_port in ep.url and ep.interface == 'admin') or
+                    (internal_port in ep.url and ep.interface == 'internal') or
+                    (public_port in ep.url and ep.interface == 'public')):
+                found.append(ep.interface)
+                # note we ignore the links member.
+                actual = {'id': ep.id,
+                          'region': ep.region,
+                          'region_id': ep.region_id,
+                          'interface': self.not_null,
+                          'url': ep.url,
+                          'service_id': ep.service_id, }
+                ret = self._validate_dict_data(expected, actual)
+                if ret:
+                    return 'unexpected endpoint data - {}'.format(ret)
+
+        if len(found) != 3:
+            return 'Unexpected number of endpoints found'
+
+    def validate_svc_catalog_endpoint_data(self, expected, actual):
+        """Validate service catalog endpoint data.
+
+           Validate a list of actual service catalog endpoints vs a list of
+           expected service catalog endpoints.
+           """
+        self.log.debug('Validating service catalog endpoint data...')
+        self.log.debug('actual: {}'.format(repr(actual)))
+        for k, v in six.iteritems(expected):
+            if k in actual:
+                ret = self._validate_dict_data(expected[k][0], actual[k][0])
+                if ret:
+                    return self.endpoint_error(k, ret)
+            else:
+                return "endpoint {} does not exist".format(k)
+        return ret
+
+    def validate_v3_svc_catalog_endpoint_data(self, expected, actual):
+        """Validate the keystone v3 catalog endpoint data.
+
+        Validate a list of dictionaries that make up the keystone v3 service
+        catalogue.
+
+        It is in the form of:
+
+
+        {u'identity': [{u'id': u'48346b01c6804b298cdd7349aadb732e',
+                        u'interface': u'admin',
+                        u'region': u'RegionOne',
+                        u'region_id': u'RegionOne',
+                        u'url': u'http://10.5.5.224:35357/v3'},
+                       {u'id': u'8414f7352a4b47a69fddd9dbd2aef5cf',
+                        u'interface': u'public',
+                        u'region': u'RegionOne',
+                        u'region_id': u'RegionOne',
+                        u'url': u'http://10.5.5.224:5000/v3'},
+                       {u'id': u'd5ca31440cc24ee1bf625e2996fb6a5b',
+                        u'interface': u'internal',
+                        u'region': u'RegionOne',
+                        u'region_id': u'RegionOne',
+                        u'url': u'http://10.5.5.224:5000/v3'}],
+         u'key-manager': [{u'id': u'68ebc17df0b045fcb8a8a433ebea9e62',
+                           u'interface': u'public',
+                           u'region': u'RegionOne',
+                           u'region_id': u'RegionOne',
+                           u'url': u'http://10.5.5.223:9311'},
+                          {u'id': u'9cdfe2a893c34afd8f504eb218cd2f9d',
+                           u'interface': u'internal',
+                           u'region': u'RegionOne',
+                           u'region_id': u'RegionOne',
+                           u'url': u'http://10.5.5.223:9311'},
+                          {u'id': u'f629388955bc407f8b11d8b7ca168086',
+                           u'interface': u'admin',
+                           u'region': u'RegionOne',
+                           u'region_id': u'RegionOne',
+                           u'url': u'http://10.5.5.223:9312'}]}
+
+        Note that an added complication is that the order of admin, public,
+        internal endpoints against 'interface' in each region is not
+        guaranteed.
+
+        Thus, the function sorts the expected and actual lists using the
+        interface key as a sort key, prior to the comparison.
+        """
+        self.log.debug('Validating v3 service catalog endpoint data...')
+        self.log.debug('actual: {}'.format(repr(actual)))
+        for k, v in six.iteritems(expected):
+            if k in actual:
+                l_expected = sorted(v, key=lambda x: x['interface'])
+                l_actual = sorted(actual[k], key=lambda x: x['interface'])
+                if len(l_actual) != len(l_expected):
+                    return ("endpoint {} has differing number of interfaces "
+                            " - expected({}), actual({})"
+                            .format(k, len(l_expected), len(l_actual)))
+                for i_expected, i_actual in zip(l_expected, l_actual):
+                    self.log.debug("checking interface {}"
+                                   .format(i_expected['interface']))
+                    ret = self._validate_dict_data(i_expected, i_actual)
+                    if ret:
+                        return self.endpoint_error(k, ret)
+            else:
+                return "endpoint {} does not exist".format(k)
+        return ret
+
+    def validate_tenant_data(self, expected, actual):
+        """Validate tenant data.
+
+           Validate a list of actual tenant data vs list of expected tenant
+           data.
+           """
+        self.log.debug('Validating tenant data...')
+        self.log.debug('actual: {}'.format(repr(actual)))
+        for e in expected:
+            found = False
+            for act in actual:
+                a = {'enabled': act.enabled, 'description': act.description,
+                     'name': act.name, 'id': act.id}
+                if e['name'] == a['name']:
+                    found = True
+                    ret = self._validate_dict_data(e, a)
+                    if ret:
+                        return "unexpected tenant data - {}".format(ret)
+            if not found:
+                return "tenant {} does not exist".format(e['name'])
+        return ret
+
+    def validate_role_data(self, expected, actual):
+        """Validate role data.
+
+           Validate a list of actual role data vs a list of expected role
+           data.
+           """
+        self.log.debug('Validating role data...')
+        self.log.debug('actual: {}'.format(repr(actual)))
+        for e in expected:
+            found = False
+            for act in actual:
+                a = {'name': act.name, 'id': act.id}
+                if e['name'] == a['name']:
+                    found = True
+                    ret = self._validate_dict_data(e, a)
+                    if ret:
+                        return "unexpected role data - {}".format(ret)
+            if not found:
+                return "role {} does not exist".format(e['name'])
+        return ret
+
+    def validate_user_data(self, expected, actual, api_version=None):
+        """Validate user data.
+
+           Validate a list of actual user data vs a list of expected user
+           data.
+           """
+        self.log.debug('Validating user data...')
+        self.log.debug('actual: {}'.format(repr(actual)))
+        for e in expected:
+            found = False
+            for act in actual:
+                if e['name'] == act.name:
+                    a = {'enabled': act.enabled, 'name': act.name,
+                         'email': act.email, 'id': act.id}
+                    if api_version == 3:
+                        a['default_project_id'] = getattr(act,
+                                                          'default_project_id',
+                                                          'none')
+                    else:
+                        a['tenantId'] = act.tenantId
+                    found = True
+                    ret = self._validate_dict_data(e, a)
+                    if ret:
+                        return "unexpected user data - {}".format(ret)
+            if not found:
+                return "user {} does not exist".format(e['name'])
+        return ret
+
+    def validate_flavor_data(self, expected, actual):
+        """Validate flavor data.
+
+           Validate a list of actual flavors vs a list of expected flavors.
+           """
+        self.log.debug('Validating flavor data...')
+        self.log.debug('actual: {}'.format(repr(actual)))
+        act = [a.name for a in actual]
+        return self._validate_list_data(expected, act)
+
+    def tenant_exists(self, keystone, tenant):
+        """Return True if tenant exists."""
+        self.log.debug('Checking if tenant exists ({})...'.format(tenant))
+        return tenant in [t.name for t in keystone.tenants.list()]
+
+    def authenticate_cinder_admin(self, keystone_sentry, username,
+                                  password, tenant):
+        """Authenticates admin user with cinder."""
+        # NOTE(beisner): cinder python client doesn't accept tokens.
+        keystone_ip = keystone_sentry.info['public-address']
+        ept = "http://{}:5000/v2.0".format(keystone_ip.strip().decode('utf-8'))
+        return cinder_client.Client(username, password, tenant, ept)
+
+    def authenticate_keystone_admin(self, keystone_sentry, user, password,
+                                    tenant=None, api_version=None,
+                                    keystone_ip=None):
+        """Authenticates admin user with the keystone admin endpoint."""
+        self.log.debug('Authenticating keystone admin...')
+        if not keystone_ip:
+            keystone_ip = keystone_sentry.info['public-address']
+
+        base_ep = "http://{}:35357".format(keystone_ip.strip().decode('utf-8'))
+        if not api_version or api_version == 2:
+            ep = base_ep + "/v2.0"
+            return keystone_client.Client(username=user, password=password,
+                                          tenant_name=tenant, auth_url=ep)
+        else:
+            ep = base_ep + "/v3"
+            auth = keystone_id_v3.Password(
+                user_domain_name='admin_domain',
+                username=user,
+                password=password,
+                domain_name='admin_domain',
+                auth_url=ep,
+            )
+            sess = keystone_session.Session(auth=auth)
+            return keystone_client_v3.Client(session=sess)
+
+    def authenticate_keystone_user(self, keystone, user, password, tenant):
+        """Authenticates a regular user with the keystone public endpoint."""
+        self.log.debug('Authenticating keystone user ({})...'.format(user))
+        ep = keystone.service_catalog.url_for(service_type='identity',
+                                              endpoint_type='publicURL')
+        return keystone_client.Client(username=user, password=password,
+                                      tenant_name=tenant, auth_url=ep)
+
+    def authenticate_glance_admin(self, keystone):
+        """Authenticates admin user with glance."""
+        self.log.debug('Authenticating glance admin...')
+        ep = keystone.service_catalog.url_for(service_type='image',
+                                              endpoint_type='adminURL')
+        return glance_client.Client(ep, token=keystone.auth_token)
+
+    def authenticate_heat_admin(self, keystone):
+        """Authenticates the admin user with heat."""
+        self.log.debug('Authenticating heat admin...')
+        ep = keystone.service_catalog.url_for(service_type='orchestration',
+                                              endpoint_type='publicURL')
+        return heat_client.Client(endpoint=ep, token=keystone.auth_token)
+
+    def authenticate_nova_user(self, keystone, user, password, tenant):
+        """Authenticates a regular user with nova-api."""
+        self.log.debug('Authenticating nova user ({})...'.format(user))
+        ep = keystone.service_catalog.url_for(service_type='identity',
+                                              endpoint_type='publicURL')
+        return nova_client.Client(NOVA_CLIENT_VERSION,
+                                  username=user, api_key=password,
+                                  project_id=tenant, auth_url=ep)
+
+    def authenticate_swift_user(self, keystone, user, password, tenant):
+        """Authenticates a regular user with swift api."""
+        self.log.debug('Authenticating swift user ({})...'.format(user))
+        ep = keystone.service_catalog.url_for(service_type='identity',
+                                              endpoint_type='publicURL')
+        return swiftclient.Connection(authurl=ep,
+                                      user=user,
+                                      key=password,
+                                      tenant_name=tenant,
+                                      auth_version='2.0')
+
+    def create_cirros_image(self, glance, image_name):
+        """Download the latest cirros image and upload it to glance,
+        validate and return a resource pointer.
+
+        :param glance: pointer to authenticated glance connection
+        :param image_name: display name for new image
+        :returns: glance image pointer
+        """
+        self.log.debug('Creating glance cirros image '
+                       '({})...'.format(image_name))
+
+        # Download cirros image
+        http_proxy = os.getenv('AMULET_HTTP_PROXY')
+        self.log.debug('AMULET_HTTP_PROXY: {}'.format(http_proxy))
+        if http_proxy:
+            proxies = {'http': http_proxy}
+            opener = urllib.FancyURLopener(proxies)
+        else:
+            opener = urllib.FancyURLopener()
+
+        f = opener.open('http://download.cirros-cloud.net/version/released')
+        version = f.read().strip()
+        cirros_img = 'cirros-{}-x86_64-disk.img'.format(version)
+        local_path = os.path.join('tests', cirros_img)
+
+        if not os.path.exists(local_path):
+            cirros_url = 'http://{}/{}/{}'.format('download.cirros-cloud.net',
+                                                  version, cirros_img)
+            opener.retrieve(cirros_url, local_path)
+        f.close()
+
+        # Create glance image
+        with open(local_path) as f:
+            image = glance.images.create(name=image_name, is_public=True,
+                                         disk_format='qcow2',
+                                         container_format='bare', data=f)
+
+        # Wait for image to reach active status
+        img_id = image.id
+        ret = self.resource_reaches_status(glance.images, img_id,
+                                           expected_stat='active',
+                                           msg='Image status wait')
+        if not ret:
+            msg = 'Glance image failed to reach expected state.'
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Re-validate new image
+        self.log.debug('Validating image attributes...')
+        val_img_name = glance.images.get(img_id).name
+        val_img_stat = glance.images.get(img_id).status
+        val_img_pub = glance.images.get(img_id).is_public
+        val_img_cfmt = glance.images.get(img_id).container_format
+        val_img_dfmt = glance.images.get(img_id).disk_format
+        msg_attr = ('Image attributes - name:{} public:{} id:{} stat:{} '
+                    'container fmt:{} disk fmt:{}'.format(
+                        val_img_name, val_img_pub, img_id,
+                        val_img_stat, val_img_cfmt, val_img_dfmt))
+
+        if val_img_name == image_name and val_img_stat == 'active' \
+                and val_img_pub is True and val_img_cfmt == 'bare' \
+                and val_img_dfmt == 'qcow2':
+            self.log.debug(msg_attr)
+        else:
+            msg = ('Image validation failed, {}'.format(msg_attr))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        return image
+
+    def delete_image(self, glance, image):
+        """Delete the specified image."""
+
+        # /!\ DEPRECATION WARNING
+        self.log.warn('/!\\ DEPRECATION WARNING:  use '
+                      'delete_resource instead of delete_image.')
+        self.log.debug('Deleting glance image ({})...'.format(image))
+        return self.delete_resource(glance.images, image, msg='glance image')
+
+    def create_instance(self, nova, image_name, instance_name, flavor):
+        """Create the specified instance."""
+        self.log.debug('Creating instance '
+                       '({}|{}|{})'.format(instance_name, image_name, flavor))
+        image = nova.images.find(name=image_name)
+        flavor = nova.flavors.find(name=flavor)
+        instance = nova.servers.create(name=instance_name, image=image,
+                                       flavor=flavor)
+
+        count = 1
+        status = instance.status
+        while status != 'ACTIVE' and count < 60:
+            time.sleep(3)
+            instance = nova.servers.get(instance.id)
+            status = instance.status
+            self.log.debug('instance status: {}'.format(status))
+            count += 1
+
+        if status != 'ACTIVE':
+            self.log.error('instance creation timed out')
+            return None
+
+        return instance
+
+    def delete_instance(self, nova, instance):
+        """Delete the specified instance."""
+
+        # /!\ DEPRECATION WARNING
+        self.log.warn('/!\\ DEPRECATION WARNING:  use '
+                      'delete_resource instead of delete_instance.')
+        self.log.debug('Deleting instance ({})...'.format(instance))
+        return self.delete_resource(nova.servers, instance,
+                                    msg='nova instance')
+
+    def create_or_get_keypair(self, nova, keypair_name="testkey"):
+        """Create a new keypair, or return pointer if it already exists."""
+        try:
+            _keypair = nova.keypairs.get(keypair_name)
+            self.log.debug('Keypair ({}) already exists, '
+                           'using it.'.format(keypair_name))
+            return _keypair
+        except:
+            self.log.debug('Keypair ({}) does not exist, '
+                           'creating it.'.format(keypair_name))
+
+        _keypair = nova.keypairs.create(name=keypair_name)
+        return _keypair
+
+    def create_cinder_volume(self, cinder, vol_name="demo-vol", vol_size=1,
+                             img_id=None, src_vol_id=None, snap_id=None):
+        """Create cinder volume, optionally from a glance image, OR
+        optionally as a clone of an existing volume, OR optionally
+        from a snapshot.  Wait for the new volume status to reach
+        the expected status, validate and return a resource pointer.
+
+        :param vol_name: cinder volume display name
+        :param vol_size: size in gigabytes
+        :param img_id: optional glance image id
+        :param src_vol_id: optional source volume id to clone
+        :param snap_id: optional snapshot id to use
+        :returns: cinder volume pointer
+        """
+        # Handle parameter input and avoid impossible combinations
+        if img_id and not src_vol_id and not snap_id:
+            # Create volume from image
+            self.log.debug('Creating cinder volume from glance image...')
+            bootable = 'true'
+        elif src_vol_id and not img_id and not snap_id:
+            # Clone an existing volume
+            self.log.debug('Cloning cinder volume...')
+            bootable = cinder.volumes.get(src_vol_id).bootable
+        elif snap_id and not src_vol_id and not img_id:
+            # Create volume from snapshot
+            self.log.debug('Creating cinder volume from snapshot...')
+            snap = cinder.volume_snapshots.find(id=snap_id)
+            vol_size = snap.size
+            snap_vol_id = cinder.volume_snapshots.get(snap_id).volume_id
+            bootable = cinder.volumes.get(snap_vol_id).bootable
+        elif not img_id and not src_vol_id and not snap_id:
+            # Create volume
+            self.log.debug('Creating cinder volume...')
+            bootable = 'false'
+        else:
+            # Impossible combination of parameters
+            msg = ('Invalid method use - name:{} size:{} img_id:{} '
+                   'src_vol_id:{} snap_id:{}'.format(vol_name, vol_size,
+                                                     img_id, src_vol_id,
+                                                     snap_id))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Create new volume
+        try:
+            vol_new = cinder.volumes.create(display_name=vol_name,
+                                            imageRef=img_id,
+                                            size=vol_size,
+                                            source_volid=src_vol_id,
+                                            snapshot_id=snap_id)
+            vol_id = vol_new.id
+        except Exception as e:
+            msg = 'Failed to create volume: {}'.format(e)
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Wait for volume to reach available status
+        ret = self.resource_reaches_status(cinder.volumes, vol_id,
+                                           expected_stat="available",
+                                           msg="Volume status wait")
+        if not ret:
+            msg = 'Cinder volume failed to reach expected state.'
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        # Re-validate new volume
+        self.log.debug('Validating volume attributes...')
+        val_vol_name = cinder.volumes.get(vol_id).display_name
+        val_vol_boot = cinder.volumes.get(vol_id).bootable
+        val_vol_stat = cinder.volumes.get(vol_id).status
+        val_vol_size = cinder.volumes.get(vol_id).size
+        msg_attr = ('Volume attributes - name:{} id:{} stat:{} boot:'
+                    '{} size:{}'.format(val_vol_name, vol_id,
+                                        val_vol_stat, val_vol_boot,
+                                        val_vol_size))
+
+        if val_vol_boot == bootable and val_vol_stat == 'available' \
+                and val_vol_name == vol_name and val_vol_size == vol_size:
+            self.log.debug(msg_attr)
+        else:
+            msg = ('Volume validation failed, {}'.format(msg_attr))
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        return vol_new
+
+    def delete_resource(self, resource, resource_id,
+                        msg="resource", max_wait=120):
+        """Delete one openstack resource, such as one instance, keypair,
+        image, volume, stack, etc., and confirm deletion within max wait time.
+
+        :param resource: pointer to os resource type, ex:glance_client.images
+        :param resource_id: unique name or id for the openstack resource
+        :param msg: text to identify purpose in logging
+        :param max_wait: maximum wait time in seconds
+        :returns: True if successful, otherwise False
+        """
+        self.log.debug('Deleting OpenStack resource '
+                       '{} ({})'.format(resource_id, msg))
+        num_before = len(list(resource.list()))
+        resource.delete(resource_id)
+
+        tries = 0
+        num_after = len(list(resource.list()))
+        while num_after != (num_before - 1) and tries < (max_wait / 4):
+            self.log.debug('{} delete check: '
+                           '{} [{}:{}] {}'.format(msg, tries,
+                                                  num_before,
+                                                  num_after,
+                                                  resource_id))
+            time.sleep(4)
+            num_after = len(list(resource.list()))
+            tries += 1
+
+        self.log.debug('{}:  expected, actual count = {}, '
+                       '{}'.format(msg, num_before - 1, num_after))
+
+        if num_after == (num_before - 1):
+            return True
+        else:
+            self.log.error('{} delete timed out'.format(msg))
+            return False
+
+    def resource_reaches_status(self, resource, resource_id,
+                                expected_stat='available',
+                                msg='resource', max_wait=120):
+        """Wait for an openstack resource's status to reach an
+           expected status within a specified time.  Useful to confirm that
+           nova instances, cinder vols, snapshots, glance images, heat stacks
+           and other resources eventually reach the expected status.
+
+        :param resource: pointer to os resource type, ex: heat_client.stacks
+        :param resource_id: unique id for the openstack resource
+        :param expected_stat: status to expect resource to reach
+        :param msg: text to identify purpose in logging
+        :param max_wait: maximum wait time in seconds
+        :returns: True if successful, False if status is not reached
+        """
+
+        tries = 0
+        resource_stat = resource.get(resource_id).status
+        while resource_stat != expected_stat and tries < (max_wait / 4):
+            self.log.debug('{} status check: '
+                           '{} [{}:{}] {}'.format(msg, tries,
+                                                  resource_stat,
+                                                  expected_stat,
+                                                  resource_id))
+            time.sleep(4)
+            resource_stat = resource.get(resource_id).status
+            tries += 1
+
 655
+        self.log.debug('{}:  expected, actual status = {}, '
 656
+                       '{}'.format(msg, resource_stat, expected_stat))
 657
+
 658
+        if resource_stat == expected_stat:
 659
+            return True
 660
+        else:
 661
+            self.log.debug('{} never reached expected status: '
 662
+                           '{}'.format(resource_id, expected_stat))
 663
+            return False
 664
+
 665
+    def get_ceph_osd_id_cmd(self, index):
 666
+        """Produce a shell command that will return a ceph-osd id."""
 667
+        return ("`initctl list | grep 'ceph-osd ' | "
 668
+                "awk 'NR=={} {{ print $2 }}' | "
 669
+                "grep -o '[0-9]*'`".format(index + 1))
 670
+
 671
+    def get_ceph_pools(self, sentry_unit):
 672
+        """Return a dict of ceph pools from a single ceph unit, with
 673
+        pool names as keys and pool ids as values."""
 674
+        pools = {}
 675
+        cmd = 'sudo ceph osd lspools'
 676
+        output, code = sentry_unit.run(cmd)
 677
+        if code != 0:
 678
+            msg = ('{} `{}` returned {} '
 679
+                   '{}'.format(sentry_unit.info['unit_name'],
 680
+                               cmd, code, output))
 681
+            amulet.raise_status(amulet.FAIL, msg=msg)
 682
+
 683
+        # Example output: 0 data,1 metadata,2 rbd,3 cinder,4 glance,
 684
+        for pool in str(output).split(','):
 685
+            pool_id_name = pool.split(' ')
 686
+            if len(pool_id_name) == 2:
 687
+                pool_id = pool_id_name[0]
 688
+                pool_name = pool_id_name[1]
 689
+                pools[pool_name] = int(pool_id)
 690
+
 691
+        self.log.debug('Pools on {}: {}'.format(sentry_unit.info['unit_name'],
 692
+                                                pools))
 693
+        return pools
 694
+
 695
+    def get_ceph_df(self, sentry_unit):
 696
+        """Return dict of ceph df json output, including ceph pool state.
 697
+
 698
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
 699
+        :returns: Dict of ceph df output
 700
+        """
 701
+        cmd = 'sudo ceph df --format=json'
 702
+        output, code = sentry_unit.run(cmd)
 703
+        if code != 0:
 704
+            msg = ('{} `{}` returned {} '
 705
+                   '{}'.format(sentry_unit.info['unit_name'],
 706
+                               cmd, code, output))
 707
+            amulet.raise_status(amulet.FAIL, msg=msg)
 708
+        return json.loads(output)
 709
+
 710
+    def get_ceph_pool_sample(self, sentry_unit, pool_id=0):
 711
+        """Take a sample of attributes of a ceph pool, returning ceph
 712
+        pool name, object count and disk space used for the specified
 713
+        pool ID number.
 714
+
 715
+        :param sentry_unit: Pointer to amulet sentry instance (juju unit)
 716
+        :param pool_id: Ceph pool ID
 717
+        :returns: List of pool name, object count, kb disk space used
 718
+        """
 719
+        df = self.get_ceph_df(sentry_unit)
 720
+        pool_name = df['pools'][pool_id]['name']
 721
+        obj_count = df['pools'][pool_id]['stats']['objects']
 722
+        kb_used = df['pools'][pool_id]['stats']['kb_used']
 723
+        self.log.debug('Ceph {} pool (ID {}): {} objects, '
 724
+                       '{} kb used'.format(pool_name, pool_id,
 725
+                                           obj_count, kb_used))
 726
+        return pool_name, obj_count, kb_used
 727
+
 728
+    def validate_ceph_pool_samples(self, samples, sample_type="resource pool"):
 729
+        """Validate ceph pool samples taken over time, such as pool
 730
+        object counts or pool kb used, before adding, after adding, and
 731
+        after deleting items which affect those pool attributes.  The
 732
+        2nd element is expected to be greater than the 1st; 3rd is expected
 733
+        to be less than the 2nd.
 734
+
 735
+        :param samples: List containing 3 data samples
 736
+        :param sample_type: String for logging and usage context
 737
+        :returns: None if successful, Failure message otherwise
 738
+        """
 739
+        original, created, deleted = range(3)
 740
+        if samples[created] <= samples[original] or \
 741
+                samples[deleted] >= samples[created]:
 742
+            return ('Ceph {} samples ({}) '
 743
+                    'unexpected.'.format(sample_type, samples))
 744
+        else:
 745
+            self.log.debug('Ceph {} samples (OK): '
 746
+                           '{}'.format(sample_type, samples))
 747
+            return None
 748
+
 749
+    # rabbitmq/amqp specific helpers:
 750
+
 751
+    def rmq_wait_for_cluster(self, deployment, init_sleep=15, timeout=1200):
 752
+        """Wait for rmq units extended status to show cluster readiness,
 753
+        after an optional initial sleep period.  Initial sleep is likely
 754
+        necessary to be effective following a config change, as status
 755
+        message may not instantly update to non-ready."""
 756
+
 757
+        if init_sleep:
 758
+            time.sleep(init_sleep)
 759
+
 760
+        message = re.compile('^Unit is ready and clustered$')
 761
+        deployment._auto_wait_for_status(message=message,
 762
+                                         timeout=timeout,
 763
+                                         include_only=['rabbitmq-server'])
 764
+
 765
+    def add_rmq_test_user(self, sentry_units,
 766
+                          username="testuser1", password="changeme"):
 767
+        """Add a test user via the first rmq juju unit, check connection as
 768
+        the new user against all sentry units.
 769
+
 770
+        :param sentry_units: list of sentry unit pointers
 771
+        :param username: amqp user name, default to testuser1
 772
+        :param password: amqp user password
 773
+        :returns: None if successful.  Raise on error.
 774
+        """
 775
+        self.log.debug('Adding rmq user ({})...'.format(username))
 776
+
 777
+        # Check that user does not already exist
 778
+        cmd_user_list = 'rabbitmqctl list_users'
 779
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
 780
+        if username in output:
 781
+            self.log.warning('User ({}) already exists, returning '
 782
+                             'gracefully.'.format(username))
 783
+            return
 784
+
 785
+        perms = '".*" ".*" ".*"'
 786
+        cmds = ['rabbitmqctl add_user {} {}'.format(username, password),
 787
+                'rabbitmqctl set_permissions {} {}'.format(username, perms)]
 788
+
 789
+        # Add user via first unit
 790
+        for cmd in cmds:
 791
+            output, _ = self.run_cmd_unit(sentry_units[0], cmd)
 792
+
 793
+        # Check connection against the other sentry_units
 794
+        self.log.debug('Checking user connect against units...')
 795
+        for sentry_unit in sentry_units:
 796
+            connection = self.connect_amqp_by_unit(sentry_unit, ssl=False,
 797
+                                                   username=username,
 798
+                                                   password=password)
 799
+            connection.close()
 800
+
 801
+    def delete_rmq_test_user(self, sentry_units, username="testuser1"):
 802
+        """Delete a rabbitmq user via the first rmq juju unit.
 803
+
 804
+        :param sentry_units: list of sentry unit pointers
 805
+        :param username: amqp user name, default to testuser1
 807
+        :returns: None if successful or no such user.
 808
+        """
 809
+        self.log.debug('Deleting rmq user ({})...'.format(username))
 810
+
 811
+        # Check that the user exists
 812
+        cmd_user_list = 'rabbitmqctl list_users'
 813
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_list)
 814
+
 815
+        if username not in output:
 816
+            self.log.warning('User ({}) does not exist, returning '
 817
+                             'gracefully.'.format(username))
 818
+            return
 819
+
 820
+        # Delete the user
 821
+        cmd_user_del = 'rabbitmqctl delete_user {}'.format(username)
 822
+        output, _ = self.run_cmd_unit(sentry_units[0], cmd_user_del)
 823
+
 824
+    def get_rmq_cluster_status(self, sentry_unit):
 825
+        """Execute rabbitmq cluster status command on a unit and return
 826
+        the full output.
 827
+
 828
+        :param sentry_unit: sentry unit pointer
 829
+        :returns: String containing console output of cluster status command
 830
+        """
 831
+        cmd = 'rabbitmqctl cluster_status'
 832
+        output, _ = self.run_cmd_unit(sentry_unit, cmd)
 833
+        self.log.debug('{} cluster_status:\n{}'.format(
 834
+            sentry_unit.info['unit_name'], output))
 835
+        return str(output)
 836
+
 837
+    def get_rmq_cluster_running_nodes(self, sentry_unit):
 838
+        """Parse rabbitmqctl cluster_status output string, return list of
 839
+        running rabbitmq cluster nodes.
 840
+
 841
+        :param sentry_unit: sentry unit pointer
 842
+        :returns: List containing node names of running nodes
 843
+        """
 844
+        # NOTE(beisner): rabbitmqctl cluster_status output is not
 845
+        # json-parsable, do string chop foo, then json.loads that.
 846
+        str_stat = self.get_rmq_cluster_status(sentry_unit)
 847
+        if 'running_nodes' in str_stat:
 848
+            pos_start = str_stat.find("{running_nodes,") + 15
 849
+            pos_end = str_stat.find("]},", pos_start) + 1
 850
+            str_run_nodes = str_stat[pos_start:pos_end].replace("'", '"')
 851
+            run_nodes = json.loads(str_run_nodes)
 852
+            return run_nodes
 853
+        else:
 854
+            return []
 855
+
 856
+    def validate_rmq_cluster_running_nodes(self, sentry_units):
 857
+        """Check that all rmq unit hostnames are represented in the
 858
+        cluster_status output of all units.
 859
+
 860
+        :param sentry_units: list of sentry unit pointers (all rmq units)
 862
+        :returns: None if successful, otherwise return error message
 863
+        """
 864
+        host_names = self.get_unit_hostnames(sentry_units)
 865
+        errors = []
 866
+
 867
+        # Query every unit for cluster_status running nodes
 868
+        for query_unit in sentry_units:
 869
+            query_unit_name = query_unit.info['unit_name']
 870
+            running_nodes = self.get_rmq_cluster_running_nodes(query_unit)
 871
+
 872
+            # Confirm that every unit is represented in the queried unit's
 873
+            # cluster_status running nodes output.
 874
+            for validate_unit in sentry_units:
 875
+                val_host_name = host_names[validate_unit.info['unit_name']]
 876
+                val_node_name = 'rabbit@{}'.format(val_host_name)
 877
+
 878
+                if val_node_name not in running_nodes:
 879
+                    errors.append('Cluster member check failed on {}: {} not '
 880
+                                  'in {}\n'.format(query_unit_name,
 881
+                                                   val_node_name,
 882
+                                                   running_nodes))
 883
+        if errors:
 884
+            return ''.join(errors)
 885
+
 886
+    def rmq_ssl_is_enabled_on_unit(self, sentry_unit, port=None):
 887
+        """Check a single juju rmq unit for ssl and port in the config file."""
 888
+        host = sentry_unit.info['public-address']
 889
+        unit_name = sentry_unit.info['unit_name']
 890
+
 891
+        conf_file = '/etc/rabbitmq/rabbitmq.config'
 892
+        conf_contents = str(self.file_contents_safe(sentry_unit,
 893
+                                                    conf_file, max_wait=16))
 894
+        # Checks
 895
+        conf_ssl = 'ssl' in conf_contents
 896
+        conf_port = str(port) in conf_contents
 897
+
 898
+        # Port explicitly checked in config
 899
+        if port and conf_port and conf_ssl:
 900
+            self.log.debug('SSL is enabled  @{}:{} '
 901
+                           '({})'.format(host, port, unit_name))
 902
+            return True
 903
+        elif port and not conf_port and conf_ssl:
 904
+            self.log.debug('SSL is enabled @{} but not on port {} '
 905
+                           '({})'.format(host, port, unit_name))
 906
+            return False
 907
+        # Port not checked (useful when checking that ssl is disabled)
 908
+        elif not port and conf_ssl:
 909
+            self.log.debug('SSL is enabled  @{}:{} '
 910
+                           '({})'.format(host, port, unit_name))
 911
+            return True
 912
+        elif not conf_ssl:
 913
+            self.log.debug('SSL not enabled @{}:{} '
 914
+                           '({})'.format(host, port, unit_name))
 915
+            return False
 916
+        else:
 917
+            msg = ('Unknown condition when checking SSL status @{}:{} '
 918
+                   '({})'.format(host, port, unit_name))
 919
+            amulet.raise_status(amulet.FAIL, msg)
 920
+
 921
+    def validate_rmq_ssl_enabled_units(self, sentry_units, port=None):
 922
+        """Check that ssl is enabled on rmq juju sentry units.
 923
+
 924
+        :param sentry_units: list of all rmq sentry units
 925
+        :param port: optional ssl port override to validate
 926
+        :returns: None if successful, otherwise return error message
 927
+        """
 928
+        for sentry_unit in sentry_units:
 929
+            if not self.rmq_ssl_is_enabled_on_unit(sentry_unit, port=port):
 930
+                return ('Unexpected condition:  ssl is disabled on unit '
 931
+                        '({})'.format(sentry_unit.info['unit_name']))
 932
+        return None
 933
+
 934
+    def validate_rmq_ssl_disabled_units(self, sentry_units):
 935
+        """Check that ssl is disabled on listed rmq juju sentry units.
 936
+
 937
+        :param sentry_units: list of all rmq sentry units
 938
+        :returns: None if successful, otherwise return error message
 939
+        """
 940
+        for sentry_unit in sentry_units:
 941
+            if self.rmq_ssl_is_enabled_on_unit(sentry_unit):
 942
+                return ('Unexpected condition:  ssl is enabled on unit '
 943
+                        '({})'.format(sentry_unit.info['unit_name']))
 944
+        return None
 945
+
 946
+    def configure_rmq_ssl_on(self, sentry_units, deployment,
 947
+                             port=None, max_wait=60):
 948
+        """Turn ssl charm config option on, with optional non-default
 949
+        ssl port specification.  Confirm that it is enabled on every
 950
+        unit.
 951
+
 952
+        :param sentry_units: list of sentry units
 953
+        :param deployment: amulet deployment object pointer
 954
+        :param port: amqp port, use defaults if None
 955
+        :param max_wait: maximum time to wait in seconds to confirm
 956
+        :returns: None if successful.  Raise on error.
 957
+        """
 958
+        self.log.debug('Setting ssl charm config option:  on')
 959
+
 960
+        # Enable RMQ SSL
 961
+        config = {'ssl': 'on'}
 962
+        if port:
 963
+            config['ssl_port'] = port
 964
+
 965
+        deployment.d.configure('rabbitmq-server', config)
 966
+
 967
+        # Wait for unit status
 968
+        self.rmq_wait_for_cluster(deployment)
 969
+
 970
+        # Confirm
 971
+        tries = 0
 972
+        ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
 973
+        while ret and tries < (max_wait / 4):
 974
+            time.sleep(4)
 975
+            self.log.debug('Attempt {}: {}'.format(tries, ret))
 976
+            ret = self.validate_rmq_ssl_enabled_units(sentry_units, port=port)
 977
+            tries += 1
 978
+
 979
+        if ret:
 980
+            amulet.raise_status(amulet.FAIL, ret)
 981
+
 982
+    def configure_rmq_ssl_off(self, sentry_units, deployment, max_wait=60):
 983
+        """Turn ssl charm config option off, confirm that it is disabled
 984
+        on every unit.
 985
+
 986
+        :param sentry_units: list of sentry units
 987
+        :param deployment: amulet deployment object pointer
 988
+        :param max_wait: maximum time to wait in seconds to confirm
 989
+        :returns: None if successful.  Raise on error.
 990
+        """
 991
+        self.log.debug('Setting ssl charm config option:  off')
 992
+
 993
+        # Disable RMQ SSL
 994
+        config = {'ssl': 'off'}
 995
+        deployment.d.configure('rabbitmq-server', config)
 996
+
 997
+        # Wait for unit status
 998
+        self.rmq_wait_for_cluster(deployment)
 999
+
1000
+        # Confirm
1001
+        tries = 0
1002
+        ret = self.validate_rmq_ssl_disabled_units(sentry_units)
1003
+        while ret and tries < (max_wait / 4):
1004
+            time.sleep(4)
1005
+            self.log.debug('Attempt {}: {}'.format(tries, ret))
1006
+            ret = self.validate_rmq_ssl_disabled_units(sentry_units)
1007
+            tries += 1
1008
+
1009
+        if ret:
1010
+            amulet.raise_status(amulet.FAIL, ret)
1011
+
1012
+    def connect_amqp_by_unit(self, sentry_unit, ssl=False,
1013
+                             port=None, fatal=True,
1014
+                             username="testuser1", password="changeme"):
1015
+        """Establish and return a pika amqp connection to the rabbitmq service
1016
+        running on a rmq juju unit.
1017
+
1018
+        :param sentry_unit: sentry unit pointer
1019
+        :param ssl: boolean, default to False
1020
+        :param port: amqp port, use defaults if None
1021
+        :param fatal: boolean, default to True (raises on connect error)
1022
+        :param username: amqp user name, default to testuser1
1023
+        :param password: amqp user password
1024
+        :returns: pika amqp connection pointer or None if failed and non-fatal
1025
+        """
1026
+        host = sentry_unit.info['public-address']
1027
+        unit_name = sentry_unit.info['unit_name']
1028
+
1029
+        # Default port logic if port is not specified
1030
+        if ssl and not port:
1031
+            port = 5671
1032
+        elif not ssl and not port:
1033
+            port = 5672
1034
+
1035
+        self.log.debug('Connecting to amqp on {}:{} ({}) as '
1036
+                       '{}...'.format(host, port, unit_name, username))
1037
+
1038
+        try:
1039
+            credentials = pika.PlainCredentials(username, password)
1040
+            parameters = pika.ConnectionParameters(host=host, port=port,
1041
+                                                   credentials=credentials,
1042
+                                                   ssl=ssl,
1043
+                                                   connection_attempts=3,
1044
+                                                   retry_delay=5,
1045
+                                                   socket_timeout=1)
1046
+            connection = pika.BlockingConnection(parameters)
1047
+            assert connection.is_open is True
1048
+            assert connection.is_closing is False
1049
+            self.log.debug('Connect OK')
1050
+            return connection
1051
+        except Exception as e:
1052
+            msg = ('amqp connection failed to {}:{} as '
1053
+                   '{} ({})'.format(host, port, username, str(e)))
1054
+            if fatal:
1055
+                amulet.raise_status(amulet.FAIL, msg)
1056
+            else:
1057
+                self.log.warn(msg)
1058
+                return None
1059
+
1060
+    def publish_amqp_message_by_unit(self, sentry_unit, message,
1061
+                                     queue="test", ssl=False,
1062
+                                     username="testuser1",
1063
+                                     password="changeme",
1064
+                                     port=None):
1065
+        """Publish an amqp message to a rmq juju unit.
1066
+
1067
+        :param sentry_unit: sentry unit pointer
1068
+        :param message: amqp message string
1069
+        :param queue: message queue, default to test
1070
+        :param username: amqp user name, default to testuser1
1071
+        :param password: amqp user password
1072
+        :param ssl: boolean, default to False
1073
+        :param port: amqp port, use defaults if None
1074
+        :returns: None.  Raises exception if publish failed.
1075
+        """
1076
+        self.log.debug('Publishing message to {} queue:\n{}'.format(queue,
1077
+                                                                    message))
1078
+        connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
1079
+                                               port=port,
1080
+                                               username=username,
1081
+                                               password=password)
1082
+
1083
+        # NOTE(beisner): extra debug here re: pika hang potential:
1084
+        #   https://github.com/pika/pika/issues/297
1085
+        #   https://groups.google.com/forum/#!topic/rabbitmq-users/Ja0iyfF0Szw
1086
+        self.log.debug('Defining channel...')
1087
+        channel = connection.channel()
1088
+        self.log.debug('Declaring queue...')
1089
+        channel.queue_declare(queue=queue, auto_delete=False, durable=True)
1090
+        self.log.debug('Publishing message...')
1091
+        channel.basic_publish(exchange='', routing_key=queue, body=message)
1092
+        self.log.debug('Closing channel...')
1093
+        channel.close()
1094
+        self.log.debug('Closing connection...')
1095
+        connection.close()
1096
+
1097
+    def get_amqp_message_by_unit(self, sentry_unit, queue="test",
1098
+                                 username="testuser1",
1099
+                                 password="changeme",
1100
+                                 ssl=False, port=None):
1101
+        """Get an amqp message from a rmq juju unit.
1102
+
1103
+        :param sentry_unit: sentry unit pointer
1104
+        :param queue: message queue, default to test
1105
+        :param username: amqp user name, default to testuser1
1106
+        :param password: amqp user password
1107
+        :param ssl: boolean, default to False
1108
+        :param port: amqp port, use defaults if None
1109
+        :returns: amqp message body as string.  Raise if get fails.
1110
+        """
1111
+        connection = self.connect_amqp_by_unit(sentry_unit, ssl=ssl,
1112
+                                               port=port,
1113
+                                               username=username,
1114
+                                               password=password)
1115
+        channel = connection.channel()
1116
+        method_frame, _, body = channel.basic_get(queue)
1117
+
1118
+        if method_frame:
1119
+            self.log.debug('Retrieved message from {} queue:\n{}'.format(queue,
1120
+                                                                         body))
1121
+            channel.basic_ack(method_frame.delivery_tag)
1122
+            channel.close()
1123
+            connection.close()
1124
+            return body
1125
+        else:
1126
+            msg = 'No message retrieved.'
1127
+            amulet.raise_status(amulet.FAIL, msg)
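The string-chop used by `get_rmq_cluster_running_nodes` above is easy to sanity-check in isolation: `rabbitmqctl cluster_status` emits an Erlang term list rather than JSON, so the helper slices out the `{running_nodes,[...]}` tuple and JSON-decodes the list. A minimal standalone sketch, with an illustrative (not captured) status string:

```python
import json

def parse_running_nodes(str_stat):
    # Same chop as get_rmq_cluster_running_nodes: slice out the
    # {running_nodes,[...]} Erlang tuple and json.loads the node list
    # after swapping single quotes for double quotes.
    if 'running_nodes' not in str_stat:
        return []
    pos_start = str_stat.find("{running_nodes,") + 15
    pos_end = str_stat.find("]},", pos_start) + 1
    return json.loads(str_stat[pos_start:pos_end].replace("'", '"'))

sample = ("[{nodes,[{disc,['rabbit@host0','rabbit@host1']}]},\n"
          " {running_nodes,['rabbit@host1','rabbit@host0']},\n"
          " {partitions,[]}]")
print(parse_running_nodes(sample))  # ['rabbit@host1', 'rabbit@host0']
```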

hooks/charmhelpers/contrib/openstack/context.py

   1
--- 
   2
+++ hooks/charmhelpers/contrib/openstack/context.py
   3
@@ -0,0 +1,1510 @@
   4
+# Copyright 2014-2015 Canonical Limited.
   5
+#
   6
+# Licensed under the Apache License, Version 2.0 (the "License");
   7
+# you may not use this file except in compliance with the License.
   8
+# You may obtain a copy of the License at
   9
+#
  10
+#  http://www.apache.org/licenses/LICENSE-2.0
  11
+#
  12
+# Unless required by applicable law or agreed to in writing, software
  13
+# distributed under the License is distributed on an "AS IS" BASIS,
  14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  15
+# See the License for the specific language governing permissions and
  16
+# limitations under the License.
  17
+
  18
+import glob
  19
+import json
  20
+import os
  21
+import re
  22
+import time
  23
+from base64 import b64decode
  24
+from subprocess import check_call, CalledProcessError
  25
+
  26
+import six
  27
+
  28
+from charmhelpers.fetch import (
  29
+    apt_install,
  30
+    filter_installed_packages,
  31
+)
  32
+from charmhelpers.core.hookenv import (
  33
+    config,
  34
+    is_relation_made,
  35
+    local_unit,
  36
+    log,
  37
+    relation_get,
  38
+    relation_ids,
  39
+    related_units,
  40
+    relation_set,
  41
+    unit_get,
  42
+    unit_private_ip,
  43
+    charm_name,
  44
+    DEBUG,
  45
+    INFO,
  46
+    WARNING,
  47
+    ERROR,
  48
+    status_set,
  49
+)
  50
+
  51
+from charmhelpers.core.sysctl import create as sysctl_create
  52
+from charmhelpers.core.strutils import bool_from_string
  53
+from charmhelpers.contrib.openstack.exceptions import OSContextError
  54
+
  55
+from charmhelpers.core.host import (
  56
+    get_bond_master,
  57
+    is_phy_iface,
  58
+    list_nics,
  59
+    get_nic_hwaddr,
  60
+    mkdir,
  61
+    write_file,
  62
+    pwgen,
  63
+    lsb_release,
  64
+)
  65
+from charmhelpers.contrib.hahelpers.cluster import (
  66
+    determine_apache_port,
  67
+    determine_api_port,
  68
+    https,
  69
+    is_clustered,
  70
+)
  71
+from charmhelpers.contrib.hahelpers.apache import (
  72
+    get_cert,
  73
+    get_ca_cert,
  74
+    install_ca_cert,
  75
+)
  76
+from charmhelpers.contrib.openstack.neutron import (
  77
+    neutron_plugin_attribute,
  78
+    parse_data_port_mappings,
  79
+)
  80
+from charmhelpers.contrib.openstack.ip import (
  81
+    resolve_address,
  82
+    INTERNAL,
  83
+)
  84
+from charmhelpers.contrib.network.ip import (
  85
+    get_address_in_network,
  86
+    get_ipv4_addr,
  87
+    get_ipv6_addr,
  88
+    get_netmask_for_address,
  89
+    format_ipv6_addr,
  90
+    is_address_in_network,
  91
+    is_bridge_member,
  92
+)
  93
+from charmhelpers.contrib.openstack.utils import (
  94
+    config_flags_parser,
  95
+    get_host_ip,
  96
+)
  97
+from charmhelpers.core.unitdata import kv
  98
+
  99
+try:
 100
+    import psutil
 101
+except ImportError:
 102
+    apt_install('python-psutil', fatal=True)
 103
+    import psutil
 104
+
 105
+CA_CERT_PATH = '/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt'
 106
+ADDRESS_TYPES = ['admin', 'internal', 'public']
 107
+
 108
+
 109
+def ensure_packages(packages):
 110
+    """Install but do not upgrade required plugin packages."""
 111
+    required = filter_installed_packages(packages)
 112
+    if required:
 113
+        apt_install(required, fatal=True)
 114
+
 115
+
 116
+def context_complete(ctxt):
 117
+    _missing = []
 118
+    for k, v in six.iteritems(ctxt):
 119
+        if v is None or v == '':
 120
+            _missing.append(k)
 121
+
 122
+    if _missing:
 123
+        log('Missing required data: %s' % ' '.join(_missing), level=INFO)
 124
+        return False
 125
+
 126
+    return True
 127
+
 128
+
 129
+class OSContextGenerator(object):
 130
+    """Base class for all context generators."""
 131
+    interfaces = []
 132
+    related = False
 133
+    complete = False
 134
+    missing_data = []
 135
+
 136
+    def __call__(self):
 137
+        raise NotImplementedError
 138
+
 139
+    def context_complete(self, ctxt):
 140
+        """Check for missing data for the required context data.
 141
+        Set self.missing_data if it exists and return False.
 142
+        Set self.complete if no missing data and return True.
 143
+        """
 144
+        # Fresh start
 145
+        self.complete = False
 146
+        self.missing_data = []
 147
+        for k, v in six.iteritems(ctxt):
 148
+            if v is None or v == '':
 149
+                if k not in self.missing_data:
 150
+                    self.missing_data.append(k)
 151
+
 152
+        if self.missing_data:
 153
+            self.complete = False
 154
+            log('Missing required data: %s' % ' '.join(self.missing_data), level=INFO)
 155
+        else:
 156
+            self.complete = True
 157
+        return self.complete
 158
+
 159
+    def get_related(self):
 160
+        """Check if any of the context interfaces have relation ids.
 161
+        Set self.related and return True if one of the interfaces
 162
+        has relation ids.
 163
+        """
 164
+        # Fresh start
 165
+        self.related = False
 166
+        try:
 167
+            for interface in self.interfaces:
 168
+                if relation_ids(interface):
 169
+                    self.related = True
 170
+            return self.related
 171
+        except AttributeError as e:
 172
+            log("{} {}"
 173
+                "".format(self, e), 'INFO')
 174
+            return self.related
 175
+
 176
+
 177
+class SharedDBContext(OSContextGenerator):
 178
+    interfaces = ['shared-db']
 179
+
 180
+    def __init__(self,
 181
+                 database=None, user=None, relation_prefix=None, ssl_dir=None):
 182
+        """Allows inspecting relation for settings prefixed with
 183
+        relation_prefix. This is useful for parsing access for multiple
 184
+        databases returned via the shared-db interface (eg, nova_password,
 185
+        quantum_password)
 186
+        """
 187
+        self.relation_prefix = relation_prefix
+        self.database = database
+        self.user = user
+        self.ssl_dir = ssl_dir
+        self.rel_name = self.interfaces[0]
+
+    def __call__(self):
+        self.database = self.database or config('database')
+        self.user = self.user or config('database-user')
+        if None in [self.database, self.user]:
+            log("Could not generate shared_db context. Missing required charm "
+                "config options. (database name and user)", level=ERROR)
+            raise OSContextError
+
+        ctxt = {}
+
+        # NOTE(jamespage) if mysql charm provides a network upon which
+        # access to the database should be made, reconfigure relation
+        # with the service unit's local address and defer execution
+        access_network = relation_get('access-network')
+        if access_network is not None:
+            if self.relation_prefix is not None:
+                hostname_key = "{}_hostname".format(self.relation_prefix)
+            else:
+                hostname_key = "hostname"
+            access_hostname = get_address_in_network(access_network,
+                                                     unit_get('private-address'))
+            set_hostname = relation_get(attribute=hostname_key,
+                                        unit=local_unit())
+            if set_hostname != access_hostname:
+                relation_set(relation_settings={hostname_key: access_hostname})
+                return None  # Defer any further hook execution for now....
+
+        password_setting = 'password'
+        if self.relation_prefix:
+            password_setting = self.relation_prefix + '_password'
+
+        for rid in relation_ids(self.interfaces[0]):
+            self.related = True
+            for unit in related_units(rid):
+                rdata = relation_get(rid=rid, unit=unit)
+                host = rdata.get('db_host')
+                host = format_ipv6_addr(host) or host
+                ctxt = {
+                    'database_host': host,
+                    'database': self.database,
+                    'database_user': self.user,
+                    'database_password': rdata.get(password_setting),
+                    'database_type': 'mysql'
+                }
+                if self.context_complete(ctxt):
+                    db_ssl(rdata, ctxt, self.ssl_dir)
+                    return ctxt
+        return {}
+
+
 243
+class PostgresqlDBContext(OSContextGenerator):
+    interfaces = ['pgsql-db']
+
+    def __init__(self, database=None):
+        self.database = database
+
+    def __call__(self):
+        self.database = self.database or config('database')
+        if self.database is None:
+            log('Could not generate postgresql_db context. Missing required '
+                'charm config options. (database name)', level=ERROR)
+            raise OSContextError
+
+        ctxt = {}
+        for rid in relation_ids(self.interfaces[0]):
+            self.related = True
+            for unit in related_units(rid):
+                rel_host = relation_get('host', rid=rid, unit=unit)
+                rel_user = relation_get('user', rid=rid, unit=unit)
+                rel_passwd = relation_get('password', rid=rid, unit=unit)
+                ctxt = {'database_host': rel_host,
+                        'database': self.database,
+                        'database_user': rel_user,
+                        'database_password': rel_passwd,
+                        'database_type': 'postgresql'}
+                if self.context_complete(ctxt):
+                    return ctxt
+
+        return {}
+
+
 274
+def db_ssl(rdata, ctxt, ssl_dir):
+    if 'ssl_ca' in rdata and ssl_dir:
+        ca_path = os.path.join(ssl_dir, 'db-client.ca')
+        with open(ca_path, 'w') as fh:
+            fh.write(b64decode(rdata['ssl_ca']))
+
+        ctxt['database_ssl_ca'] = ca_path
+    elif 'ssl_ca' in rdata:
+        log("Charm not setup for ssl support but ssl ca found", level=INFO)
+        return ctxt
+
+    if 'ssl_cert' in rdata:
+        cert_path = os.path.join(
+            ssl_dir, 'db-client.cert')
+        if not os.path.exists(cert_path):
+            log("Waiting 1m for ssl client cert validity", level=INFO)
+            time.sleep(60)
+
+        with open(cert_path, 'w') as fh:
+            fh.write(b64decode(rdata['ssl_cert']))
+
+        ctxt['database_ssl_cert'] = cert_path
+        key_path = os.path.join(ssl_dir, 'db-client.key')
+        with open(key_path, 'w') as fh:
+            fh.write(b64decode(rdata['ssl_key']))
+
+        ctxt['database_ssl_key'] = key_path
+
+    return ctxt
+
+
 305
+class IdentityServiceContext(OSContextGenerator):
+
+    def __init__(self, service=None, service_user=None, rel_name='identity-service'):
+        self.service = service
+        self.service_user = service_user
+        self.rel_name = rel_name
+        self.interfaces = [self.rel_name]
+
+    def __call__(self):
+        log('Generating template context for ' + self.rel_name, level=DEBUG)
+        ctxt = {}
+
+        if self.service and self.service_user:
+            # This is required for pki token signing if we don't want /tmp to
+            # be used.
+            cachedir = '/var/cache/%s' % (self.service)
+            if not os.path.isdir(cachedir):
+                log("Creating service cache dir %s" % (cachedir), level=DEBUG)
+                mkdir(path=cachedir, owner=self.service_user,
+                      group=self.service_user, perms=0o700)
+
+            ctxt['signing_dir'] = cachedir
+
+        for rid in relation_ids(self.rel_name):
+            self.related = True
+            for unit in related_units(rid):
+                rdata = relation_get(rid=rid, unit=unit)
+                serv_host = rdata.get('service_host')
+                serv_host = format_ipv6_addr(serv_host) or serv_host
+                auth_host = rdata.get('auth_host')
+                auth_host = format_ipv6_addr(auth_host) or auth_host
+                svc_protocol = rdata.get('service_protocol') or 'http'
+                auth_protocol = rdata.get('auth_protocol') or 'http'
+                api_version = rdata.get('api_version') or '2.0'
+                ctxt.update({'service_port': rdata.get('service_port'),
+                             'service_host': serv_host,
+                             'auth_host': auth_host,
+                             'auth_port': rdata.get('auth_port'),
+                             'admin_tenant_name': rdata.get('service_tenant'),
+                             'admin_user': rdata.get('service_username'),
+                             'admin_password': rdata.get('service_password'),
+                             'service_protocol': svc_protocol,
+                             'auth_protocol': auth_protocol,
+                             'api_version': api_version})
+
+                if self.context_complete(ctxt):
+                    # NOTE(jamespage) this is required for >= icehouse
+                    # so a missing value just indicates keystone needs
+                    # upgrading
+                    ctxt['admin_tenant_id'] = rdata.get('service_tenant_id')
+                    return ctxt
+
+        return {}
+
+
 360
+class AMQPContext(OSContextGenerator):
+
+    def __init__(self, ssl_dir=None, rel_name='amqp', relation_prefix=None):
+        self.ssl_dir = ssl_dir
+        self.rel_name = rel_name
+        self.relation_prefix = relation_prefix
+        self.interfaces = [rel_name]
+
+    def __call__(self):
+        log('Generating template context for amqp', level=DEBUG)
+        conf = config()
+        if self.relation_prefix:
+            user_setting = '%s-rabbit-user' % (self.relation_prefix)
+            vhost_setting = '%s-rabbit-vhost' % (self.relation_prefix)
+        else:
+            user_setting = 'rabbit-user'
+            vhost_setting = 'rabbit-vhost'
+
+        try:
+            username = conf[user_setting]
+            vhost = conf[vhost_setting]
+        except KeyError as e:
+            log('Could not generate amqp context. Missing required charm '
+                'config options: %s.' % e, level=ERROR)
+            raise OSContextError
+
+        ctxt = {}
+        for rid in relation_ids(self.rel_name):
+            ha_vip_only = False
+            self.related = True
+            for unit in related_units(rid):
+                if relation_get('clustered', rid=rid, unit=unit):
+                    ctxt['clustered'] = True
+                    vip = relation_get('vip', rid=rid, unit=unit)
+                    vip = format_ipv6_addr(vip) or vip
+                    ctxt['rabbitmq_host'] = vip
+                else:
+                    host = relation_get('private-address', rid=rid, unit=unit)
+                    host = format_ipv6_addr(host) or host
+                    ctxt['rabbitmq_host'] = host
+
+                ctxt.update({
+                    'rabbitmq_user': username,
+                    'rabbitmq_password': relation_get('password', rid=rid,
+                                                      unit=unit),
+                    'rabbitmq_virtual_host': vhost,
+                })
+
+                ssl_port = relation_get('ssl_port', rid=rid, unit=unit)
+                if ssl_port:
+                    ctxt['rabbit_ssl_port'] = ssl_port
+
+                ssl_ca = relation_get('ssl_ca', rid=rid, unit=unit)
+                if ssl_ca:
+                    ctxt['rabbit_ssl_ca'] = ssl_ca
+
+                if relation_get('ha_queues', rid=rid, unit=unit) is not None:
+                    ctxt['rabbitmq_ha_queues'] = True
+
+                ha_vip_only = relation_get('ha-vip-only',
+                                           rid=rid, unit=unit) is not None
+
+                if self.context_complete(ctxt):
+                    if 'rabbit_ssl_ca' in ctxt:
+                        if not self.ssl_dir:
+                            log("Charm not setup for ssl support but ssl ca "
+                                "found", level=INFO)
+                            break
+
+                        ca_path = os.path.join(
+                            self.ssl_dir, 'rabbit-client-ca.pem')
+                        with open(ca_path, 'w') as fh:
+                            fh.write(b64decode(ctxt['rabbit_ssl_ca']))
+                            ctxt['rabbit_ssl_ca'] = ca_path
+
+                    # Sufficient information found = break out!
+                    break
+
+            # Used for active/active rabbitmq >= grizzly
+            if (('clustered' not in ctxt or ha_vip_only) and
+                    len(related_units(rid)) > 1):
+                rabbitmq_hosts = []
+                for unit in related_units(rid):
+                    host = relation_get('private-address', rid=rid, unit=unit)
+                    host = format_ipv6_addr(host) or host
+                    rabbitmq_hosts.append(host)
+
+                ctxt['rabbitmq_hosts'] = ','.join(sorted(rabbitmq_hosts))
+
+        oslo_messaging_flags = conf.get('oslo-messaging-flags', None)
+        if oslo_messaging_flags:
+            ctxt['oslo_messaging_flags'] = config_flags_parser(
+                oslo_messaging_flags)
+
+        if not self.complete:
+            return {}
+
+        return ctxt
+
+
 460
+class CephContext(OSContextGenerator):
+    """Generates context for /etc/ceph/ceph.conf templates."""
+    interfaces = ['ceph']
+
+    def __call__(self):
+        if not relation_ids('ceph'):
+            return {}
+
+        log('Generating template context for ceph', level=DEBUG)
+        mon_hosts = []
+        ctxt = {
+            'use_syslog': str(config('use-syslog')).lower()
+        }
+        for rid in relation_ids('ceph'):
+            for unit in related_units(rid):
+                if not ctxt.get('auth'):
+                    ctxt['auth'] = relation_get('auth', rid=rid, unit=unit)
+                if not ctxt.get('key'):
+                    ctxt['key'] = relation_get('key', rid=rid, unit=unit)
+                ceph_pub_addr = relation_get('ceph-public-address', rid=rid,
+                                             unit=unit)
+                unit_priv_addr = relation_get('private-address', rid=rid,
+                                              unit=unit)
+                ceph_addr = ceph_pub_addr or unit_priv_addr
+                ceph_addr = format_ipv6_addr(ceph_addr) or ceph_addr
+                mon_hosts.append(ceph_addr)
+
+        ctxt['mon_hosts'] = ' '.join(sorted(mon_hosts))
+
+        if not os.path.isdir('/etc/ceph'):
+            os.mkdir('/etc/ceph')
+
+        if not self.context_complete(ctxt):
+            return {}
+
+        ensure_packages(['ceph-common'])
+        return ctxt
+
+
 499
+class HAProxyContext(OSContextGenerator):
+    """Provides half a context for the haproxy template, which describes
+    all peers to be included in the cluster.  Each charm needs to include
+    its own context generator that describes the port mapping.
+    """
+    interfaces = ['cluster']
+
+    def __init__(self, singlenode_mode=False):
+        self.singlenode_mode = singlenode_mode
+
+    def __call__(self):
+        if not relation_ids('cluster') and not self.singlenode_mode:
+            return {}
+
+        if config('prefer-ipv6'):
+            addr = get_ipv6_addr(exc_list=[config('vip')])[0]
+        else:
+            addr = get_host_ip(unit_get('private-address'))
+
+        l_unit = local_unit().replace('/', '-')
+        cluster_hosts = {}
+
+        # NOTE(jamespage): build out map of configured network endpoints
+        # and associated backends
+        for addr_type in ADDRESS_TYPES:
+            cfg_opt = 'os-{}-network'.format(addr_type)
+            laddr = get_address_in_network(config(cfg_opt))
+            if laddr:
+                netmask = get_netmask_for_address(laddr)
+                cluster_hosts[laddr] = {'network': "{}/{}".format(laddr,
+                                                                  netmask),
+                                        'backends': {l_unit: laddr}}
+                for rid in relation_ids('cluster'):
+                    for unit in related_units(rid):
+                        _laddr = relation_get('{}-address'.format(addr_type),
+                                              rid=rid, unit=unit)
+                        if _laddr:
+                            _unit = unit.replace('/', '-')
+                            cluster_hosts[laddr]['backends'][_unit] = _laddr
+
+        # NOTE(jamespage) add backend based on private address - this
+        # will either be the only backend or the fallback if no acls
+        # match in the frontend
+        cluster_hosts[addr] = {}
+        netmask = get_netmask_for_address(addr)
+        cluster_hosts[addr] = {'network': "{}/{}".format(addr, netmask),
+                               'backends': {l_unit: addr}}
+        for rid in relation_ids('cluster'):
+            for unit in related_units(rid):
+                _laddr = relation_get('private-address',
+                                      rid=rid, unit=unit)
+                if _laddr:
+                    _unit = unit.replace('/', '-')
+                    cluster_hosts[addr]['backends'][_unit] = _laddr
+
+        ctxt = {
+            'frontends': cluster_hosts,
+            'default_backend': addr
+        }
+
+        if config('haproxy-server-timeout'):
+            ctxt['haproxy_server_timeout'] = config('haproxy-server-timeout')
+
+        if config('haproxy-client-timeout'):
+            ctxt['haproxy_client_timeout'] = config('haproxy-client-timeout')
+
+        if config('haproxy-queue-timeout'):
+            ctxt['haproxy_queue_timeout'] = config('haproxy-queue-timeout')
+
+        if config('haproxy-connect-timeout'):
+            ctxt['haproxy_connect_timeout'] = config('haproxy-connect-timeout')
+
+        if config('prefer-ipv6'):
+            ctxt['ipv6'] = True
+            ctxt['local_host'] = 'ip6-localhost'
+            ctxt['haproxy_host'] = '::'
+        else:
+            ctxt['local_host'] = '127.0.0.1'
+            ctxt['haproxy_host'] = '0.0.0.0'
+
+        ctxt['stat_port'] = '8888'
+
+        db = kv()
+        ctxt['stat_password'] = db.get('stat-password')
+        if not ctxt['stat_password']:
+            ctxt['stat_password'] = db.set('stat-password',
+                                           pwgen(32))
+            db.flush()
+
+        for frontend in cluster_hosts:
+            if (len(cluster_hosts[frontend]['backends']) > 1 or
+                    self.singlenode_mode):
+                # Enable haproxy when we have enough peers.
+                log('Ensuring haproxy enabled in /etc/default/haproxy.',
+                    level=DEBUG)
+                with open('/etc/default/haproxy', 'w') as out:
+                    out.write('ENABLED=1\n')
+
+                return ctxt
+
+        log('HAProxy context is incomplete, this unit has no peers.',
+            level=INFO)
+        return {}
+
+
 604
+class ImageServiceContext(OSContextGenerator):
+    interfaces = ['image-service']
+
+    def __call__(self):
+        """Obtains the glance API server from the image-service relation.
+        Useful in nova and cinder (currently).
+        """
+        log('Generating template context for image-service.', level=DEBUG)
+        rids = relation_ids('image-service')
+        if not rids:
+            return {}
+
+        for rid in rids:
+            for unit in related_units(rid):
+                api_server = relation_get('glance-api-server',
+                                          rid=rid, unit=unit)
+                if api_server:
+                    return {'glance_api_servers': api_server}
+
+        log("ImageService context is incomplete. Missing required relation "
+            "data.", level=INFO)
+        return {}
+
+
 628
+class ApacheSSLContext(OSContextGenerator):
+    """Generates a context for an apache vhost configuration that configures
+    HTTPS reverse proxying for one or many endpoints.  Generated context
+    looks something like::
+
+        {
+            'namespace': 'cinder',
+            'private_address': 'iscsi.mycinderhost.com',
+            'endpoints': [(8776, 8766), (8777, 8767)]
+        }
+
+    The endpoints list consists of tuples mapping external ports
+    to internal ports.
+    """
+    interfaces = ['https']
+
+    # charms should inherit this context and set external ports
+    # and service namespace accordingly.
+    external_ports = []
+    service_namespace = None
+
+    def enable_modules(self):
+        cmd = ['a2enmod', 'ssl', 'proxy', 'proxy_http']
+        check_call(cmd)
+
+    def configure_cert(self, cn=None):
+        ssl_dir = os.path.join('/etc/apache2/ssl/', self.service_namespace)
+        mkdir(path=ssl_dir)
+        cert, key = get_cert(cn)
+        if cn:
+            cert_filename = 'cert_{}'.format(cn)
+            key_filename = 'key_{}'.format(cn)
+        else:
+            cert_filename = 'cert'
+            key_filename = 'key'
+
+        write_file(path=os.path.join(ssl_dir, cert_filename),
+                   content=b64decode(cert))
+        write_file(path=os.path.join(ssl_dir, key_filename),
+                   content=b64decode(key))
+
+    def configure_ca(self):
+        ca_cert = get_ca_cert()
+        if ca_cert:
+            install_ca_cert(b64decode(ca_cert))
+
+    def canonical_names(self):
+        """Figure out which canonical names clients will use to access this
+        service.
+        """
+        cns = []
+        for r_id in relation_ids('identity-service'):
+            for unit in related_units(r_id):
+                rdata = relation_get(rid=r_id, unit=unit)
+                for k in rdata:
+                    if k.startswith('ssl_key_'):
+                        cns.append(k.lstrip('ssl_key_'))
+
+        return sorted(list(set(cns)))
+
+    def get_network_addresses(self):
+        """For each network configured, return corresponding address and vip
+           (if available).
+
+        Returns a list of tuples of the form:
+
+            [(address_in_net_a, vip_in_net_a),
+             (address_in_net_b, vip_in_net_b),
+             ...]
+
+            or, if no vip(s) available:
+
+            [(address_in_net_a, address_in_net_a),
+             (address_in_net_b, address_in_net_b),
+             ...]
+        """
+        addresses = []
+        if config('vip'):
+            vips = config('vip').split()
+        else:
+            vips = []
+
+        for net_type in ['os-internal-network', 'os-admin-network',
+                         'os-public-network']:
+            addr = get_address_in_network(config(net_type),
+                                          unit_get('private-address'))
+            if len(vips) > 1 and is_clustered():
+                if not config(net_type):
+                    log("Multiple networks configured but net_type "
+                        "is None (%s)." % net_type, level=WARNING)
+                    continue
+
+                for vip in vips:
+                    if is_address_in_network(config(net_type), vip):
+                        addresses.append((addr, vip))
+                        break
+
+            elif is_clustered() and config('vip'):
+                addresses.append((addr, config('vip')))
+            else:
+                addresses.append((addr, addr))
+
+        return sorted(addresses)
+
+    def __call__(self):
+        if isinstance(self.external_ports, six.string_types):
+            self.external_ports = [self.external_ports]
+
+        if not self.external_ports or not https():
+            return {}
+
+        self.configure_ca()
+        self.enable_modules()
+
+        ctxt = {'namespace': self.service_namespace,
+                'endpoints': [],
+                'ext_ports': []}
+
+        cns = self.canonical_names()
+        if cns:
+            for cn in cns:
+                self.configure_cert(cn)
+        else:
+            # Expect cert/key provided in config (currently assumed that ca
+            # uses ip for cn)
+            cn = resolve_address(endpoint_type=INTERNAL)
+            self.configure_cert(cn)
+
+        addresses = self.get_network_addresses()
+        for address, endpoint in sorted(set(addresses)):
+            for api_port in self.external_ports:
+                ext_port = determine_apache_port(api_port,
+                                                 singlenode_mode=True)
+                int_port = determine_api_port(api_port, singlenode_mode=True)
+                portmap = (address, endpoint, int(ext_port), int(int_port))
+                ctxt['endpoints'].append(portmap)
+                ctxt['ext_ports'].append(int(ext_port))
+
+        ctxt['ext_ports'] = sorted(list(set(ctxt['ext_ports'])))
+        return ctxt
+
+
 769
+class NeutronContext(OSContextGenerator):
+    interfaces = []
+
+    @property
+    def plugin(self):
+        return None
+
+    @property
+    def network_manager(self):
+        return None
+
+    @property
+    def packages(self):
+        return neutron_plugin_attribute(self.plugin, 'packages',
+                                        self.network_manager)
+
+    @property
+    def neutron_security_groups(self):
+        return None
+
+    def _ensure_packages(self):
+        for pkgs in self.packages:
+            ensure_packages(pkgs)
+
+    def _save_flag_file(self):
+        if self.network_manager == 'quantum':
+            _file = '/etc/nova/quantum_plugin.conf'
+        else:
+            _file = '/etc/nova/neutron_plugin.conf'
+
+        with open(_file, 'wb') as out:
+            out.write(self.plugin + '\n')
+
+    def ovs_ctxt(self):
+        driver = neutron_plugin_attribute(self.plugin, 'driver',
+                                          self.network_manager)
+        config = neutron_plugin_attribute(self.plugin, 'config',
+                                          self.network_manager)
+        ovs_ctxt = {'core_plugin': driver,
+                    'neutron_plugin': 'ovs',
+                    'neutron_security_groups': self.neutron_security_groups,
+                    'local_ip': unit_private_ip(),
+                    'config': config}
+
+        return ovs_ctxt
+
+    def nuage_ctxt(self):
+        driver = neutron_plugin_attribute(self.plugin, 'driver',
+                                          self.network_manager)
+        config = neutron_plugin_attribute(self.plugin, 'config',
+                                          self.network_manager)
+        nuage_ctxt = {'core_plugin': driver,
+                      'neutron_plugin': 'vsp',
+                      'neutron_security_groups': self.neutron_security_groups,
+                      'local_ip': unit_private_ip(),
+                      'config': config}
+
+        return nuage_ctxt
+
+    def nvp_ctxt(self):
+        driver = neutron_plugin_attribute(self.plugin, 'driver',
+                                          self.network_manager)
+        config = neutron_plugin_attribute(self.plugin, 'config',
+                                          self.network_manager)
+        nvp_ctxt = {'core_plugin': driver,
+                    'neutron_plugin': 'nvp',
+                    'neutron_security_groups': self.neutron_security_groups,
+                    'local_ip': unit_private_ip(),
+                    'config': config}
+
+        return nvp_ctxt
+
+    def n1kv_ctxt(self):
+        driver = neutron_plugin_attribute(self.plugin, 'driver',
+                                          self.network_manager)
+        n1kv_config = neutron_plugin_attribute(self.plugin, 'config',
+                                               self.network_manager)
+        n1kv_user_config_flags = config('n1kv-config-flags')
+        restrict_policy_profiles = config('n1kv-restrict-policy-profiles')
+        n1kv_ctxt = {'core_plugin': driver,
+                     'neutron_plugin': 'n1kv',
+                     'neutron_security_groups': self.neutron_security_groups,
+                     'local_ip': unit_private_ip(),
+                     'config': n1kv_config,
+                     'vsm_ip': config('n1kv-vsm-ip'),
+                     'vsm_username': config('n1kv-vsm-username'),
+                     'vsm_password': config('n1kv-vsm-password'),
+                     'restrict_policy_profiles': restrict_policy_profiles}
+
+        if n1kv_user_config_flags:
+            flags = config_flags_parser(n1kv_user_config_flags)
+            n1kv_ctxt['user_config_flags'] = flags
+
+        return n1kv_ctxt
+
+    def calico_ctxt(self):
+        driver = neutron_plugin_attribute(self.plugin, 'driver',
+                                          self.network_manager)
+        config = neutron_plugin_attribute(self.plugin, 'config',
+                                          self.network_manager)
+        calico_ctxt = {'core_plugin': driver,
+                       'neutron_plugin': 'Calico',
+                       'neutron_security_groups': self.neutron_security_groups,
+                       'local_ip': unit_private_ip(),
+                       'config': config}
+
+        return calico_ctxt
+
+    def neutron_ctxt(self):
+        if https():
+            proto = 'https'
+        else:
+            proto = 'http'
+
+        if is_clustered():
+            host = config('vip')
+        else:
+            host = unit_get('private-address')
+
+        ctxt = {'network_manager': self.network_manager,
+                'neutron_url': '%s://%s:%s' % (proto, host, '9696')}
+        return ctxt
+
+    def pg_ctxt(self):
+        driver = neutron_plugin_attribute(self.plugin, 'driver',
+                                          self.network_manager)
+        config = neutron_plugin_attribute(self.plugin, 'config',
+                                          self.network_manager)
+        ovs_ctxt = {'core_plugin': driver,
+                    'neutron_plugin': 'plumgrid',
+                    'neutron_security_groups': self.neutron_security_groups,
+                    'local_ip': unit_private_ip(),
+                    'config': config}
+        return ovs_ctxt
+
+    def midonet_ctxt(self):
+        driver = neutron_plugin_attribute(self.plugin, 'driver',
+                                          self.network_manager)
+        midonet_config = neutron_plugin_attribute(self.plugin, 'config',
+                                                  self.network_manager)
+        mido_ctxt = {'core_plugin': driver,
+                     'neutron_plugin': 'midonet',
+                     'neutron_security_groups': self.neutron_security_groups,
+                     'local_ip': unit_private_ip(),
+                     'config': midonet_config}
+
+        return mido_ctxt
+
+    def __call__(self):
+        if self.network_manager not in ['quantum', 'neutron']:
+            return {}
+
+        if not self.plugin:
+            return {}
+
+        ctxt = self.neutron_ctxt()
+
+        if self.plugin == 'ovs':
+            ctxt.update(self.ovs_ctxt())
+        elif self.plugin in ['nvp', 'nsx']:
+            ctxt.update(self.nvp_ctxt())
+        elif self.plugin == 'n1kv':
+            ctxt.update(self.n1kv_ctxt())
+        elif self.plugin == 'Calico':
+            ctxt.update(self.calico_ctxt())
+        elif self.plugin == 'vsp':
+            ctxt.update(self.nuage_ctxt())
+        elif self.plugin == 'plumgrid':
+            ctxt.update(self.pg_ctxt())
+        elif self.plugin == 'midonet':
+            ctxt.update(self.midonet_ctxt())
+
+        alchemy_flags = config('neutron-alchemy-flags')
+        if alchemy_flags:
+            flags = config_flags_parser(alchemy_flags)
+            ctxt['neutron_alchemy_flags'] = flags
+
+        self._save_flag_file()
+        return ctxt
+
+
 950
+class NeutronPortContext(OSContextGenerator):
+
+    def resolve_ports(self, ports):
+        """Resolve NICs not yet bound to bridge(s)
+
+        If hwaddress provided then returns resolved hwaddress otherwise NIC.
+        """
+        if not ports:
+            return None
+
+        hwaddr_to_nic = {}
+        hwaddr_to_ip = {}
+        for nic in list_nics():
+            # Ignore virtual interfaces (bond masters will be identified from
+            # their slaves)
+            if not is_phy_iface(nic):
+                continue
+
+            _nic = get_bond_master(nic)
+            if _nic:
+                log("Replacing iface '%s' with bond master '%s'" % (nic, _nic),
+                    level=DEBUG)
+                nic = _nic
+
+            hwaddr = get_nic_hwaddr(nic)
+            hwaddr_to_nic[hwaddr] = nic
+            addresses = get_ipv4_addr(nic, fatal=False)
+            addresses += get_ipv6_addr(iface=nic, fatal=False)
+            hwaddr_to_ip[hwaddr] = addresses
+
+        resolved = []
+        mac_regex = re.compile(r'([0-9A-F]{2}[:-]){5}([0-9A-F]{2})', re.I)
+        for entry in ports:
+            if re.match(mac_regex, entry):
+                # NIC is in known NICs and does NOT have an IP address
+                if entry in hwaddr_to_nic and not hwaddr_to_ip[entry]:
+                    # If the nic is part of a bridge then don't use it
+                    if is_bridge_member(hwaddr_to_nic[entry]):
+                        continue
+
+                    # Entry is a MAC address for a valid interface that doesn't
+                    # have an IP address assigned yet.
+                    resolved.append(hwaddr_to_nic[entry])
+            else:
+                # If the passed entry is not a MAC address, assume it's a valid
+                # interface, and that the user put it there on purpose (we can
+                # trust it to be the real external network).
+                resolved.append(entry)
+
+        # Ensure no duplicates
+        return list(set(resolved))
+
+
1003
+class OSConfigFlagContext(OSContextGenerator):
+    """Provides support for user-defined config flags.
+
+    Users can define a comma-separated list of key=value pairs
+    in the charm configuration and apply them at any point in
+    any file by using a template flag.
+
+    Sometimes users might want config flags inserted within a
+    specific section so this class allows users to specify the
+    template flag name, allowing for multiple template flags
+    (sections) within the same context.
+
+    NOTE: the value of config-flags may be a comma-separated list of
+          key=value pairs and some Openstack config files support
+          comma-separated lists as values.
+    """
+
+    def __init__(self, charm_flag='config-flags',
+                 template_flag='user_config_flags'):
+        """
+        :param charm_flag: config flags in charm configuration.
+        :param template_flag: insert point for user-defined flags in template
+                              file.
+        """
+        super(OSConfigFlagContext, self).__init__()
+        self._charm_flag = charm_flag
+        self._template_flag = template_flag
+
+    def __call__(self):
+        config_flags = config(self._charm_flag)
+        if not config_flags:
+            return {}
+
+        return {self._template_flag:
+                config_flags_parser(config_flags)}
+
+
+class LibvirtConfigFlagsContext(OSContextGenerator):
+    """
+    This context provides support for extending
+    the libvirt section through user-defined flags.
+    """
+    def __call__(self):
+        ctxt = {}
+        libvirt_flags = config('libvirt-flags')
+        if libvirt_flags:
+            ctxt['libvirt_flags'] = config_flags_parser(
+                libvirt_flags)
+        return ctxt
+
+
+class SubordinateConfigContext(OSContextGenerator):
+
+    """
+    Responsible for inspecting relations to subordinates that
+    may be exporting required config via a json blob.
+
+    The subordinate interface allows subordinates to export their
+    configuration requirements to the principal for multiple config
+    files and multiple services.  I.e., a subordinate that has interfaces
+    to both glance and nova may export the following yaml blob as json::
+
+        glance:
+            /etc/glance/glance-api.conf:
+                sections:
+                    DEFAULT:
+                        - [key1, value1]
+            /etc/glance/glance-registry.conf:
+                sections:
+                    MYSECTION:
+                        - [key2, value2]
+        nova:
+            /etc/nova/nova.conf:
+                sections:
+                    DEFAULT:
+                        - [key3, value3]
+
+
+    It is then up to the principal charms to subscribe this context to
+    the service+config file it is interested in.  Configuration data will
+    be available in the template context, in glance's case, as::
+
+        ctxt = {
+            ... other context ...
+            'subordinate_configuration': {
+                'DEFAULT': {
+                    'key1': 'value1',
+                },
+                'MYSECTION': {
+                    'key2': 'value2',
+                },
+            }
+        }
+    """
+
+    def __init__(self, service, config_file, interface):
+        """
+        :param service     : Service name key to query in any subordinate
+                             data found
+        :param config_file : Service's config file to query sections
+        :param interface   : Subordinate interface to inspect
+        """
+        self.config_file = config_file
+        if isinstance(service, list):
+            self.services = service
+        else:
+            self.services = [service]
+        if isinstance(interface, list):
+            self.interfaces = interface
+        else:
+            self.interfaces = [interface]
+
+    def __call__(self):
+        ctxt = {'sections': {}}
+        rids = []
+        for interface in self.interfaces:
+            rids.extend(relation_ids(interface))
+        for rid in rids:
+            for unit in related_units(rid):
+                sub_config = relation_get('subordinate_configuration',
+                                          rid=rid, unit=unit)
+                if sub_config and sub_config != '':
+                    try:
+                        sub_config = json.loads(sub_config)
+                    except:
+                        log('Could not parse JSON from '
+                            'subordinate_configuration setting from %s'
+                            % rid, level=ERROR)
+                        continue
+
+                    for service in self.services:
+                        if service not in sub_config:
+                            log('Found subordinate_configuration on %s but it '
+                                'contained nothing for %s service'
+                                % (rid, service), level=INFO)
+                            continue
+
+                        sub_config = sub_config[service]
+                        if self.config_file not in sub_config:
+                            log('Found subordinate_configuration on %s but it '
+                                'contained nothing for %s'
+                                % (rid, self.config_file), level=INFO)
+                            continue
+
+                        sub_config = sub_config[self.config_file]
+                        for k, v in six.iteritems(sub_config):
+                            if k == 'sections':
+                                for section, config_list in six.iteritems(v):
+                                    log("adding section '%s'" % (section),
+                                        level=DEBUG)
+                                    if ctxt[k].get(section):
+                                        ctxt[k][section].extend(config_list)
+                                    else:
+                                        ctxt[k][section] = config_list
+                            else:
+                                ctxt[k] = v
+        log("%d section(s) found" % (len(ctxt['sections'])), level=DEBUG)
+        return ctxt
+
+
+class LogLevelContext(OSContextGenerator):
+
+    def __call__(self):
+        ctxt = {}
+        ctxt['debug'] = \
+            False if config('debug') is None else config('debug')
+        ctxt['verbose'] = \
+            False if config('verbose') is None else config('verbose')
+
+        return ctxt
+
+
+class SyslogContext(OSContextGenerator):
+
+    def __call__(self):
+        ctxt = {'use_syslog': config('use-syslog')}
+        return ctxt
+
+
+class BindHostContext(OSContextGenerator):
+
+    def __call__(self):
+        if config('prefer-ipv6'):
+            return {'bind_host': '::'}
+        else:
+            return {'bind_host': '0.0.0.0'}
+
+
+class WorkerConfigContext(OSContextGenerator):
+
+    @property
+    def num_cpus(self):
+        # NOTE: use cpu_count if present (16.04 support)
+        if hasattr(psutil, 'cpu_count'):
+            return psutil.cpu_count()
+        else:
+            return psutil.NUM_CPUS
+
+    def __call__(self):
+        multiplier = config('worker-multiplier') or 0
+        count = int(self.num_cpus * multiplier)
+        if multiplier > 0 and count == 0:
+            count = 1
+        ctxt = {"workers": count}
+        return ctxt
+
+
+class ZeroMQContext(OSContextGenerator):
+    interfaces = ['zeromq-configuration']
+
+    def __call__(self):
+        ctxt = {}
+        if is_relation_made('zeromq-configuration', 'host'):
+            for rid in relation_ids('zeromq-configuration'):
+                for unit in related_units(rid):
+                    ctxt['zmq_nonce'] = relation_get('nonce', unit, rid)
+                    ctxt['zmq_host'] = relation_get('host', unit, rid)
+                    ctxt['zmq_redis_address'] = relation_get(
+                        'zmq_redis_address', unit, rid)
+
+        return ctxt
+
+
+class NotificationDriverContext(OSContextGenerator):
+
+    def __init__(self, zmq_relation='zeromq-configuration',
+                 amqp_relation='amqp'):
+        """
+        :param zmq_relation: Name of Zeromq relation to check
+        """
+        self.zmq_relation = zmq_relation
+        self.amqp_relation = amqp_relation
+
+    def __call__(self):
+        ctxt = {'notifications': 'False'}
+        if is_relation_made(self.amqp_relation):
+            ctxt['notifications'] = "True"
+
+        return ctxt
+
+
+class SysctlContext(OSContextGenerator):
+    """This context checks whether the 'sysctl' option is set in the charm
+    configuration and, if so, writes its contents to a sysctl.d file."""
+    def __call__(self):
+        sysctl_dict = config('sysctl')
+        if sysctl_dict:
+            sysctl_create(sysctl_dict,
+                          '/etc/sysctl.d/50-{0}.conf'.format(charm_name()))
+        return {'sysctl': sysctl_dict}
+
+
+class NeutronAPIContext(OSContextGenerator):
+    '''
+    Inspects the current neutron-plugin-api relation for neutron settings.
+    Returns defaults if it is not present.
+    '''
+    interfaces = ['neutron-plugin-api']
+
+    def __call__(self):
+        self.neutron_defaults = {
+            'l2_population': {
+                'rel_key': 'l2-population',
+                'default': False,
+            },
+            'overlay_network_type': {
+                'rel_key': 'overlay-network-type',
+                'default': 'gre',
+            },
+            'neutron_security_groups': {
+                'rel_key': 'neutron-security-groups',
+                'default': False,
+            },
+            'network_device_mtu': {
+                'rel_key': 'network-device-mtu',
+                'default': None,
+            },
+            'enable_dvr': {
+                'rel_key': 'enable-dvr',
+                'default': False,
+            },
+            'enable_l3ha': {
+                'rel_key': 'enable-l3ha',
+                'default': False,
+            },
+        }
+        ctxt = self.get_neutron_options({})
+        for rid in relation_ids('neutron-plugin-api'):
+            for unit in related_units(rid):
+                rdata = relation_get(rid=rid, unit=unit)
+                if 'l2-population' in rdata:
+                    ctxt.update(self.get_neutron_options(rdata))
+
+        return ctxt
+
+    def get_neutron_options(self, rdata):
+        settings = {}
+        for nkey in self.neutron_defaults.keys():
+            defv = self.neutron_defaults[nkey]['default']
+            rkey = self.neutron_defaults[nkey]['rel_key']
+            if rkey in rdata.keys():
+                if type(defv) is bool:
+                    settings[nkey] = bool_from_string(rdata[rkey])
+                else:
+                    settings[nkey] = rdata[rkey]
+            else:
+                settings[nkey] = defv
+        return settings
+
+
+class ExternalPortContext(NeutronPortContext):
+
+    def __call__(self):
+        ctxt = {}
+        ports = config('ext-port')
+        if ports:
+            ports = [p.strip() for p in ports.split()]
+            ports = self.resolve_ports(ports)
+            if ports:
+                ctxt = {"ext_port": ports[0]}
+                napi_settings = NeutronAPIContext()()
+                mtu = napi_settings.get('network_device_mtu')
+                if mtu:
+                    ctxt['ext_port_mtu'] = mtu
+
+        return ctxt
+
+
+class DataPortContext(NeutronPortContext):
+
+    def __call__(self):
+        ports = config('data-port')
+        if ports:
+            # Map of {port/mac:bridge}
+            portmap = parse_data_port_mappings(ports)
+            ports = portmap.keys()
+            # Resolve provided ports or mac addresses and filter out those
+            # already attached to a bridge.
+            resolved = self.resolve_ports(ports)
+            # FIXME: is this necessary?
+            normalized = {get_nic_hwaddr(port): port for port in resolved
+                          if port not in ports}
+            normalized.update({port: port for port in resolved
+                               if port in ports})
+            if resolved:
+                return {normalized[port]: bridge for port, bridge in
+                        six.iteritems(portmap) if port in normalized.keys()}
+
+        return None
+
+
+class PhyNICMTUContext(DataPortContext):
+
+    def __call__(self):
+        ctxt = {}
+        mappings = super(PhyNICMTUContext, self).__call__()
+        if mappings and mappings.keys():
+            ports = sorted(mappings.keys())
+            napi_settings = NeutronAPIContext()()
+            mtu = napi_settings.get('network_device_mtu')
+            all_ports = set()
+            # If any of the ports is a vlan device, its underlying device must
+            # have the mtu applied first.
+            for port in ports:
+                for lport in glob.glob("/sys/class/net/%s/lower_*" % port):
+                    lport = os.path.basename(lport)
+                    all_ports.add(lport.split('_')[1])
+
+            all_ports = list(all_ports)
+            all_ports.extend(ports)
+            if mtu:
+                ctxt["devs"] = '\\n'.join(all_ports)
+                ctxt['mtu'] = mtu
+
+        return ctxt
+
+
+class NetworkServiceContext(OSContextGenerator):
+
+    def __init__(self, rel_name='quantum-network-service'):
+        self.rel_name = rel_name
+        self.interfaces = [rel_name]
+
+    def __call__(self):
+        for rid in relation_ids(self.rel_name):
+            for unit in related_units(rid):
+                rdata = relation_get(rid=rid, unit=unit)
+                ctxt = {
+                    'keystone_host': rdata.get('keystone_host'),
+                    'service_port': rdata.get('service_port'),
+                    'auth_port': rdata.get('auth_port'),
+                    'service_tenant': rdata.get('service_tenant'),
+                    'service_username': rdata.get('service_username'),
+                    'service_password': rdata.get('service_password'),
+                    'quantum_host': rdata.get('quantum_host'),
+                    'quantum_port': rdata.get('quantum_port'),
+                    'quantum_url': rdata.get('quantum_url'),
+                    'region': rdata.get('region'),
+                    'service_protocol':
+                    rdata.get('service_protocol') or 'http',
+                    'auth_protocol':
+                    rdata.get('auth_protocol') or 'http',
+                    'api_version':
+                    rdata.get('api_version') or '2.0',
+                }
+                if self.context_complete(ctxt):
+                    return ctxt
+        return {}
+
+
+class InternalEndpointContext(OSContextGenerator):
+    """Internal endpoint context.
+
+    This context provides the endpoint type used for communication between
+    services, e.g. between Nova and Cinder internally. OpenStack uses public
+    endpoints by default, so this allows admins to optionally use internal
+    endpoints.
+    """
+    def __call__(self):
+        return {'use_internal_endpoints': config('use-internal-endpoints')}
+
+
+class AppArmorContext(OSContextGenerator):
+    """Base class for apparmor contexts."""
+
+    def __init__(self, profile_name=None):
+        self._ctxt = None
+        self.aa_profile = profile_name
+        self.aa_utils_packages = ['apparmor-utils']
+
+    @property
+    def ctxt(self):
+        if self._ctxt is not None:
+            return self._ctxt
+        self._ctxt = self._determine_ctxt()
+        return self._ctxt
+
+    def _determine_ctxt(self):
+        """
+        Validate that the aa-profile-mode setting is disable, enforce, or
+        complain.
+
+        :return ctxt: Dictionary of the apparmor profile or None
+        """
+        if config('aa-profile-mode') in ['disable', 'enforce', 'complain']:
+            ctxt = {'aa_profile_mode': config('aa-profile-mode'),
+                    'ubuntu_release': lsb_release()['DISTRIB_RELEASE']}
+            if self.aa_profile:
+                ctxt['aa_profile'] = self.aa_profile
+        else:
+            ctxt = None
+        return ctxt
+
+    def __call__(self):
+        return self.ctxt
+
+    def install_aa_utils(self):
+        """
+        Install packages required for apparmor configuration.
+        """
+        log("Installing apparmor utils.")
+        ensure_packages(self.aa_utils_packages)
+
+    def manually_disable_aa_profile(self):
+        """
+        Manually disable an apparmor profile.
+
+        If aa-profile-mode is set to disabled (default) this is required as the
+        template has been written but apparmor is not yet aware of the profile
+        and aa-disable aa-profile fails. Without this the profile would kick
+        into enforce mode on the next service restart.
+
+        """
+        profile_path = '/etc/apparmor.d'
+        disable_path = '/etc/apparmor.d/disable'
+        if not os.path.lexists(os.path.join(disable_path, self.aa_profile)):
+            os.symlink(os.path.join(profile_path, self.aa_profile),
+                       os.path.join(disable_path, self.aa_profile))
+
+    def setup_aa_profile(self):
+        """
+        Set up an apparmor profile.
+        The ctxt dictionary will contain the apparmor profile mode and
+        the apparmor profile name.
+        Makes calls out to aa-disable, aa-complain, or aa-enforce to set up
+        the apparmor profile.
+        """
+        self()
+        if not self.ctxt:
+            log("Not enabling apparmor Profile")
+            return
+        self.install_aa_utils()
+        cmd = ['aa-{}'.format(self.ctxt['aa_profile_mode'])]
+        cmd.append(self.ctxt['aa_profile'])
+        log("Setting up the apparmor profile for {} in {} mode."
+            "".format(self.ctxt['aa_profile'], self.ctxt['aa_profile_mode']))
+        try:
+            check_call(cmd)
+        except CalledProcessError as e:
+            # If aa-profile-mode is set to disabled (default) manual
+            # disabling is required as the template has been written but
+            # apparmor is not yet aware of the profile and aa-disable
+            # aa-profile fails. If aa-disable learns to read profile files
+            # first this can be removed.
+            if self.ctxt['aa_profile_mode'] == 'disable':
+                log("Manually disabling the apparmor profile for {}."
+                    "".format(self.ctxt['aa_profile']))
+                self.manually_disable_aa_profile()
+                return
+            status_set('blocked', "Apparmor profile {} failed to be set to {}."
+                                  "".format(self.ctxt['aa_profile'],
+                                            self.ctxt['aa_profile_mode']))
+            raise e
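For reviewers tracing the subordinate_configuration merge in SubordinateConfigContext above, a minimal standalone sketch of the section-merging logic may help. It uses a hypothetical relation blob and a hypothetical helper name (merge_sections); the real context iterates relations and units via Juju hooks, which are omitted here.

```python
import json

# Hypothetical JSON blob a subordinate might export over the relation,
# following the structure documented in SubordinateConfigContext.
blob = json.dumps({
    "glance": {
        "/etc/glance/glance-api.conf": {
            "sections": {"DEFAULT": [["key1", "value1"]]}
        }
    }
})


def merge_sections(raw, service, config_file):
    """Mimic SubordinateConfigContext.__call__ for a single unit's data."""
    ctxt = {"sections": {}}
    sub_config = json.loads(raw)
    if service not in sub_config:
        return ctxt
    file_config = sub_config[service].get(config_file, {})
    # Merge each section's key/value pairs, extending existing sections.
    for section, config_list in file_config.get("sections", {}).items():
        ctxt["sections"].setdefault(section, []).extend(config_list)
    return ctxt


result = merge_sections(blob, "glance", "/etc/glance/glance-api.conf")
print(result)  # {'sections': {'DEFAULT': [['key1', 'value1']]}}
```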

hooks/charmhelpers/contrib/openstack/exceptions.py

---
+++ hooks/charmhelpers/contrib/openstack/exceptions.py
@@ -0,0 +1,21 @@
+# Copyright 2016 Canonical Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+class OSContextError(Exception):
+    """Raised when an error occurs during context generation.
+
+    This exception is principally used in contrib.openstack.context
+    """
+    pass

hooks/charmhelpers/contrib/openstack/files/__init__.py

---
+++ hooks/charmhelpers/contrib/openstack/files/__init__.py
@@ -0,0 +1,16 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# dummy __init__.py to fool syncer into thinking this is a syncable python
+# module

hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh

---
+++ hooks/charmhelpers/contrib/openstack/files/check_haproxy.sh
@@ -0,0 +1,34 @@
+#!/bin/bash
+#--------------------------------------------
+# This file is managed by Juju
+#--------------------------------------------
+#
+# Copyright 2009,2012 Canonical Ltd.
+# Author: Tom Haddon
+
+CRITICAL=0
+NOTACTIVE=''
+LOGFILE=/var/log/nagios/check_haproxy.log
+AUTH=$(grep -r "stats auth" /etc/haproxy | awk 'NR==1{print $4}')
+
+typeset -i N_INSTANCES=0
+for appserver in $(awk '/^\s+server/{print $2}' /etc/haproxy/haproxy.cfg)
+do
+    N_INSTANCES=N_INSTANCES+1
+    output=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' --regex=",${appserver},.*,UP.*" -e ' 200 OK')
+    if [ $? != 0 ]; then
+        date >> $LOGFILE
+        echo $output >> $LOGFILE
+        /usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v | grep ",${appserver}," >> $LOGFILE 2>&1
+        CRITICAL=1
+        NOTACTIVE="${NOTACTIVE} $appserver"
+    fi
+done
+
+if [ $CRITICAL = 1 ]; then
+    echo "CRITICAL:${NOTACTIVE}"
+    exit 2
+fi
+
+echo "OK: All haproxy instances ($N_INSTANCES) looking good"
+exit 0

hooks/charmhelpers/contrib/openstack/files/check_haproxy_queue_depth.sh

---
+++ hooks/charmhelpers/contrib/openstack/files/check_haproxy_queue_depth.sh
@@ -0,0 +1,30 @@
+#!/bin/bash
+#--------------------------------------------
+# This file is managed by Juju
+#--------------------------------------------
+#
+# Copyright 2009,2012 Canonical Ltd.
+# Author: Tom Haddon
+
+# These should be config options at some stage
+CURRQthrsh=0
+MAXQthrsh=100
+
+AUTH=$(grep -r "stats auth" /etc/haproxy | head -1 | awk '{print $4}')
+
+HAPROXYSTATS=$(/usr/lib/nagios/plugins/check_http -a ${AUTH} -I 127.0.0.1 -p 8888 -u '/;csv' -v)
+
+for BACKEND in $(echo $HAPROXYSTATS| xargs -n1 | grep BACKEND | awk -F , '{print $1}')
+do
+    CURRQ=$(echo "$HAPROXYSTATS" | grep $BACKEND | grep BACKEND | cut -d , -f 3)
+    MAXQ=$(echo "$HAPROXYSTATS"  | grep $BACKEND | grep BACKEND | cut -d , -f 4)
+
+    if [[ $CURRQ -gt $CURRQthrsh || $MAXQ -gt $MAXQthrsh ]] ; then
+        echo "CRITICAL: queue depth for $BACKEND - CURRENT:$CURRQ MAX:$MAXQ"
+        exit 2
+    fi
+done
+
+echo "OK: All haproxy queue depths looking good"
+exit 0
+

hooks/charmhelpers/contrib/openstack/ha/__init__.py

---
+++ hooks/charmhelpers/contrib/openstack/ha/__init__.py
@@ -0,0 +1,13 @@
+# Copyright 2016 Canonical Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.

hooks/charmhelpers/contrib/openstack/ha/utils.py

  1
--- 
  2
+++ hooks/charmhelpers/contrib/openstack/ha/utils.py
  3
@@ -0,0 +1,128 @@
  4
+# Copyright 2014-2016 Canonical Limited.
  5
+#
  6
+# Licensed under the Apache License, Version 2.0 (the "License");
  7
+# you may not use this file except in compliance with the License.
  8
+# You may obtain a copy of the License at
  9
+#
 10
+#  http://www.apache.org/licenses/LICENSE-2.0
 11
+#
 12
+# Unless required by applicable law or agreed to in writing, software
 13
+# distributed under the License is distributed on an "AS IS" BASIS,
 14
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 15
+# See the License for the specific language governing permissions and
 16
+# limitations under the License.
 17
+
 18
+#
 19
+# Copyright 2016 Canonical Ltd.
 20
+#
 21
+# Authors:
 22
+#  Openstack Charmers <
 23
+#
 24
+
 25
+"""
 26
+Helpers for high availability.
 27
+"""
 28
+
 29
+import re
 30
+
 31
+from charmhelpers.core.hookenv import (
 32
+    log,
 33
+    relation_set,
 34
+    charm_name,
 35
+    config,
 36
+    status_set,
 37
+    DEBUG,
 38
+)
 39
+
 40
+from charmhelpers.core.host import (
 41
+    lsb_release
 42
+)
 43
+
 44
+from charmhelpers.contrib.openstack.ip import (
 45
+    resolve_address,
 46
+)
 47
+
 48
+
 49
+class DNSHAException(Exception):
 50
+    """Raised when an error occurs setting up DNS HA
 51
+    """
 52
+
+    pass
+
+
+def update_dns_ha_resource_params(resources, resource_params,
+                                  relation_id=None,
+                                  crm_ocf='ocf:maas:dns'):
+    """ Check for os-*-hostname settings and update resource dictionaries for
+    the HA relation.
+
+    @param resources: Pointer to dictionary of resources.
+                      Usually instantiated in ha_joined().
+    @param resource_params: Pointer to dictionary of resource parameters.
+                            Usually instantiated in ha_joined()
+    @param relation_id: Relation ID of the ha relation
+    @param crm_ocf: Corosync Open Cluster Framework resource agent to use for
+                    DNS HA
+    """
+
+    # Validate the charm environment for DNS HA
+    assert_charm_supports_dns_ha()
+
+    settings = ['os-admin-hostname', 'os-internal-hostname',
+                'os-public-hostname', 'os-access-hostname']
+
+    # Check which DNS settings are set and update dictionaries
+    hostname_group = []
+    for setting in settings:
+        hostname = config(setting)
+        if hostname is None:
+            log('DNS HA: Hostname setting {} is None. Ignoring.'
+                ''.format(setting),
+                DEBUG)
+            continue
+        m = re.search('os-(.+?)-hostname', setting)
+        if m:
+            networkspace = m.group(1)
+        else:
+            msg = ('Unexpected DNS hostname setting: {}. '
+                   'Cannot determine network space name'
+                   ''.format(setting))
+            status_set('blocked', msg)
+            raise DNSHAException(msg)
+
+        hostname_key = 'res_{}_{}_hostname'.format(charm_name(), networkspace)
+        if hostname_key in hostname_group:
+            log('DNS HA: Resource {}: {} already exists in '
+                'hostname group - skipping'.format(hostname_key, hostname),
+                DEBUG)
+            continue
+
+        hostname_group.append(hostname_key)
+        resources[hostname_key] = crm_ocf
+        resource_params[hostname_key] = (
+            'params fqdn="{}" ip_address="{}" '
+            ''.format(hostname, resolve_address(endpoint_type=networkspace,
+                                                override=False)))
+
+    if len(hostname_group) >= 1:
+        log('DNS HA: Hostname group is set with {} as members. '
+            'Informing the ha relation'.format(' '.join(hostname_group)),
+            DEBUG)
+        relation_set(relation_id=relation_id, groups={
+            'grp_{}_hostnames'.format(charm_name()): ' '.join(hostname_group)})
+    else:
+        msg = 'DNS HA: Hostname group has no members.'
+        status_set('blocked', msg)
+        raise DNSHAException(msg)
+
+
+def assert_charm_supports_dns_ha():
+    """Validate prerequisites for DNS HA
+    The MAAS client is only available on Xenial or greater
+    """
+    if lsb_release().get('DISTRIB_RELEASE') < '16.04':
+        msg = ('DNS HA is only supported on 16.04 and greater '
+               'versions of Ubuntu.')
+        status_set('blocked', msg)
+        raise DNSHAException(msg)
+    return True
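Reviewer note: a minimal standalone sketch of how `update_dns_ha_resource_params` derives Pacemaker resource names from the `os-*-hostname` settings. The charm name `cinder-spectrumscale` and the settings list are hypothetical illustration values; the real code obtains them from `charm_name()` and `config()`.

```python
import re

# Standalone sketch of the hostname_key derivation above (assumed charm name;
# the real code calls charm_name() and reads settings via config()).
def hostname_keys(settings, charm_name):
    keys = []
    for setting in settings:
        # Same pattern as update_dns_ha_resource_params: the network space
        # name is the middle component of the os-<space>-hostname key.
        m = re.search('os-(.+?)-hostname', setting)
        if m:
            keys.append('res_{}_{}_hostname'.format(charm_name, m.group(1)))
    return keys

print(hostname_keys(['os-public-hostname', 'os-internal-hostname'],
                    'cinder-spectrumscale'))
# ['res_cinder-spectrumscale_public_hostname',
#  'res_cinder-spectrumscale_internal_hostname']
```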

hooks/charmhelpers/contrib/openstack/ip.py

--- 
+++ hooks/charmhelpers/contrib/openstack/ip.py
@@ -0,0 +1,186 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from charmhelpers.core.hookenv import (
+    config,
+    unit_get,
+    service_name,
+    network_get_primary_address,
+)
+from charmhelpers.contrib.network.ip import (
+    get_address_in_network,
+    is_address_in_network,
+    is_ipv6,
+    get_ipv6_addr,
+    resolve_network_cidr,
+)
+from charmhelpers.contrib.hahelpers.cluster import is_clustered
+
+PUBLIC = 'public'
+INTERNAL = 'int'
+ADMIN = 'admin'
+ACCESS = 'access'
+
+ADDRESS_MAP = {
+    PUBLIC: {
+        'binding': 'public',
+        'config': 'os-public-network',
+        'fallback': 'public-address',
+        'override': 'os-public-hostname',
+    },
+    INTERNAL: {
+        'binding': 'internal',
+        'config': 'os-internal-network',
+        'fallback': 'private-address',
+        'override': 'os-internal-hostname',
+    },
+    ADMIN: {
+        'binding': 'admin',
+        'config': 'os-admin-network',
+        'fallback': 'private-address',
+        'override': 'os-admin-hostname',
+    },
+    ACCESS: {
+        'binding': 'access',
+        'config': 'access-network',
+        'fallback': 'private-address',
+        'override': 'os-access-hostname',
+    },
+}
+
+
+def canonical_url(configs, endpoint_type=PUBLIC):
+    """Returns the correct HTTP URL to this host given the state of HTTPS
+    configuration, hacluster and charm configuration.
+
+    :param configs: OSTemplateRenderer config templating object to inspect
+                    for a complete https context.
+    :param endpoint_type: str endpoint type to resolve.
+    :returns: str base URL for services on the current service unit.
+    """
+    scheme = _get_scheme(configs)
+
+    address = resolve_address(endpoint_type)
+    if is_ipv6(address):
+        address = "[{}]".format(address)
+
+    return '%s://%s' % (scheme, address)
+
+
+def _get_scheme(configs):
+    """Returns the scheme to use for the url (either http or https)
+    depending upon whether https is in the configs value.
+
+    :param configs: OSTemplateRenderer config templating object to inspect
+                    for a complete https context.
+    :returns: either 'http' or 'https' depending on whether https is
+              configured within the configs context.
+    """
+    scheme = 'http'
+    if configs and 'https' in configs.complete_contexts():
+        scheme = 'https'
+    return scheme
+
+
+def _get_address_override(endpoint_type=PUBLIC):
+    """Returns any address overrides that the user has defined based on the
+    endpoint type.
+
+    Note: this function allows for the service name to be inserted into the
+    address if the user specifies {service_name}.somehost.org.
+
+    :param endpoint_type: the type of endpoint to retrieve the override
+                          value for.
+    :returns: any endpoint address or hostname that the user has overridden
+              or None if an override is not present.
+    """
+    override_key = ADDRESS_MAP[endpoint_type]['override']
+    addr_override = config(override_key)
+    if not addr_override:
+        return None
+    else:
+        return addr_override.format(service_name=service_name())
+
+
+def resolve_address(endpoint_type=PUBLIC, override=True):
+    """Return unit address depending on net config.
+
+    If unit is clustered with vip(s) and has net splits defined, return vip on
+    correct network. If clustered with no nets defined, return primary vip.
+
+    If not clustered, return unit address ensuring address is on configured net
+    split if one is configured, or a Juju 2.0 extra-binding has been used.
+
+    :param endpoint_type: Network endpoint type
+    :param override: Accept hostname overrides or not
+    """
+    resolved_address = None
+    if override:
+        resolved_address = _get_address_override(endpoint_type)
+        if resolved_address:
+            return resolved_address
+
+    vips = config('vip')
+    if vips:
+        vips = vips.split()
+
+    net_type = ADDRESS_MAP[endpoint_type]['config']
+    net_addr = config(net_type)
+    net_fallback = ADDRESS_MAP[endpoint_type]['fallback']
+    binding = ADDRESS_MAP[endpoint_type]['binding']
+    clustered = is_clustered()
+
+    if clustered and vips:
+        if net_addr:
+            for vip in vips:
+                if is_address_in_network(net_addr, vip):
+                    resolved_address = vip
+                    break
+        else:
+            # NOTE: endeavour to check vips against network space
+            #       bindings
+            try:
+                bound_cidr = resolve_network_cidr(
+                    network_get_primary_address(binding)
+                )
+                for vip in vips:
+                    if is_address_in_network(bound_cidr, vip):
+                        resolved_address = vip
+                        break
+            except NotImplementedError:
+                # No net-splits configured and no support for extra
+                # bindings/network spaces, so we expect a single vip
+                resolved_address = vips[0]
+    else:
+        if config('prefer-ipv6'):
+            fallback_addr = get_ipv6_addr(exc_list=vips)[0]
+        else:
+            fallback_addr = unit_get(net_fallback)
+
+        if net_addr:
+            resolved_address = get_address_in_network(net_addr, fallback_addr)
+        else:
+            # NOTE: only try to use extra bindings if legacy network
+            #       configuration is not in use
+            try:
+                resolved_address = network_get_primary_address(binding)
+            except NotImplementedError:
+                resolved_address = fallback_addr
+
+    if resolved_address is None:
+        raise ValueError("Unable to resolve a suitable IP address based on "
+                         "charm state and configuration. (net_type=%s, "
+                         "clustered=%s)" % (net_type, clustered))
+
+    return resolved_address
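Reviewer note: a minimal sketch of the `{service_name}` substitution that `_get_address_override` performs on user-supplied hostname overrides. The override string and service name below are hypothetical examples; the real code reads the override from charm config and the name from `service_name()`.

```python
# Sketch of _get_address_override's substitution step, standalone.
def apply_override(addr_override, service):
    # Empty/unset override means "no override", mirroring the real helper.
    if not addr_override:
        return None
    # Users may embed {service_name} in the override hostname.
    return addr_override.format(service_name=service)

print(apply_override('{service_name}.example.com', 'cinder-spectrumscale'))
# cinder-spectrumscale.example.com
```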

hooks/charmhelpers/contrib/openstack/neutron.py

--- 
+++ hooks/charmhelpers/contrib/openstack/neutron.py
@@ -0,0 +1,388 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Various utilities for dealing with Neutron and the renaming from Quantum.
+
+import six
+from subprocess import check_output
+
+from charmhelpers.core.hookenv import (
+    config,
+    log,
+    ERROR,
+)
+
+from charmhelpers.contrib.openstack.utils import os_release
+
+
+def headers_package():
+    """Ensures correct linux-headers for running kernel are installed,
+    for building DKMS package"""
+    kver = check_output(['uname', '-r']).decode('UTF-8').strip()
+    return 'linux-headers-%s' % kver
+
+QUANTUM_CONF_DIR = '/etc/quantum'
+
+
+def kernel_version():
+    """ Retrieve the current major kernel version as a tuple e.g. (3, 13) """
+    kver = check_output(['uname', '-r']).decode('UTF-8').strip()
+    kver = kver.split('.')
+    return (int(kver[0]), int(kver[1]))
+
+
+def determine_dkms_package():
+    """ Determine which DKMS package should be used based on kernel version """
+    # NOTE: 3.13 kernels have support for GRE and VXLAN native
+    if kernel_version() >= (3, 13):
+        return []
+    else:
+        return [headers_package(), 'openvswitch-datapath-dkms']
+
+
+# legacy
+
+
+def quantum_plugins():
+    from charmhelpers.contrib.openstack import context
+    return {
+        'ovs': {
+            'config': '/etc/quantum/plugins/openvswitch/'
+                      'ovs_quantum_plugin.ini',
+            'driver': 'quantum.plugins.openvswitch.ovs_quantum_plugin.'
+                      'OVSQuantumPluginV2',
+            'contexts': [
+                context.SharedDBContext(user=config('neutron-database-user'),
+                                        database=config('neutron-database'),
+                                        relation_prefix='neutron',
+                                        ssl_dir=QUANTUM_CONF_DIR)],
+            'services': ['quantum-plugin-openvswitch-agent'],
+            'packages': [determine_dkms_package(),
+                         ['quantum-plugin-openvswitch-agent']],
+            'server_packages': ['quantum-server',
+                                'quantum-plugin-openvswitch'],
+            'server_services': ['quantum-server']
+        },
+        'nvp': {
+            'config': '/etc/quantum/plugins/nicira/nvp.ini',
+            'driver': 'quantum.plugins.nicira.nicira_nvp_plugin.'
+                      'QuantumPlugin.NvpPluginV2',
+            'contexts': [
+                context.SharedDBContext(user=config('neutron-database-user'),
+                                        database=config('neutron-database'),
+                                        relation_prefix='neutron',
+                                        ssl_dir=QUANTUM_CONF_DIR)],
+            'services': [],
+            'packages': [],
+            'server_packages': ['quantum-server',
+                                'quantum-plugin-nicira'],
+            'server_services': ['quantum-server']
+        }
+    }
+
+NEUTRON_CONF_DIR = '/etc/neutron'
+
+
+def neutron_plugins():
+    from charmhelpers.contrib.openstack import context
+    release = os_release('nova-common')
+    plugins = {
+        'ovs': {
+            'config': '/etc/neutron/plugins/openvswitch/'
+                      'ovs_neutron_plugin.ini',
+            'driver': 'neutron.plugins.openvswitch.ovs_neutron_plugin.'
+                      'OVSNeutronPluginV2',
+            'contexts': [
+                context.SharedDBContext(user=config('neutron-database-user'),
+                                        database=config('neutron-database'),
+                                        relation_prefix='neutron',
+                                        ssl_dir=NEUTRON_CONF_DIR)],
+            'services': ['neutron-plugin-openvswitch-agent'],
+            'packages': [determine_dkms_package(),
+                         ['neutron-plugin-openvswitch-agent']],
+            'server_packages': ['neutron-server',
+                                'neutron-plugin-openvswitch'],
+            'server_services': ['neutron-server']
+        },
+        'nvp': {
+            'config': '/etc/neutron/plugins/nicira/nvp.ini',
+            'driver': 'neutron.plugins.nicira.nicira_nvp_plugin.'
+                      'NeutronPlugin.NvpPluginV2',
+            'contexts': [
+                context.SharedDBContext(user=config('neutron-database-user'),
+                                        database=config('neutron-database'),
+                                        relation_prefix='neutron',
+                                        ssl_dir=NEUTRON_CONF_DIR)],
+            'services': [],
+            'packages': [],
+            'server_packages': ['neutron-server',
+                                'neutron-plugin-nicira'],
+            'server_services': ['neutron-server']
+        },
+        'nsx': {
+            'config': '/etc/neutron/plugins/vmware/nsx.ini',
+            'driver': 'vmware',
+            'contexts': [
+                context.SharedDBContext(user=config('neutron-database-user'),
+                                        database=config('neutron-database'),
+                                        relation_prefix='neutron',
+                                        ssl_dir=NEUTRON_CONF_DIR)],
+            'services': [],
+            'packages': [],
+            'server_packages': ['neutron-server',
+                                'neutron-plugin-vmware'],
+            'server_services': ['neutron-server']
+        },
+        'n1kv': {
+            'config': '/etc/neutron/plugins/cisco/cisco_plugins.ini',
+            'driver': 'neutron.plugins.cisco.network_plugin.PluginV2',
+            'contexts': [
+                context.SharedDBContext(user=config('neutron-database-user'),
+                                        database=config('neutron-database'),
+                                        relation_prefix='neutron',
+                                        ssl_dir=NEUTRON_CONF_DIR)],
+            'services': [],
+            'packages': [determine_dkms_package(),
+                         ['neutron-plugin-cisco']],
+            'server_packages': ['neutron-server',
+                                'neutron-plugin-cisco'],
+            'server_services': ['neutron-server']
+        },
+        'Calico': {
+            'config': '/etc/neutron/plugins/ml2/ml2_conf.ini',
+            'driver': 'neutron.plugins.ml2.plugin.Ml2Plugin',
+            'contexts': [
+                context.SharedDBContext(user=config('neutron-database-user'),
+                                        database=config('neutron-database'),
+                                        relation_prefix='neutron',
+                                        ssl_dir=NEUTRON_CONF_DIR)],
+            'services': ['calico-felix',
+                         'bird',
+                         'neutron-dhcp-agent',
+                         'nova-api-metadata',
+                         'etcd'],
+            'packages': [determine_dkms_package(),
+                         ['calico-compute',
+                          'bird',
+                          'neutron-dhcp-agent',
+                          'nova-api-metadata',
+                          'etcd']],
+            'server_packages': ['neutron-server', 'calico-control', 'etcd'],
+            'server_services': ['neutron-server', 'etcd']
+        },
+        'vsp': {
+            'config': '/etc/neutron/plugins/nuage/nuage_plugin.ini',
+            'driver': 'neutron.plugins.nuage.plugin.NuagePlugin',
+            'contexts': [
+                context.SharedDBContext(user=config('neutron-database-user'),
+                                        database=config('neutron-database'),
+                                        relation_prefix='neutron',
+                                        ssl_dir=NEUTRON_CONF_DIR)],
+            'services': [],
+            'packages': [],
+            'server_packages': ['neutron-server', 'neutron-plugin-nuage'],
+            'server_services': ['neutron-server']
+        },
+        'plumgrid': {
+            'config': '/etc/neutron/plugins/plumgrid/plumgrid.ini',
+            'driver': 'neutron.plugins.plumgrid.plumgrid_plugin.plumgrid_plugin.NeutronPluginPLUMgridV2',
+            'contexts': [
+                context.SharedDBContext(user=config('database-user'),
+                                        database=config('database'),
+                                        ssl_dir=NEUTRON_CONF_DIR)],
+            'services': [],
+            'packages': ['plumgrid-lxc',
+                         'iovisor-dkms'],
+            'server_packages': ['neutron-server',
+                                'neutron-plugin-plumgrid'],
+            'server_services': ['neutron-server']
+        },
+        'midonet': {
+            'config': '/etc/neutron/plugins/midonet/midonet.ini',
+            'driver': 'midonet.neutron.plugin.MidonetPluginV2',
+            'contexts': [
+                context.SharedDBContext(user=config('neutron-database-user'),
+                                        database=config('neutron-database'),
+                                        relation_prefix='neutron',
+                                        ssl_dir=NEUTRON_CONF_DIR)],
+            'services': [],
+            'packages': [determine_dkms_package()],
+            'server_packages': ['neutron-server',
+                                'python-neutron-plugin-midonet'],
+            'server_services': ['neutron-server']
+        }
+    }
+    if release >= 'icehouse':
+        # NOTE: patch in ml2 plugin for icehouse onwards
+        plugins['ovs']['config'] = '/etc/neutron/plugins/ml2/ml2_conf.ini'
+        plugins['ovs']['driver'] = 'neutron.plugins.ml2.plugin.Ml2Plugin'
+        plugins['ovs']['server_packages'] = ['neutron-server',
+                                             'neutron-plugin-ml2']
+        # NOTE: patch in vmware renames nvp->nsx for icehouse onwards
+        plugins['nvp'] = plugins['nsx']
+    if release >= 'kilo':
+        plugins['midonet']['driver'] = (
+            'neutron.plugins.midonet.plugin.MidonetPluginV2')
+    if release >= 'liberty':
+        plugins['midonet']['driver'] = (
+            'midonet.neutron.plugin_v1.MidonetPluginV2')
+        plugins['midonet']['server_packages'].remove(
+            'python-neutron-plugin-midonet')
+        plugins['midonet']['server_packages'].append(
+            'python-networking-midonet')
+        plugins['plumgrid']['driver'] = (
+            'networking_plumgrid.neutron.plugins.plugin.NeutronPluginPLUMgridV2')
+        plugins['plumgrid']['server_packages'].remove(
+            'neutron-plugin-plumgrid')
+    if release >= 'mitaka':
+        plugins['nsx']['server_packages'].remove('neutron-plugin-vmware')
+        plugins['nsx']['server_packages'].append('python-vmware-nsx')
+        plugins['nsx']['config'] = '/etc/neutron/nsx.ini'
+        plugins['vsp']['driver'] = (
+            'nuage_neutron.plugins.nuage.plugin.NuagePlugin')
+    return plugins
+
+
+def neutron_plugin_attribute(plugin, attr, net_manager=None):
+    manager = net_manager or network_manager()
+    if manager == 'quantum':
+        plugins = quantum_plugins()
+    elif manager == 'neutron':
+        plugins = neutron_plugins()
+    else:
+        log("Network manager '%s' does not support plugins." % (manager),
+            level=ERROR)
+        raise Exception
+
+    try:
+        _plugin = plugins[plugin]
+    except KeyError:
+        log('Unrecognised plugin for %s: %s' % (manager, plugin), level=ERROR)
+        raise Exception
+
+    try:
+        return _plugin[attr]
+    except KeyError:
+        return None
+
+
+def network_manager():
+    '''
+    Deals with the renaming of Quantum to Neutron in H and any situations
+    that require compatibility (e.g. deploying H with network-manager=quantum,
+    upgrading from G).
+    '''
+    release = os_release('nova-common')
+    manager = config('network-manager').lower()
+
+    if manager not in ['quantum', 'neutron']:
+        return manager
+
+    if release in ['essex']:
+        # E does not support neutron
+        log('Neutron networking not supported in Essex.', level=ERROR)
+        raise Exception
+    elif release in ['folsom', 'grizzly']:
+        # neutron is named quantum in F and G
+        return 'quantum'
+    else:
+        # ensure accurate naming for all releases post-H
+        return 'neutron'
+
+
+def parse_mappings(mappings, key_rvalue=False):
+    """By default mappings are lvalue keyed.
+
+    If key_rvalue is True, the mapping will be reversed to allow multiple
+    configs for the same lvalue.
+    """
+    parsed = {}
+    if mappings:
+        mappings = mappings.split()
+        for m in mappings:
+            p = m.partition(':')
+
+            if key_rvalue:
+                key_index = 2
+                val_index = 0
+                # if there is no rvalue skip to next
+                if not p[1]:
+                    continue
+            else:
+                key_index = 0
+                val_index = 2
+
+            key = p[key_index].strip()
+            parsed[key] = p[val_index].strip()
+
+    return parsed
+
+
+def parse_bridge_mappings(mappings):
+    """Parse bridge mappings.
+
+    Mappings must be a space-delimited list of provider:bridge mappings.
+
+    Returns dict of the form {provider:bridge}.
+    """
+    return parse_mappings(mappings)
+
+
+def parse_data_port_mappings(mappings, default_bridge='br-data'):
+    """Parse data port mappings.
+
+    Mappings must be a space-delimited list of bridge:port.
+
+    Returns dict of the form {port:bridge} where ports may be mac addresses or
+    interface names.
+    """
+
+    # NOTE(dosaboy): we use rvalue for key to allow multiple values to be
+    # proposed for <port> since it may be a mac address which will differ
+    # across units, thus allowing first-known-good to be chosen.
+    _mappings = parse_mappings(mappings, key_rvalue=True)
+    if not _mappings or list(_mappings.values()) == ['']:
+        if not mappings:
+            return {}
+
+        # For backwards-compatibility we need to support port-only provided in
+        # config.
+        _mappings = {mappings.split()[0]: default_bridge}
+
+    ports = _mappings.keys()
+    if len(set(ports)) != len(ports):
+        raise Exception("It is not allowed to have the same port configured "
+                        "on more than one bridge")
+
+    return _mappings
+
+
+def parse_vlan_range_mappings(mappings):
+    """Parse vlan range mappings.
+
+    Mappings must be a space-delimited list of provider:start:end mappings.
+
+    The start:end range is optional and may be omitted.
+
+    Returns dict of the form {provider: (start, end)}.
+    """
+    _mappings = parse_mappings(mappings)
+    if not _mappings:
+        return {}
+
+    mappings = {}
+    for p, r in six.iteritems(_mappings):
+        mappings[p] = tuple(r.split(':'))
+
+    return mappings
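Reviewer note: the mapping helpers above hinge on `parse_mappings` being either lvalue- or rvalue-keyed. Below is that function copied standalone (it is pure, so it runs without charmhelpers); the example inputs are hypothetical bridge/port names.

```python
# parse_mappings from neutron.py, standalone, to illustrate the
# lvalue- vs rvalue-keyed behaviour.
def parse_mappings(mappings, key_rvalue=False):
    parsed = {}
    if mappings:
        for m in mappings.split():
            p = m.partition(':')
            if key_rvalue:
                key_index, val_index = 2, 0
                # no separator means no rvalue: skip this entry
                if not p[1]:
                    continue
            else:
                key_index, val_index = 0, 2
            parsed[p[key_index].strip()] = p[val_index].strip()
    return parsed

# Default: lvalue keyed, as used by parse_bridge_mappings().
print(parse_mappings('physnet1:br-data physnet2:br-ex'))
# {'physnet1': 'br-data', 'physnet2': 'br-ex'}

# Reversed: rvalue keyed, as used by parse_data_port_mappings().
print(parse_mappings('br-data:eth0 br-ex:eth1', key_rvalue=True))
# {'eth0': 'br-data', 'eth1': 'br-ex'}
```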

hooks/charmhelpers/contrib/openstack/templates/__init__.py

--- 
+++ hooks/charmhelpers/contrib/openstack/templates/__init__.py
@@ -0,0 +1,16 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# dummy __init__.py to fool syncer into thinking this is a syncable python
+# module

hooks/charmhelpers/contrib/openstack/templates/ceph.conf

--- 
+++ hooks/charmhelpers/contrib/openstack/templates/ceph.conf
@@ -0,0 +1,21 @@
+###############################################################################
+# [ WARNING ]
+# cinder configuration file maintained by Juju
+# local changes may be overwritten.
+###############################################################################
+[global]
+{% if auth -%}
+auth_supported = {{ auth }}
+keyring = /etc/ceph/$cluster.$name.keyring
+mon host = {{ mon_hosts }}
+{% endif -%}
+log to syslog = {{ use_syslog }}
+err to syslog = {{ use_syslog }}
+clog to syslog = {{ use_syslog }}
+
+[client]
+{% if rbd_client_cache_settings -%}
+{% for key, value in rbd_client_cache_settings.iteritems() -%}
+{{ key }} = {{ value }}
+{% endfor -%}
+{%- endif %}

hooks/charmhelpers/contrib/openstack/templates/git.upstart

--- 
+++ hooks/charmhelpers/contrib/openstack/templates/git.upstart
@@ -0,0 +1,17 @@
+description "{{ service_description }}"
+author "Juju {{ service_name }} Charm <juju@localhost>"
+
+start on runlevel [2345]
+stop on runlevel [!2345]
+
+respawn
+
+exec start-stop-daemon --start --chuid {{ user_name }} \
+            --chdir {{ start_dir }} --name {{ process_name }} \
+            --exec {{ executable_name }} -- \
+            {% for config_file in config_files -%}
+            --config-file={{ config_file }} \
+            {% endfor -%}
+            {% if log_file -%}
+            --log-file={{ log_file }}
+            {% endif -%}

hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg

--- 
+++ hooks/charmhelpers/contrib/openstack/templates/haproxy.cfg
@@ -0,0 +1,66 @@
+global
+    log {{ local_host }} local0
+    log {{ local_host }} local1 notice
+    maxconn 20000
+    user haproxy
+    group haproxy
+    spread-checks 0
+
+defaults
+    log global
+    mode tcp
+    option tcplog
+    option dontlognull
+    retries 3
+{%- if haproxy_queue_timeout %}
+    timeout queue {{ haproxy_queue_timeout }}
+{%- else %}
+    timeout queue 5000
+{%- endif %}
+{%- if haproxy_connect_timeout %}
+    timeout connect {{ haproxy_connect_timeout }}
+{%- else %}
+    timeout connect 5000
+{%- endif %}
+{%- if haproxy_client_timeout %}
+    timeout client {{ haproxy_client_timeout }}
+{%- else %}
+    timeout client 30000
+{%- endif %}
+{%- if haproxy_server_timeout %}
+    timeout server {{ haproxy_server_timeout }}
+{%- else %}
+    timeout server 30000
+{%- endif %}
+
+listen stats
+    bind {{ local_host }}:{{ stat_port }}
+    mode http
+    stats enable
+    stats hide-version
+    stats realm Haproxy\ Statistics
+    stats uri /
+    stats auth admin:{{ stat_password }}
+
+{% if frontends -%}
+{% for service, ports in service_ports.items() -%}
+frontend tcp-in_{{ service }}
+    bind *:{{ ports[0] }}
+    {% if ipv6 -%}
+    bind :::{{ ports[0] }}
+    {% endif -%}
+    {% for frontend in frontends -%}
+    acl net_{{ frontend }} dst {{ frontends[frontend]['network'] }}
+    use_backend {{ service }}_{{ frontend }} if net_{{ frontend }}
+    {% endfor -%}
+    default_backend {{ service }}_{{ default_backend }}
+
+{% for frontend in frontends -%}
+backend {{ service }}_{{ frontend }}
+    balance leastconn
+    {% for unit, address in frontends[frontend]['backends'].items() -%}
+    server {{ unit }} {{ address }}:{{ ports[1] }} check
+    {% endfor %}
+{% endfor -%}
+{% endfor -%}
+{% endif -%}

hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend

--- 
+++ hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend
@@ -0,0 +1,26 @@
+{% if endpoints -%}
+{% for ext_port in ext_ports -%}
+Listen {{ ext_port }}
+{% endfor -%}
+{% for address, endpoint, ext, int in endpoints -%}
+<VirtualHost {{ address }}:{{ ext }}>
+    ServerName {{ endpoint }}
+    SSLEngine on
+    SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
+    SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
+    SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
+    SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key_{{ endpoint }}
+    ProxyPass / http://localhost:{{ int }}/
+    ProxyPassReverse / http://localhost:{{ int }}/
+    ProxyPreserveHost on
+</VirtualHost>
+{% endfor -%}
+<Proxy *>
+    Order deny,allow
+    Allow from all
+</Proxy>
+<Location />
+    Order allow,deny
+    Allow from all
+</Location>
+{% endif -%}

hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf

---
+++ hooks/charmhelpers/contrib/openstack/templates/openstack_https_frontend.conf
@@ -0,0 +1,26 @@
+{% if endpoints -%}
+{% for ext_port in ext_ports -%}
+Listen {{ ext_port }}
+{% endfor -%}
+{% for address, endpoint, ext, int in endpoints -%}
+<VirtualHost {{ address }}:{{ ext }}>
+    ServerName {{ endpoint }}
+    SSLEngine on
+    SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
+    SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
+    SSLCertificateFile /etc/apache2/ssl/{{ namespace }}/cert_{{ endpoint }}
+    SSLCertificateKeyFile /etc/apache2/ssl/{{ namespace }}/key_{{ endpoint }}
+    ProxyPass / http://localhost:{{ int }}/
+    ProxyPassReverse / http://localhost:{{ int }}/
+    ProxyPreserveHost on
+</VirtualHost>
+{% endfor -%}
+<Proxy *>
+    Order deny,allow
+    Allow from all
+</Proxy>
+<Location />
+    Order allow,deny
+    Allow from all
+</Location>
+{% endif -%}
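
For orientation, the `{% for address, endpoint, ext, int in endpoints %}` loop implies that the context supplies `endpoints` as a list of 4-tuples: (address, server name, external port, internal port). A minimal sketch of that shape, with invented values (the real rendering is done by Jinja2, and `int_port` is used below only to avoid shadowing Python's `int` builtin):

```python
# Assumed context shape for this Apache frontend template.
ext_ports = [8776]
endpoints = [('10.0.0.10', 'cinder.example.com', 8776, 8766)]

conf = []
for ext_port in ext_ports:
    conf.append('Listen %d' % ext_port)
for address, endpoint, ext, int_port in endpoints:
    # External HTTPS vhost proxying to the service on its internal port.
    conf.append('<VirtualHost %s:%d>' % (address, ext))
    conf.append('    ProxyPass / http://localhost:%d/' % int_port)
    conf.append('</VirtualHost>')

print('\n'.join(conf))
```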

hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken

---
+++ hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken
@@ -0,0 +1,12 @@
+{% if auth_host -%}
+[keystone_authtoken]
+auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }}
+auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }}
+auth_plugin = password
+project_domain_id = default
+user_domain_id = default
+project_name = {{ admin_tenant_name }}
+username = {{ admin_user }}
+password = {{ admin_password }}
+signing_dir = {{ signing_dir }}
+{% endif -%}

hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-legacy

---
+++ hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-legacy
@@ -0,0 +1,10 @@
+{% if auth_host -%}
+[keystone_authtoken]
+# Juno specific config (Bug #1557223)
+auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }}/{{ service_admin_prefix }}
+identity_uri = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }}
+admin_tenant_name = {{ admin_tenant_name }}
+admin_user = {{ admin_user }}
+admin_password = {{ admin_password }}
+signing_dir = {{ signing_dir }}
+{% endif -%}

hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-mitaka

---
+++ hooks/charmhelpers/contrib/openstack/templates/section-keystone-authtoken-mitaka
@@ -0,0 +1,12 @@
+{% if auth_host -%}
+[keystone_authtoken]
+auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }}
+auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }}
+auth_type = password
+project_domain_name = default
+user_domain_name = default
+project_name = {{ admin_tenant_name }}
+username = {{ admin_user }}
+password = {{ admin_password }}
+signing_dir = {{ signing_dir }}
+{% endif -%}
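
To see what a fragment like this renders to, here is a sketch using a naive regex substitution in place of Jinja2, over a shortened copy of the template with made-up context values (hostnames, ports, and credentials are illustrative only):

```python
import re

# Shortened copy of the section-keystone-authtoken-mitaka fragment above.
template = """[keystone_authtoken]
auth_uri = {{ service_protocol }}://{{ service_host }}:{{ service_port }}
auth_url = {{ auth_protocol }}://{{ auth_host }}:{{ auth_port }}
auth_type = password
project_name = {{ admin_tenant_name }}
username = {{ admin_user }}
"""

# Invented context values standing in for relation data.
ctxt = {
    'service_protocol': 'https', 'service_host': 'keystone.local',
    'service_port': '5000', 'auth_protocol': 'https',
    'auth_host': 'keystone.local', 'auth_port': '35357',
    'admin_tenant_name': 'services', 'admin_user': 'cinder',
}

# Replace each {{ name }} placeholder with its context value.
rendered = re.sub(r'\{\{\s*(\w+)\s*\}\}', lambda m: ctxt[m.group(1)], template)
print(rendered)
```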

hooks/charmhelpers/contrib/openstack/templates/section-rabbitmq-oslo

---
+++ hooks/charmhelpers/contrib/openstack/templates/section-rabbitmq-oslo
@@ -0,0 +1,22 @@
+{% if rabbitmq_host or rabbitmq_hosts -%}
+[oslo_messaging_rabbit]
+rabbit_userid = {{ rabbitmq_user }}
+rabbit_virtual_host = {{ rabbitmq_virtual_host }}
+rabbit_password = {{ rabbitmq_password }}
+{% if rabbitmq_hosts -%}
+rabbit_hosts = {{ rabbitmq_hosts }}
+{% if rabbitmq_ha_queues -%}
+rabbit_ha_queues = True
+rabbit_durable_queues = False
+{% endif -%}
+{% else -%}
+rabbit_host = {{ rabbitmq_host }}
+{% endif -%}
+{% if rabbit_ssl_port -%}
+rabbit_use_ssl = True
+rabbit_port = {{ rabbit_ssl_port }}
+{% if rabbit_ssl_ca -%}
+kombu_ssl_ca_certs = {{ rabbit_ssl_ca }}
+{% endif -%}
+{% endif -%}
+{% endif -%}
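
The branching here (a clustered `rabbitmq_hosts` list takes precedence over a single `rabbitmq_host`, and SSL settings appear only when a port is provided) can be mirrored in plain Python. This is a simplified sketch of the template's control flow, not the charm's renderer, and the context values are invented:

```python
# Assumed context for a clustered, SSL-enabled RabbitMQ deployment.
ctxt = {
    'rabbitmq_user': 'cinder',
    'rabbitmq_hosts': '10.0.0.1:5671,10.0.0.2:5671',
    'rabbitmq_ha_queues': True,
    'rabbit_ssl_port': 5671,
}

out = ['[oslo_messaging_rabbit]',
       'rabbit_userid = %s' % ctxt['rabbitmq_user']]
if ctxt.get('rabbitmq_hosts'):
    # Clustered brokers: list all hosts, optionally enable HA queues.
    out.append('rabbit_hosts = %s' % ctxt['rabbitmq_hosts'])
    if ctxt.get('rabbitmq_ha_queues'):
        out.append('rabbit_ha_queues = True')
else:
    # Single broker fallback.
    out.append('rabbit_host = %s' % ctxt.get('rabbitmq_host'))
if ctxt.get('rabbit_ssl_port'):
    out.append('rabbit_use_ssl = True')
    out.append('rabbit_port = %s' % ctxt['rabbit_ssl_port'])

print('\n'.join(out))
```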

hooks/charmhelpers/contrib/openstack/templates/section-zeromq

---
+++ hooks/charmhelpers/contrib/openstack/templates/section-zeromq
@@ -0,0 +1,14 @@
+{% if zmq_host -%}
+# ZeroMQ configuration (restart-nonce: {{ zmq_nonce }})
+rpc_backend = zmq
+rpc_zmq_host = {{ zmq_host }}
+{% if zmq_redis_address -%}
+rpc_zmq_matchmaker = redis
+matchmaker_heartbeat_freq = 15
+matchmaker_heartbeat_ttl = 30
+[matchmaker_redis]
+host = {{ zmq_redis_address }}
+{% else -%}
+rpc_zmq_matchmaker = ring
+{% endif -%}
+{% endif -%}

hooks/charmhelpers/contrib/openstack/templating.py

---
+++ hooks/charmhelpers/contrib/openstack/templating.py
@@ -0,0 +1,321 @@
+# Copyright 2014-2015 Canonical Limited.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+
+import six
+
+from charmhelpers.fetch import apt_install, apt_update
+from charmhelpers.core.hookenv import (
+    log,
+    ERROR,
+    INFO
+)
+from charmhelpers.contrib.openstack.utils import OPENSTACK_CODENAMES
+
+try:
+    from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
+except ImportError:
+    apt_update(fatal=True)
+    apt_install('python-jinja2', fatal=True)
+    from jinja2 import FileSystemLoader, ChoiceLoader, Environment, exceptions
+
+
+class OSConfigException(Exception):
+    pass
+
+
+def get_loader(templates_dir, os_release):
+    """
+    Create a jinja2.ChoiceLoader containing template dirs up to
+    and including os_release.  If a release-specific template directory
+    is missing under templates_dir, it will be omitted from the loader.
+    templates_dir is added to the bottom of the search list as a base
+    loading dir.
+
+    A charm may also ship a templates dir with this module
+    and it will be appended to the bottom of the search list, eg::
+
+        hooks/charmhelpers/contrib/openstack/templates
+
+    :param templates_dir (str): Base template directory containing release
+        sub-directories.
+    :param os_release (str): OpenStack release codename to construct template
+        loader.
+    :returns: jinja2.ChoiceLoader constructed with a list of
+        jinja2.FilesystemLoaders, ordered in descending
+        order by OpenStack release.
+    """
+    tmpl_dirs = [(rel, os.path.join(templates_dir, rel))
+                 for rel in six.itervalues(OPENSTACK_CODENAMES)]
+
+    if not os.path.isdir(templates_dir):
+        log('Templates directory not found @ %s.' % templates_dir,
+            level=ERROR)
+        raise OSConfigException
+
+    # the bottom contains templates_dir and possibly a common templates dir
+    # shipped with the helper.
+    loaders = [FileSystemLoader(templates_dir)]
+    helper_templates = os.path.join(os.path.dirname(__file__), 'templates')
+    if os.path.isdir(helper_templates):
+        loaders.append(FileSystemLoader(helper_templates))
+
+    for rel, tmpl_dir in tmpl_dirs:
+        if os.path.isdir(tmpl_dir):
+            loaders.insert(0, FileSystemLoader(tmpl_dir))
+        if rel == os_release:
+            break
+    log('Creating choice loader with dirs: %s' %
+        [l.searchpath for l in loaders], level=INFO)
+    return ChoiceLoader(loaders)
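
The search-order logic above can be sketched without charmhelpers or jinja2: directories for releases up to and including the target are stacked newest-first, with the base dir at the bottom. The codename list and the on-disk layout below are hypothetical stand-ins for the real `OPENSTACK_CODENAMES` mapping:

```python
import os
import tempfile

# Ordered OpenStack codenames, oldest first (truncated stand-in for
# charmhelpers' OPENSTACK_CODENAMES).
CODENAMES = ['essex', 'folsom', 'grizzly', 'havana', 'icehouse']

def search_path(templates_dir, os_release):
    """Return template dirs in the order get_loader() would search them."""
    path = [templates_dir]            # base dir sits at the bottom
    for rel in CODENAMES:
        tmpl_dir = os.path.join(templates_dir, rel)
        if os.path.isdir(tmpl_dir):
            path.insert(0, tmpl_dir)  # newer releases are searched first
        if rel == os_release:
            break                     # releases after the target are ignored
    return path

base = tempfile.mkdtemp()
for rel in ('folsom', 'grizzly', 'havana'):
    os.mkdir(os.path.join(base, rel))

# Registered for grizzly: havana/ exists on disk but is NOT searched.
order = search_path(base, 'grizzly')
print([os.path.relpath(p, base) for p in order])
```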
+
+
+class OSConfigTemplate(object):
+    """
+    Associates a config file template with a list of context generators.
+    Responsible for constructing a template context based on those generators.
+    """
+    def __init__(self, config_file, contexts):
+        self.config_file = config_file
+
+        if hasattr(contexts, '__call__'):
+            self.contexts = [contexts]
+        else:
+            self.contexts = contexts
+
+        self._complete_contexts = []
+
+    def context(self):
+        ctxt = {}
+        for context in self.contexts:
+            _ctxt = context()
+            if _ctxt:
+                ctxt.update(_ctxt)
+                # track interfaces for every complete context.
+                [self._complete_contexts.append(interface)
+                 for interface in context.interfaces
+                 if interface not in self._complete_contexts]
+        return ctxt
+
+    def complete_contexts(self):
+        '''
+        Return a list of interfaces that have satisfied contexts.
+        '''
+        if self._complete_contexts:
+            return self._complete_contexts
+        self.context()
+        return self._complete_contexts
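
A context generator, as consumed by `OSConfigTemplate.context()` above, is just a callable with an `interfaces` attribute that returns a dict, or a falsy value while its relation data is incomplete. A minimal hypothetical generator (the class name and relation name are borrowed from charmhelpers conventions for illustration only):

```python
# Minimal stand-in for a charmhelpers context generator: callable, with
# an `interfaces` attribute, returning {} until its relation is complete.
class SharedDBContext(object):
    interfaces = ['shared-db']

    def __init__(self, host=None):
        self.host = host              # would normally come from relation data

    def __call__(self):
        if not self.host:             # incomplete relation -> falsy context
            return {}
        return {'database_host': self.host}

incomplete = SharedDBContext()
complete = SharedDBContext(host='10.0.0.9')
print(incomplete(), complete())
```

Because `context()` only records `interfaces` for generators that return a truthy dict, `complete_contexts()` ends up listing exactly the relations whose data has fully arrived.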
+
+
+class OSConfigRenderer(object):
+    """
+    This class provides a common templating system to be used by OpenStack
+    charms.  It is intended to help charms share common code and templates,
+    and ease the burden of managing config templates across multiple OpenStack
+    releases.
+
+    Basic usage::
+
+        # import some common context generators from charmhelpers
+        from charmhelpers.contrib.openstack import context
+
+        # Create a renderer object for a specific OS release.
+        configs = OSConfigRenderer(templates_dir='/tmp/templates',
+                                   openstack_release='grizzly')
+        # register some config files with context generators.
+        configs.register(config_file='/etc/nova/nova.conf',
+                         contexts=[context.SharedDBContext(),
+                                   context.AMQPContext()])
+        configs.register(config_file='/etc/nova/api-paste.ini',
+                         contexts=[context.IdentityServiceContext()])
+        configs.register(config_file='/etc/haproxy/haproxy.conf',
+                         contexts=[context.HAProxyContext()])
+        # write out a single config
+        configs.write('/etc/nova/nova.conf')
+        # write out all registered configs
+        configs.write_all()
+
+    **OpenStack Releases and template loading**
+
+    When the object is instantiated, it is associated with a specific OS
+    release.  This dictates how the template loader will be constructed.
+
+    The constructed loader attempts to load the template from several places
+    in the following order:
+    - from the most recent OS release-specific template dir (if one exists)
+    - the base templates_dir
+    - a template directory shipped in the charm with this helper file.
+
+    For the example above, '/tmp/templates' contains the following structure::
+
+        /tmp/templates/nova.conf
+        /tmp/templates/api-paste.ini
+        /tmp/templates/grizzly/api-paste.ini
+        /tmp/templates/havana/api-paste.ini
+
+    Since it was registered with the grizzly release, it first searches
+    the grizzly directory for nova.conf, then the templates dir.
+
+    When writing api-paste.ini, it will find the template in the grizzly
+    directory.
+
+    If the object were created with folsom, it would fall back to the
+    base templates dir for its api-paste.ini template.
+
+    This system should help manage changes in config files through
+    openstack releases, allowing charms to fall back to the most recently
+    updated config template for a given release.
+
+    The haproxy.conf, since it is not shipped in the templates dir, will
+    be loaded from the module directory's template directory, eg
+    $CHARM/hooks/charmhelpers/contrib/openstack/templates.  This allows
+    us to ship common templates (haproxy, apache) with the helpers.
+
+    **Context generators**
+
+    Context generators are used to generate template contexts during hook
+    execution.  Doing so may require inspecting service relations, charm
+    config, etc.  When registered, a config file is associated with a list
+    of generators.  When a template is rendered and written, all context
+    generators are called in a chain to generate the context dictionary
+    passed to the jinja2 template. See context.py for more info.
+    """
198
+    def __init__(self, templates_dir, openstack_release):
199
+        if not os.path.isdir(templates_dir):
200
+            log('Could not locate templates dir %s' % templates_dir,
201
+                level=ERROR)
202
+            raise OSConfigException
203
+
204
+        self.templates_dir = templates_dir
205
+        self.openstack_release = openstack_release
206
+        self.templates = {}
207
+        self._tmpl_env = None
208
+
209
+        if None in [Environment, ChoiceLoader, FileSystemLoader]:
210
+            # if this code is running, the object is created pre-install hook.
211
+            # jinja2 shouldn't get touched until the module is reloaded on next
212
+            # hook execution, with proper jinja2 bits successfully imported.
213
+            apt_install('python-jinja2')
214
+
215
+    def register(self, config_file, contexts):
216
+        """
217
+        Register a config file with a list of context generators to be called
218
+        during rendering.
219
+        """
220
+        self.templates[config_file] = OSConfigTemplate(config_file=config_file,
221
+                                                       contexts=contexts)
222
+        log('Registered config file: %s' % config_file, level=INFO)
223
+
224
+    def _get_tmpl_env(self):
225
+        if not self._tmpl_env:
226
+            loader = get_loader(self.templates_dir, self.openstack_release)
227
+            self._tmpl_env = Environment(loader=loader)
228
+
229
+    def _get_template(self, template):
230
+        self._get_tmpl_env()
231
+        template = self._tmpl_env.get_template(template)
232
+        log('Loaded template from %s' % template.filename, level=INFO)
233
+        return template
234
+
235
+    def render(self, config_file):
236
+        if config_file not in self.templates:
237
+            log('Config not registered: %s' % config_file, level=ERROR)
238
+            raise OSConfigException
239
+        ctxt = self.templates[config_file].context()
240
+
241
+        _tmpl = os.path.basename(config_file)
242
+        try:
243
+            template = self._get_template(_tmpl)
244
+        except exceptions.TemplateNotFound:
245
+            # if no template is found with basename, try looking for it
246
+            # using a munged full path, eg:
247
+            #   /etc/apache2/apache2.conf -> etc_apache2_apache2.conf
248
+            _tmpl = '_'.join(config_file.split('/')[1:])
249
+            try:
250