~jamesbeedy/elasticsearch-1

Owner: jamesbeedy
Status: Needs Fixing
Vote: -1 (+2 needed for approval)

CPP?: No
OIL?: No

Support ES 5.x!


Tests

Substrate Status Results Last Updated
lxc RETRY 19 days ago
gce RETRY 19 days ago
aws RETRY 19 days ago

Voted: -1
kwmonroe wrote 2 months ago
Looks good James! Thanks for the work here to bring this up to ES5. I have a couple minor suggestions for you:

http://paste.ubuntu.com/24420775/

- use juju2 syntax (juju config vs juju set)
- be more explicit in the yaml required to configure the es5 repo
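For example, a minimal sketch of those two suggestions, reusing the ES 5.x repository, key URL, and key ID already shown in the charm's README and config.yaml (illustrative only; the reviewer's actual patch is in the paste linked above):

    # Juju 2.x syntax: "juju config" replaces the Juju 1.x "juju set"
    juju config elasticsearch firewall_enabled=false

    # es-config.yaml -- options keyed by application name, which is the
    # format "juju deploy --config <file>" expects (values from the README):
    #
    # elasticsearch:
    #   apt-repository: "deb https://artifacts.elastic.co/packages/5.x/apt stable main"
    #   apt-key-url: "https://artifacts.elastic.co/GPG-KEY-elasticsearch"
    #   gpg-key-id: "D88E42B4"
    juju deploy elasticsearch --config es-config.yaml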

I'd also like to move this charm to the newly created https://launchpad.net/~elastic-ops team to distribute charm maintenance (you're already a member there). Let's figure out an appropriate repository to house this charm. I'm going to set this to "Needs fixing" for now while we work out those details.

Thanks again!
Voted: +0
jamesbeedy wrote 2 months ago
@kwmonroe Thanks for the review. I'll see about putting up a fix for those asap!



Policy Checklist

Description Unreviewed Pass Fail

General

Must verify that any software installed or utilized is verified as coming from the intended source. kwmonroe
  • Any software installed from the Ubuntu or CentOS default archives satisfies this due to the apt and yum sources including cryptographic signing information.
  • Third party repositories must be listed as a configuration option that can be overridden by the user and not hard coded in the charm itself.
  • Launchpad PPAs are acceptable as the add-apt-repository command retrieves the keys securely.
  • Other third party repositories are acceptable if the signing key is embedded in the charm.
Must provide a means to protect users from known security vulnerabilities in a way consistent with best practices as defined by either operating system policies or upstream documentation. kwmonroe
Basically, this means there must be instructions on how to apply updates if you use software not from distribution channels.
Must have hooks that are idempotent. kwmonroe
Should be built using charm layers.
Should use Juju Resources to deliver required payloads.

Testing and Quality

charm proof must pass without errors or warnings. kwmonroe
Must include passing unit, functional, or integration tests. kwmonroe
Tests must exercise all relations. kwmonroe
Tests must exercise config. kwmonroe
set-config, unset-config, and re-set must be tested as a minimum
Must not use anything infrastructure-provider specific (e.g. querying the EC2 metadata service). kwmonroe
Must be self contained unless the charm is a proxy for an existing cloud service, e.g. ec2-elb charm.
Must not use symlinks. kwmonroe
Bundles must only use promulgated charms, they cannot reference charms in personal namespaces.
Must call Juju hook tools (relation-*, unit-*, config-*, etc.) without a hard-coded path. kwmonroe
Should include a tests.yaml for all integration tests. kwmonroe

Metadata

Must include a full description of what the software does. kwmonroe
Must include a maintainer email address for a team or individual who will be responsive to contact. kwmonroe
Must include a license. Call the file 'copyright' and make sure all files' licenses are specified clearly. kwmonroe
Must be under a Free license. kwmonroe
Must have a well documented and valid README.md. kwmonroe
Must describe the service. kwmonroe
Must describe how it interacts with other services, if applicable. kwmonroe
Must document the interfaces. kwmonroe
Must show how to deploy the charm. kwmonroe
Must define external dependencies, if applicable. kwmonroe
Should link to a recommended production usage bundle and recommended configuration if this differs from the default. kwmonroe
Should reference and link to upstream documentation and best practices. kwmonroe

Security

Must not run any network services using default passwords. kwmonroe
Must verify and validate any external payload. kwmonroe
  • Known and understood packaging systems that verify packages like apt, pip, and yum are ok.
  • wget | sh style is not ok.
Should make use of whatever Mandatory Access Control system is provided by the distribution. kwmonroe
Should avoid running services as root. kwmonroe

Source Diff

Files changed 64

Inline diff comments 0

No comments yet.

Back to file index

HACKING.md

---
+++ HACKING.md
@@ -0,0 +1,17 @@
+# Local development
+
+To deploy ElasticSearch locally, pull the bzr branch into your
+local charm repository and deploy from there:
+
+    mkdir -p ~/charms/trusty && cd ~/charms/trusty
+    charm-get trusty/elasticsearch
+    juju bootstrap
+    juju deploy --repository=../.. local:elasticsearch
+
+
+# Testing the ElasticSearch charm
+
+Run the unit-tests with `make test`.
+
+Run the functional tests with `juju test`.
+
Back to file index

Makefile

---
+++ Makefile
@@ -0,0 +1,29 @@
+#!/usr/bin/make
+PYTHON := /usr/bin/env python
+
+build: sync-charm-helpers test
+
+lint:
+	@flake8 --exclude hooks/charmhelpers --ignore=E125 hooks
+	@flake8 --exclude hooks/charmhelpers --ignore=E125 unit_tests
+	@charm proof
+
+test:
+	@echo Starting unit tests...
+	@PYTHONPATH=./hooks $(PYTHON) unit_tests/test_hooks.py
+
+deploy:
+	@echo Deploying local elasticsearch charm
+	@juju deploy --num-units=2 --repository=../.. local:trusty/elasticsearch
+
+health:
+	juju ssh elasticsearch/0 "curl http://localhost:9200/_cluster/health?pretty=true"
+
+# The following targets are used for charm maintenance only.
+bin/charm_helpers_sync.py:
+	@bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py \
+		> bin/charm_helpers_sync.py
+
+sync-charm-helpers: bin/charm_helpers_sync.py
+	@$(PYTHON) bin/charm_helpers_sync.py -c charm-helpers.yaml
+
Back to file index
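Assuming the targets defined in the Makefile above, a typical local iteration looks roughly like:

    make sync-charm-helpers   # refresh hooks/charmhelpers per charm-helpers.yaml
    make lint                 # flake8 on hooks/ and unit_tests/, plus charm proof
    make test                 # run unit_tests/test_hooks.py
    make deploy               # deploy two local trusty/elasticsearch units
    make health               # query the cluster health endpoint on unit 0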

README.md

---
+++ README.md
@@ -0,0 +1,107 @@
+# Overview
+
+Elasticsearch is a flexible and powerful open source, distributed, real-time
+search and analytics engine. Architected from the ground up for use in
+distributed environments where reliability and scalability are must haves,
+Elasticsearch gives you the ability to move easily beyond simple full-text
+search. Through its robust set of APIs and query DSLs, plus clients for the
+most popular programming languages, Elasticsearch delivers on the near
+limitless promises of search technology.
+
+Excerpt from [elasticsearch.org](http://www.elasticsearch.org/overview/ "Elasticsearch Overview")
+
+# Usage
+
+You can simply deploy one node with:
+
+    juju deploy elasticsearch
+
+You can also deploy and relate the Kibana dashboard:
+
+    juju deploy kibana
+    juju add-relation kibana elasticsearch
+    juju expose kibana
+
+This will expose the Kibana web UI, which will then act as a front end to
+all subsequent Elasticsearch units.
+
+## Scale Out Usage
+
+Deploy three or more units with:
+
+    juju deploy -n3 elasticsearch
+
+And when they have started you can inspect the cluster health:
+
+    juju run --unit elasticsearch/0 "curl http://localhost:9200/_cat/health?v"
+    epoch      timestamp cluster       status node.total node.data shards ...
+    1404728290 10:18:10  elasticsearch green           2         2      0
+
+Note that for security reasons the admin port (9200) is only accessible from
+the instance itself and any clients that join. Similarly the node-to-node
+communication port (9300) is only available to other units in the elasticsearch
+service. You can change this explicitly with:
+
+    juju set elasticsearch firewall_enabled=false
+
+See the separate HACKING.md for information about deploying this charm
+from a local repository.
+
+### Relating to the Elasticsearch cluster
+
+This charm currently provides the elasticsearch client interface to the
+consuming service (cluster-name, host and port). Normally the other service
+will only need this data from one elasticsearch unit to start as most client
+libraries then query for the list of backends [1].
+
+[1] http://elasticsearch-py.readthedocs.org/en/latest/api.html#elasticsearch
+
+### Discovery
+
+This charm uses unicast discovery which utilises the orchestration
+of juju so that whether you deploy on ec2, lxc or any other cloud
+provider, the functionality for discovering other nodes remains the same.
+
+When a new unit first joins the cluster, it will update its config
+with the other units in the cluster (via the peer-relation-joined
+hook), after which ElasticSearch handles the rest.
+
+# Configuration
+
+## Elasticsearch 5.x
+This charm fully supports Elasticsearch 5.x!
+To deploy ES 5.x add the following to your config prior
+to deploying the charm.
+
+Example config for Elasticsearch 5.x
+```yaml
+# es-config.yaml
+
+apt-repository: "deb https://artifacts.elastic.co/packages/5.x/apt stable main"
+apt-key-url: "https://artifacts.elastic.co/GPG-KEY-elasticsearch"
+gpg-key-id: "D88E42B4"
+
+```
+Then reference the config when deploying this charm.
+```bash
+juju deploy elasticsearch --config es-config.yaml
+```
+
+## Downloading ElasticSearch
+
+This charm installs elasticsearch from a configured apt repository.
+By default, this is the 1.0 repository from elasticsearch.org, but
+you can configure your own internal repo if you don't want your
+deployment to be dependent on external resources.
+
+Alternatively, you can include a files/elasticsearch.deb in the
+charm payload and it will be installed instead.
+
+# Contact Information
+
+## Elasticsearch
+
+- [Elasticsearch website](http://www.elasticsearch.org/)
+- [Source code](http://github.com/elasticsearch)
+- [Mailing List](https://groups.google.com/forum/?fromgroups#!forum/elasticsearch)
+- [Other community resources](http://www.elasticsearch.org/community/)
Back to file index

ansible_module_backports/ufw

  1
--- 
  2
+++ ansible_module_backports/ufw
  3
@@ -0,0 +1,264 @@
  4
+#!/usr/bin/python
  5
+# -*- coding: utf-8 -*-
  6
+
  7
+# (c) 2014, Ahti Kitsik <ak@ahtik.com>
  8
+# (c) 2014, Jarno Keskikangas <jarno.keskikangas@gmail.com>
  9
+# (c) 2013, Aleksey Ovcharenko <aleksey.ovcharenko@gmail.com>
 10
+# (c) 2013, James Martin <jmartin@basho.com>
 11
+#
 12
+# This file is part of Ansible
 13
+#
 14
+# Ansible is free software: you can redistribute it and/or modify
 15
+# it under the terms of the GNU General Public License as published by
 16
+# the Free Software Foundation, either version 3 of the License, or
 17
+# (at your option) any later version.
 18
+#
 19
+# Ansible is distributed in the hope that it will be useful,
 20
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
 21
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 22
+# GNU General Public License for more details.
 23
+#
 24
+# You should have received a copy of the GNU General Public License
 25
+# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
 26
+
 27
+DOCUMENTATION = '''
 28
+---
 29
+module: ufw
 30
+short_description: Manage firewall with UFW
 31
+description:
 32
+    - Manage firewall with UFW.
 33
+version_added: 1.6
 34
+author: Aleksey Ovcharenko, Jarno Keskikangas, Ahti Kitsik
 35
+notes:
 36
+    - See C(man ufw) for more examples.
 37
+requirements:
 38
+    - C(ufw) package
 39
+options:
 40
+  state:
 41
+    description:
 42
+      - C(enabled) reloads firewall and enables firewall on boot.
 43
+      - C(disabled) unloads firewall and disables firewall on boot.
 44
+      - C(reloaded) reloads firewall.
 45
+      - C(reset) disables and resets firewall to installation defaults.
 46
+    required: false
 47
+    choices: ['enabled', 'disabled', 'reloaded', 'reset']
 48
+  policy:
 49
+    description:
 50
+      - Change the default policy for incoming or outgoing traffic.
 51
+    required: false
 52
+    alias: default
 53
+    choices: ['allow', 'deny', 'reject']
 54
+  direction:
 55
+    description:
 56
+      - Select direction for a rule or default policy command.
 57
+    required: false
 58
+    choices: ['in', 'out', 'incoming', 'outgoing']
 59
+  logging:
 60
+    description:
 61
+      - Toggles logging. Logged packets use the LOG_KERN syslog facility.
 62
+    choices: ['on', 'off', 'low', 'medium', 'high', 'full']
 63
+    required: false
 64
+  insert:
 65
+    description:
 66
+      - Insert the corresponding rule as rule number NUM
 67
+    required: false
 68
+  rule:
 69
+    description:
 70
+      - Add firewall rule
 71
+    required: false
 72
+    choices: ['allow', 'deny', 'reject', 'limit']
 73
+  log:
 74
+    description:
 75
+      - Log new connections matched to this rule
 76
+    required: false
 77
+    choices: ['yes', 'no']
 78
+  from_ip:
 79
+    description:
 80
+      - Source IP address.
 81
+    required: false
 82
+    aliases: ['from', 'src']
 83
+    default: 'any'
 84
+  from_port:
 85
+    description:
 86
+      - Source port.
 87
+    required: false
 88
+  to_ip:
 89
+    description:
 90
+      - Destination IP address.
 91
+    required: false
 92
+    aliases: ['to', 'dest']
 93
+    default: 'any'
 94
+  to_port:
 95
+    description:
 96
+      - Destination port.
 97
+    required: false
 98
+    aliases: ['port']
 99
+  proto:
100
+    description:
101
+      - TCP/IP protocol.
102
+    choices: ['any', 'tcp', 'udp', 'ipv6', 'esp', 'ah']
103
+    required: false
104
+  name:
105
+    description:
106
+      - Use profile located in C(/etc/ufw/applications.d)
107
+    required: false
108
+    aliases: ['app']
109
+  delete:
110
+    description:
111
+      - Delete rule.
112
+    required: false
113
+    choices: ['yes', 'no']
114
+'''
115
+
116
+EXAMPLES = '''
117
+# Allow everything and enable UFW
118
+ufw: state=enabled policy=allow
119
+
120
+# Set logging
121
+ufw: logging=on
122
+
123
+# Sometimes it is desirable to let the sender know when traffic is
124
+# being denied, rather than simply ignoring it. In these cases, use
125
+# reject instead of deny. In addition, log rejected connections:
126
+ufw: rule=reject port=auth log=yes
127
+
128
+# ufw supports connection rate limiting, which is useful for protecting
129
+# against brute-force login attacks. ufw will deny connections if an IP
130
+# address has attempted to initiate 6 or more connections in the last
131
+# 30 seconds. See  http://www.debian-administration.org/articles/187
132
+# for details. Typical usage is:
133
+ufw: rule=limit port=ssh proto=tcp
134
+
135
+# Allow OpenSSH
136
+ufw: rule=allow name=OpenSSH
137
+
138
+# Delete OpenSSH rule
139
+ufw: rule=allow name=OpenSSH delete=yes
140
+
141
+# Deny all access to port 53:
142
+ufw: rule=deny port=53
143
+
144
+# Allow all access to tcp port 80:
145
+ufw: rule=allow port=80 proto=tcp
146
+
147
+# Allow all access from RFC1918 networks to this host:
148
+ufw: rule=allow src={{ item }}
149
+with_items:
150
+- 10.0.0.0/8
151
+- 172.16.0.0/12
152
+- 192.168.0.0/16
153
+
154
+# Deny access to udp port 514 from host 1.2.3.4:
155
+ufw: rule=deny proto=udp src=1.2.3.4 port=514
156
+
157
+# Allow incoming access to eth0 from 1.2.3.5 port 5469 to 1.2.3.4 port 5469
158
+ufw: rule=allow interface=eth0 direction=in proto=udp src=1.2.3.5 from_port=5469 dest=1.2.3.4 to_port=5469
159
+
160
+# Deny all traffic from the IPv6 2001:db8::/32 to tcp port 25 on this host.
161
+# Note that IPv6 must be enabled in /etc/default/ufw for IPv6 firewalling to work.
162
+ufw: rule=deny proto=tcp src=2001:db8::/32 port=25
163
+'''
164
+
165
+from operator import itemgetter
166
+
167
+
168
+def main():
169
+    module = AnsibleModule(
170
+        argument_spec = dict(
171
+            state     = dict(default=None,  choices=['enabled', 'disabled', 'reloaded', 'reset']),
172
+            default   = dict(default=None,  aliases=['policy'], choices=['allow', 'deny', 'reject']),
173
+            logging   = dict(default=None,  choices=['on', 'off', 'low', 'medium', 'high', 'full']),
174
+            direction = dict(default=None,  choices=['in', 'incoming', 'out', 'outgoing']),
175
+            delete    = dict(default=False, type='bool'),
176
+            insert    = dict(default=None),
177
+            rule      = dict(default=None,  choices=['allow', 'deny', 'reject', 'limit']),
178
+            interface = dict(default=None,  aliases=['if']),
179
+            log       = dict(default=False, type='bool'),
180
+            from_ip   = dict(default='any', aliases=['src', 'from']),
181
+            from_port = dict(default=None),
182
+            to_ip     = dict(default='any', aliases=['dest', 'to']),
183
+            to_port   = dict(default=None,  aliases=['port']),
184
+            proto     = dict(default=None,  aliases=['protocol'], choices=['any', 'tcp', 'udp', 'ipv6', 'esp', 'ah']),
185
+            app       = dict(default=None,  aliases=['name'])
186
+        ),
187
+        supports_check_mode = True,
188
+        mutually_exclusive = [['app', 'proto', 'logging']]
189
+    )
190
+
191
+    cmds = []
192
+
193
+    def execute(cmd):
194
+        cmd = ' '.join(map(itemgetter(-1), filter(itemgetter(0), cmd)))
195
+
196
+        cmds.append(cmd)
197
+        (rc, out, err) = module.run_command(cmd)
198
+
199
+        if rc != 0:
200
+            module.fail_json(msg=err or out)
201
+
202
+    params = module.params
203
+
204
+    # Ensure at least one of the command arguments are given
205
+    command_keys = ['state', 'default', 'rule', 'logging']
206
+    commands = dict((key, params[key]) for key in command_keys if params[key])
207
+
208
+    if len(commands) < 1:
209
+        module.fail_json(msg="Not any of the command arguments %s given" % commands)
210
+
211
+    if('interface' in params and 'direction' not in params):
212
+      module.fail_json(msg="Direction must be specified when creating a rule on an interface")
213
+
214
+    # Ensure ufw is available
215
+    ufw_bin = module.get_bin_path('ufw', True)
216
+
217
+    # Save the pre state and rules in order to recognize changes
218
+    (_, pre_state, _) = module.run_command(ufw_bin + ' status verbose')
219
+    (_, pre_rules, _) = module.run_command("grep '^### tuple' /lib/ufw/user*.rules")
220
+
221
+    # Execute commands
222
+    for (command, value) in commands.iteritems():
223
+        cmd = [[ufw_bin], [module.check_mode, '--dry-run']]
224
+
225
+        if command == 'state':
226
+            states = { 'enabled': 'enable',  'disabled': 'disable',
227
+                       'reloaded': 'reload', 'reset': 'reset' }
228
+            execute(cmd + [['-f'], [states[value]]])
229
+
230
+        elif command == 'logging':
231
+            execute(cmd + [[command], [value]])
232
+
233
+        elif command == 'default':
234
+            execute(cmd + [[command], [value], [params['direction']]])
235
+
236
+        elif command == 'rule':
237
+            # Rules are constructed according to the long format
238
+            #
239
+            # ufw [--dry-run] [delete] [insert NUM] allow|deny|reject|limit [in|out on INTERFACE] [log|log-all] \
240
+            #     [from ADDRESS [port PORT]] [to ADDRESS [port PORT]] \
241
+            #     [proto protocol] [app application]
242
+            cmd.append([module.boolean(params['delete']), 'delete'])
243
+            cmd.append([params['insert'], "insert %s" % params['insert']])
244
+            cmd.append([value])
245
+            cmd.append([module.boolean(params['log']), 'log'])
246
+
247
+            for (key, template) in [('direction', "%s"      ), ('interface', "on %s"   ),
248
+                                    ('from_ip',   "from %s" ), ('from_port', "port %s" ),
249
+                                    ('to_ip',     "to %s"   ), ('to_port',   "port %s" ),
250
+                                    ('proto',     "proto %s"), ('app',       "app '%s'")]:
251
+
252
+                value = params[key]
253
+                cmd.append([value, template % (value)])
254
+
255
+            execute(cmd)
256
+
257
+    # Get the new state
258
+    (_, post_state, _) = module.run_command(ufw_bin + ' status verbose')
259
+    (_, post_rules, _) = module.run_command("grep '^### tuple' /lib/ufw/user*.rules")
260
+    changed = (pre_state != post_state) or (pre_rules != post_rules)
261
+
262
+    return module.exit_json(changed=changed, commands=cmds, msg=post_state.rstrip())
263
+
264
+# import module snippets
265
+from ansible.module_utils.basic import *
266
+
267
+main()
Back to file index
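As a concrete illustration of the long-format command construction in the `rule` branch of the module above, a hypothetical task such as `ufw: rule=allow to_port=9200 proto=tcp` (with the default `from_ip`/`to_ip` of `any`) ends up executing roughly:

    # command assembled by the module; in check mode "--dry-run" is inserted after "ufw"
    ufw allow from any to any port 9200 proto tcp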

charm-helpers.yaml

---
+++ charm-helpers.yaml
@@ -0,0 +1,8 @@
+branch: lp:charm-helpers
+destination: hooks/charmhelpers
+include:
+    - core
+    - fetch
+    - contrib.ansible|inc=*
+    - contrib.templating.contexts
+    - payload.execd|inc=*
Back to file index
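This sync configuration is consumed by the `sync-charm-helpers` target in the Makefile shown earlier; refreshing the vendored helpers amounts to roughly:

    # fetch the sync tool, then copy the listed modules into hooks/charmhelpers
    bzr cat lp:charm-helpers/tools/charm_helpers_sync/charm_helpers_sync.py > bin/charm_helpers_sync.py
    python bin/charm_helpers_sync.py -c charm-helpers.yaml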

config.yaml

---
+++ config.yaml
@@ -0,0 +1,29 @@
+options:
+  apt-repository:
+    type: string
+    default: "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main"
+    description: |
+      A deb-line for the apt archive which contains the elasticsearch package.
+      This is necessary until elasticsearch gets into the debian/ubuntu archives.
+  apt-key-url:
+    type: string
+    default: "http://packages.elasticsearch.org/GPG-KEY-elasticsearch"
+    description: |
+      The url for the key for the apt-repository.
+  gpg-key-id:
+    type: string
+    default: D88E42B4
+    description: |
+      Elasticsearch's GPG fingerprint to validate the apt key
+  cluster-name:
+    type: string
+    default: "elasticsearch"
+    description: |
+      This sets the elasticsearch cluster name.
+  firewall_enabled:
+    type: boolean
+    default: true
+    description: |
+      By default, the admin and peer ports (9200 and 9300) are only accessible
+      to clients and peers respectively. Switch this to false to enable access
+      from any machine.
Back to file index
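With the options defined above, runtime reconfiguration uses the Juju 2.x `juju config` syntax flagged in the review, for example:

    # inspect current settings for the application
    juju config elasticsearch

    # override a setting at runtime (illustrative value)
    juju config elasticsearch cluster-name=logging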

copyright

---
+++ copyright
@@ -0,0 +1,17 @@
+Format: http://dep.debian.net/deps/dep5/
+
+Files: *
+Copyright: Copyright 2014, Canonical Ltd., All Rights Reserved.
+License: GPL-3
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+ .
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ GNU General Public License for more details.
+ .
+ You should have received a copy of the GNU General Public License
+ along with this program.  If not, see <http://www.gnu.org/licenses/>.
Back to file index

hooks/charmhelpers/contrib/ansible/__init__.py

  1
--- 
  2
+++ hooks/charmhelpers/contrib/ansible/__init__.py
  3
@@ -0,0 +1,171 @@
  4
+# Copyright 2013 Canonical Ltd.
  5
+#
  6
+# Authors:
  7
+#  Charm Helpers Developers <juju@lists.ubuntu.com>
  8
+"""Charm Helpers ansible - declare the state of your machines.
  9
+
 10
+This helper enables you to declare your machine state, rather than
 11
+program it procedurally (and have to test each change to your procedures).
 12
+Your install hook can be as simple as::
 13
+
 14
+    {{{
 15
+    import charmhelpers.contrib.ansible
 16
+
 17
+
 18
+    def install():
 19
+        charmhelpers.contrib.ansible.install_ansible_support()
 20
+        charmhelpers.contrib.ansible.apply_playbook('playbooks/install.yaml')
 21
+    }}}
 22
+
 23
+and won't need to change (nor will its tests) when you change the machine
 24
+state.
 25
+
 26
+All of your juju config and relation-data are available as template
 27
+variables within your playbooks and templates. An install playbook looks
 28
+something like::
 29
+
 30
+    {{{
 31
+    ---
 32
+    - hosts: localhost
 33
+      user: root
 34
+
 35
+      tasks:
 36
+        - name: Add private repositories.
 37
+          template:
 38
+            src: ../templates/private-repositories.list.jinja2
 39
+            dest: /etc/apt/sources.list.d/private.list
 40
+
 41
+        - name: Update the cache.
 42
+          apt: update_cache=yes
 43
+
 44
+        - name: Install dependencies.
 45
+          apt: pkg={{ item }}
 46
+          with_items:
 47
+            - python-mimeparse
 48
+            - python-webob
 49
+            - sunburnt
 50
+
 51
+        - name: Setup groups.
 52
+          group: name={{ item.name }} gid={{ item.gid }}
 53
+          with_items:
 54
+            - { name: 'deploy_user', gid: 1800 }
 55
+            - { name: 'service_user', gid: 1500 }
 56
+
 57
+      ...
 58
+    }}}
 59
+
 60
+Read more online about `playbooks`_ and standard ansible `modules`_.
 61
+
 62
+.. _playbooks: http://www.ansibleworks.com/docs/playbooks.html
 63
+.. _modules: http://www.ansibleworks.com/docs/modules.html
 64
+
 65
+"""
 66
+import os
 67
+import subprocess
 68
+
 69
+import charmhelpers.contrib.templating.contexts
 70
+import charmhelpers.core.host
 71
+import charmhelpers.core.hookenv
 72
+import charmhelpers.fetch
 73
+
 74
+
 75
+charm_dir = os.environ.get('CHARM_DIR', '')
 76
+ansible_hosts_path = '/etc/ansible/hosts'
 77
+# Ansible will automatically include any vars in the following
 78
+# file in its inventory when run locally.
 79
+ansible_vars_path = '/etc/ansible/host_vars/localhost'
 80
+
 81
+
 82
+def install_ansible_support(from_ppa=True, ppa_location='ppa:rquillo/ansible'):
 83
+    """Installs the ansible package.
 84
+
 85
+    By default it is installed from the `PPA`_ linked from
 86
+    the ansible `website`_ or from a ppa specified by a charm config..
 87
+
 88
+    .. _PPA: https://launchpad.net/~rquillo/+archive/ansible
 89
+    .. _website: http://docs.ansible.com/intro_installation.html#latest-releases-via-apt-ubuntu
 90
+
 91
+    If from_ppa is empty, you must ensure that the package is available
 92
+    from a configured repository.
 93
+    """
 94
+    if from_ppa:
 95
+        charmhelpers.fetch.add_source(ppa_location)
 96
+        charmhelpers.fetch.apt_update(fatal=True)
 97
+    charmhelpers.fetch.apt_install('ansible')
 98
+    with open(ansible_hosts_path, 'w+') as hosts_file:
 99
+        hosts_file.write('localhost ansible_connection=local')
100
+
101
+
102
+def apply_playbook(playbook, tags=None):
103
+    tags = tags or []
104
+    tags = ",".join(tags)
105
+    charmhelpers.contrib.templating.contexts.juju_state_to_yaml(
106
+        ansible_vars_path, namespace_separator='__',
107
+        allow_hyphens_in_keys=False)
108
+    # we want ansible's log output to be unbuffered
109
+    env = os.environ.copy()
110
+    env['PYTHONUNBUFFERED'] = "1"
111
+    call = [
112
+        'ansible-playbook',
113
+        '-c',
114
+        'local',
115
+        playbook,
116
+    ]
117
+    if tags:
118
+        call.extend(['--tags', '{}'.format(tags)])
119
+    subprocess.check_call(call, env=env)
120
+
121
+
122
+class AnsibleHooks(charmhelpers.core.hookenv.Hooks):
123
+    """Run a playbook with the hook-name as the tag.
124
+
125
+    This helper builds on the standard hookenv.Hooks helper,
126
+    but additionally runs the playbook with the hook-name specified
127
+    using --tags (ie. running all the tasks tagged with the hook-name).
128
+
129
+    Example::
130
+
131
+        hooks = AnsibleHooks(playbook_path='playbooks/my_machine_state.yaml')
132
+
133
+        # All the tasks within my_machine_state.yaml tagged with 'install'
134
+        # will be run automatically after do_custom_work()
135
+        @hooks.hook()
136
+        def install():
137
+            do_custom_work()
138
+
139
+        # For most of your hooks, you won't need to do anything other
140
+        # than run the tagged tasks for the hook:
141
+        @hooks.hook('config-changed', 'start', 'stop')
142
+        def just_use_playbook():
143
+            pass
144
+
145
+        # As a convenience, you can avoid the above noop function by specifying
146
+        # the hooks which are handled by ansible-only and they'll be registered
147
+        # for you:
148
+        # hooks = AnsibleHooks(
149
+        #     'playbooks/my_machine_state.yaml',
150
+        #     default_hooks=['config-changed', 'start', 'stop'])
151
+
152
+        if __name__ == "__main__":
153
+            # execute a hook based on the name the program is called by
154
+            hooks.execute(sys.argv)
155
+
156
+    """
157
+
158
+    def __init__(self, playbook_path, default_hooks=None):
159
+        """Register any hooks handled by ansible."""
160
+        super(AnsibleHooks, self).__init__()
161
+
162
+        self.playbook_path = playbook_path
163
+
164
+        default_hooks = default_hooks or []
165
+        noop = lambda *args, **kwargs: None
166
+        for hook in default_hooks:
167
+            self.register(hook, noop)
168
+
169
+    def execute(self, args):
170
+        """Execute the hook followed by the playbook using the hook as tag."""
171
+        super(AnsibleHooks, self).execute(args)
172
+        hook_name = os.path.basename(args[0])
173
+        charmhelpers.contrib.ansible.apply_playbook(
174
+            self.playbook_path, tags=[hook_name])
Back to file index

hooks/charmhelpers/contrib/templating/contexts.py

  1
--- 
  2
+++ hooks/charmhelpers/contrib/templating/contexts.py
  3
@@ -0,0 +1,116 @@
  4
+# Copyright 2013 Canonical Ltd.
  5
+#
  6
+# Authors:
  7
+#  Charm Helpers Developers <juju@lists.ubuntu.com>
  8
+"""A helper to create a yaml cache of config with namespaced relation data."""
  9
+import os
 10
+import yaml
 11
+
 12
+import charmhelpers.core.hookenv
 13
+
 14
+
 15
+charm_dir = os.environ.get('CHARM_DIR', '')
 16
+
 17
+
 18
+def dict_keys_without_hyphens(a_dict):
 19
+    """Return the a new dict with underscores instead of hyphens in keys."""
 20
+    return dict(
 21
+        (key.replace('-', '_'), val) for key, val in a_dict.items())
 22
+
 23
+
 24
+def update_relations(context, namespace_separator=':'):
 25
+    """Update the context with the relation data."""
 26
+    # Add any relation data prefixed with the relation type.
 27
+    relation_type = charmhelpers.core.hookenv.relation_type()
 28
+    relations = []
 29
+    context['current_relation'] = {}
 30
+    if relation_type is not None:
 31
+        relation_data = charmhelpers.core.hookenv.relation_get()
 32
+        context['current_relation'] = relation_data
 33
+        # Deprecated: the following use of relation data as keys
 34
+        # directly in the context will be removed.
 35
+        relation_data = dict(
 36
+            ("{relation_type}{namespace_separator}{key}".format(
 37
+                relation_type=relation_type,
 38
+                key=key,
 39
+                namespace_separator=namespace_separator), val)
 40
+            for key, val in relation_data.items())
 41
+        relation_data = dict_keys_without_hyphens(relation_data)
 42
+        context.update(relation_data)
 43
+        relations = charmhelpers.core.hookenv.relations_of_type(relation_type)
 44
+        relations = [dict_keys_without_hyphens(rel) for rel in relations]
 45
+
 46
+    context['relations_full'] = charmhelpers.core.hookenv.relations()
 47
+
 48
+    # the hookenv.relations() data structure is effectively unusable in
 49
+    # templates and other contexts when trying to access relation data other
 50
+    # than the current relation. So provide a more useful structure that works
 51
+    # with any hook.
 52
+    local_unit = charmhelpers.core.hookenv.local_unit()
 53
+    relations = {}
 54
+    for rname, rids in context['relations_full'].items():
 55
+        relations[rname] = []
 56
+        for rid, rdata in rids.items():
 57
+            data = rdata.copy()
 58
+            if local_unit in rdata:
 59
+                data.pop(local_unit)
 60
+            for unit_name, rel_data in data.items():
 61
+                new_data = {'__relid__': rid, '__unit__': unit_name}
 62
+                new_data.update(rel_data)
 63
+                relations[rname].append(new_data)
 64
+    context['relations'] = relations
 65
+
 66
+
 67
+def juju_state_to_yaml(yaml_path, namespace_separator=':',
 68
+                       allow_hyphens_in_keys=True):
 69
+    """Update the juju config and state in a yaml file.
 70
+
 71
+    This includes any current relation-get data, and the charm
 72
+    directory.
 73
+
 74
+    This function was created for the ansible and saltstack
 75
+    support, as those libraries can use a yaml file to supply
 76
+    context to templates, but it may be useful generally to
 77
+    create and update an on-disk cache of all the config, including
 78
+    previous relation data.
 79
+
 80
+    By default, hyphens are allowed in keys as this is supported
 81
+    by yaml, but for tools like ansible, hyphens are not valid [1].
 82
+
 83
+    [1] http://www.ansibleworks.com/docs/playbooks_variables.html#what-makes-a-valid-variable-name
 84
+    """
 85
+    config = charmhelpers.core.hookenv.config()
 86
+
 87
+    # Add the charm_dir which we will need to refer to charm
 88
+    # file resources etc.
 89
+    config['charm_dir'] = charm_dir
 90
+    config['local_unit'] = charmhelpers.core.hookenv.local_unit()
 91
+    config['unit_private_address'] = charmhelpers.core.hookenv.unit_private_ip()
 92
+    config['unit_public_address'] = charmhelpers.core.hookenv.unit_get(
 93
+        'public-address'
 94
+    )
 95
+
 96
+    # Don't use non-standard tags for unicode which will not
 97
+    # work when salt uses yaml.load_safe.
 98
+    yaml.add_representer(unicode, lambda dumper,
 99
+                         value: dumper.represent_scalar(
100
+                             u'tag:yaml.org,2002:str', value))
101
+
102
+    yaml_dir = os.path.dirname(yaml_path)
103
+    if not os.path.exists(yaml_dir):
104
+        os.makedirs(yaml_dir)
105
+
106
+    if os.path.exists(yaml_path):
107
+        with open(yaml_path, "r") as existing_vars_file:
108
+            existing_vars = yaml.load(existing_vars_file.read())
109
+    else:
110
+        existing_vars = {}
111
+
112
+    if not allow_hyphens_in_keys:
113
+        config = dict_keys_without_hyphens(config)
114
+    existing_vars.update(config)
115
+
116
+    update_relations(existing_vars, namespace_separator)
117
+
118
+    with open(yaml_path, "w+") as fp:
119
+        fp.write(yaml.dump(existing_vars, default_flow_style=False))
Back to file index

hooks/charmhelpers/core/fstab.py

  1
--- 
  2
+++ hooks/charmhelpers/core/fstab.py
  3
@@ -0,0 +1,116 @@
  4
+#!/usr/bin/env python
  5
+# -*- coding: utf-8 -*-
  6
+
  7
+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
  8
+
  9
+import os
 10
+
 11
+
 12
+class Fstab(file):
 13
+    """This class extends file in order to implement a file reader/writer
 14
+    for file `/etc/fstab`
 15
+    """
 16
+
 17
+    class Entry(object):
 18
+        """Entry class represents a non-comment line on the `/etc/fstab` file
 19
+        """
 20
+        def __init__(self, device, mountpoint, filesystem,
 21
+                     options, d=0, p=0):
 22
+            self.device = device
 23
+            self.mountpoint = mountpoint
 24
+            self.filesystem = filesystem
 25
+
 26
+            if not options:
 27
+                options = "defaults"
 28
+
 29
+            self.options = options
 30
+            self.d = d
 31
+            self.p = p
 32
+
 33
+        def __eq__(self, o):
 34
+            return str(self) == str(o)
 35
+
 36
+        def __str__(self):
 37
+            return "{} {} {} {} {} {}".format(self.device,
 38
+                                              self.mountpoint,
 39
+                                              self.filesystem,
 40
+                                              self.options,
 41
+                                              self.d,
 42
+                                              self.p)
 43
+
 44
+    DEFAULT_PATH = os.path.join(os.path.sep, 'etc', 'fstab')
 45
+
 46
+    def __init__(self, path=None):
 47
+        if path:
 48
+            self._path = path
 49
+        else:
 50
+            self._path = self.DEFAULT_PATH
 51
+        file.__init__(self, self._path, 'r+')
 52
+
 53
+    def _hydrate_entry(self, line):
 54
+        # NOTE: use split with no arguments to split on any
 55
+        #       whitespace including tabs
 56
+        return Fstab.Entry(*filter(
 57
+            lambda x: x not in ('', None),
 58
+            line.strip("\n").split()))
 59
+
 60
+    @property
 61
+    def entries(self):
 62
+        self.seek(0)
 63
+        for line in self.readlines():
 64
+            try:
 65
+                if not line.startswith("#"):
 66
+                    yield self._hydrate_entry(line)
 67
+            except ValueError:
 68
+                pass
 69
+
 70
+    def get_entry_by_attr(self, attr, value):
 71
+        for entry in self.entries:
 72
+            e_attr = getattr(entry, attr)
 73
+            if e_attr == value:
 74
+                return entry
 75
+        return None
 76
+
 77
+    def add_entry(self, entry):
 78
+        if self.get_entry_by_attr('device', entry.device):
 79
+            return False
 80
+
 81
+        self.write(str(entry) + '\n')
 82
+        self.truncate()
 83
+        return entry
 84
+
 85
+    def remove_entry(self, entry):
 86
+        self.seek(0)
 87
+
 88
+        lines = self.readlines()
 89
+
 90
+        found = False
 91
+        for index, line in enumerate(lines):
 92
+            if not line.startswith("#"):
 93
+                if self._hydrate_entry(line) == entry:
 94
+                    found = True
 95
+                    break
 96
+
 97
+        if not found:
 98
+            return False
 99
+
100
+        lines.remove(line)
101
+
102
+        self.seek(0)
103
+        self.write(''.join(lines))
104
+        self.truncate()
105
+        return True
106
+
107
+    @classmethod
108
+    def remove_by_mountpoint(cls, mountpoint, path=None):
109
+        fstab = cls(path=path)
110
+        entry = fstab.get_entry_by_attr('mountpoint', mountpoint)
111
+        if entry:
112
+            return fstab.remove_entry(entry)
113
+        return False
114
+
115
+    @classmethod
116
+    def add(cls, device, mountpoint, filesystem, options=None, path=None):
117
+        return cls(path=path).add_entry(Fstab.Entry(device,
118
+                                                    mountpoint, filesystem,
119
+                                                    options=options))
Back to file index

hooks/charmhelpers/core/hookenv.py

  1
--- 
  2
+++ hooks/charmhelpers/core/hookenv.py
  3
@@ -0,0 +1,532 @@
  4
+"Interactions with the Juju environment"
  5
+# Copyright 2013 Canonical Ltd.
  6
+#
  7
+# Authors:
  8
+#  Charm Helpers Developers <juju@lists.ubuntu.com>
  9
+
 10
+import os
 11
+import json
 12
+import yaml
 13
+import subprocess
 14
+import sys
 15
+import UserDict
 16
+from subprocess import CalledProcessError
 17
+
 18
+CRITICAL = "CRITICAL"
 19
+ERROR = "ERROR"
 20
+WARNING = "WARNING"
 21
+INFO = "INFO"
 22
+DEBUG = "DEBUG"
 23
+MARKER = object()
 24
+
 25
+cache = {}
 26
+
 27
+
 28
+def cached(func):
 29
+    """Cache return values for multiple executions of func + args
 30
+
 31
+    For example::
 32
+
 33
+        @cached
 34
+        def unit_get(attribute):
 35
+            pass
 36
+
 37
+        unit_get('test')
 38
+
 39
+    will cache the result of unit_get + 'test' for future calls.
 40
+    """
 41
+    def wrapper(*args, **kwargs):
 42
+        global cache
 43
+        key = str((func, args, kwargs))
 44
+        try:
 45
+            return cache[key]
 46
+        except KeyError:
 47
+            res = func(*args, **kwargs)
 48
+            cache[key] = res
 49
+            return res
 50
+    return wrapper
 51
+
 52
+
 53
+def flush(key):
 54
+    """Flushes any entries from function cache where the
 55
+    key is found in the function+args """
 56
+    flush_list = []
 57
+    for item in cache:
 58
+        if key in item:
 59
+            flush_list.append(item)
 60
+    for item in flush_list:
 61
+        del cache[item]
 62
+
 63
+
 64
+def log(message, level=None):
 65
+    """Write a message to the juju log"""
 66
+    command = ['juju-log']
 67
+    if level:
 68
+        command += ['-l', level]
 69
+    command += [message]
 70
+    subprocess.call(command)
 71
+
 72
+
 73
+class Serializable(UserDict.IterableUserDict):
 74
+    """Wrapper, an object that can be serialized to yaml or json"""
 75
+
 76
+    def __init__(self, obj):
 77
+        # wrap the object
 78
+        UserDict.IterableUserDict.__init__(self)
 79
+        self.data = obj
 80
+
 81
+    def __getattr__(self, attr):
 82
+        # See if this object has attribute.
 83
+        if attr in ("json", "yaml", "data"):
 84
+            return self.__dict__[attr]
 85
+        # Check for attribute in wrapped object.
 86
+        got = getattr(self.data, attr, MARKER)
 87
+        if got is not MARKER:
 88
+            return got
 89
+        # Proxy to the wrapped object via dict interface.
 90
+        try:
 91
+            return self.data[attr]
 92
+        except KeyError:
 93
+            raise AttributeError(attr)
 94
+
 95
+    def __getstate__(self):
 96
+        # Pickle as a standard dictionary.
 97
+        return self.data
 98
+
 99
+    def __setstate__(self, state):
100
+        # Unpickle into our wrapper.
101
+        self.data = state
102
+
103
+    def json(self):
104
+        """Serialize the object to json"""
105
+        return json.dumps(self.data)
106
+
107
+    def yaml(self):
108
+        """Serialize the object to yaml"""
109
+        return yaml.dump(self.data)
110
+
111
+
112
+def execution_environment():
113
+    """A convenient bundling of the current execution context"""
114
+    context = {}
115
+    context['conf'] = config()
116
+    if relation_id():
117
+        context['reltype'] = relation_type()
118
+        context['relid'] = relation_id()
119
+        context['rel'] = relation_get()
120
+    context['unit'] = local_unit()
121
+    context['rels'] = relations()
122
+    context['env'] = os.environ
123
+    return context
124
+
125
+
126
+def in_relation_hook():
127
+    """Determine whether we're running in a relation hook"""
128
+    return 'JUJU_RELATION' in os.environ
129
+
130
+
131
+def relation_type():
132
+    """The scope for the current relation hook"""
133
+    return os.environ.get('JUJU_RELATION', None)
134
+
135
+
136
+def relation_id():
137
+    """The relation ID for the current relation hook"""
138
+    return os.environ.get('JUJU_RELATION_ID', None)
139
+
140
+
141
+def local_unit():
142
+    """Local unit ID"""
143
+    return os.environ['JUJU_UNIT_NAME']
144
+
145
+
146
+def remote_unit():
147
+    """The remote unit for the current relation hook"""
148
+    return os.environ['JUJU_REMOTE_UNIT']
149
+
150
+
151
+def service_name():
152
+    """The name service group this unit belongs to"""
153
+    return local_unit().split('/')[0]
154
+
155
+
156
+def hook_name():
157
+    """The name of the currently executing hook"""
158
+    return os.path.basename(sys.argv[0])
159
+
160
+
161
+class Config(dict):
162
+    """A dictionary representation of the charm's config.yaml, with some
163
+    extra features:
164
+
165
+    - See which values in the dictionary have changed since the previous hook.
166
+    - For values that have changed, see what the previous value was.
167
+    - Store arbitrary data for use in a later hook.
168
+
169
+    NOTE: Do not instantiate this object directly - instead call
170
+    ``hookenv.config()``, which will return an instance of :class:`Config`.
171
+
172
+    Example usage::
173
+
174
+        >>> # inside a hook
175
+        >>> from charmhelpers.core import hookenv
176
+        >>> config = hookenv.config()
177
+        >>> config['foo']
178
+        'bar'
179
+        >>> # store a new key/value for later use
180
+        >>> config['mykey'] = 'myval'
181
+
182
+
183
+        >>> # user runs `juju set mycharm foo=baz`
184
+        >>> # now we're inside subsequent config-changed hook
185
+        >>> config = hookenv.config()
186
+        >>> config['foo']
187
+        'baz'
188
+        >>> # test to see if this val has changed since last hook
189
+        >>> config.changed('foo')
190
+        True
191
+        >>> # what was the previous value?
192
+        >>> config.previous('foo')
193
+        'bar'
194
+        >>> # keys/values that we add are preserved across hooks
195
+        >>> config['mykey']
196
+        'myval'
197
+
198
+    """
199
+    CONFIG_FILE_NAME = '.juju-persistent-config'
200
+
201
+    def __init__(self, *args, **kw):
202
+        super(Config, self).__init__(*args, **kw)
203
+        self.implicit_save = True
204
+        self._prev_dict = None
205
+        self.path = os.path.join(charm_dir(), Config.CONFIG_FILE_NAME)
206
+        if os.path.exists(self.path):
207
+            self.load_previous()
208
+
209
+    def __getitem__(self, key):
210
+        """For regular dict lookups, check the current juju config first,
211
+        then the previous (saved) copy. This ensures that user-saved values
212
+        will be returned by a dict lookup.
213
+
214
+        """
215
+        try:
216
+            return dict.__getitem__(self, key)
217
+        except KeyError:
218
+            return (self._prev_dict or {})[key]
219
+
220
+    def keys(self):
221
+        prev_keys = []
222
+        if self._prev_dict is not None:
223
+            prev_keys = self._prev_dict.keys()
224
+        return list(set(prev_keys + dict.keys(self)))
225
+
226
+    def load_previous(self, path=None):
227
+        """Load previous copy of config from disk.
228
+
229
+        In normal usage you don't need to call this method directly - it
230
+        is called automatically at object initialization.
231
+
232
+        :param path:
233
+
234
+            File path from which to load the previous config. If `None`,
235
+            config is loaded from the default location. If `path` is
236
+            specified, subsequent `save()` calls will write to the same
237
+            path.
238
+
239
+        """
240
+        self.path = path or self.path
241
+        with open(self.path) as f:
242
+            self._prev_dict = json.load(f)
243
+
244
+    def changed(self, key):
245
+        """Return True if the current value for this key is different from
246
+        the previous value.
247
+
248
+        """
249
+        if self._prev_dict is None:
250
+            return True
251
+        return self.previous(key) != self.get(key)
252
+
253
+    def previous(self, key):
254
+        """Return previous value for this key, or None if there
255
+        is no previous value.
256
+
257
+        """
258
+        if self._prev_dict:
259
+            return self._prev_dict.get(key)
260
+        return None
261
+
262
+    def save(self):
263
+        """Save this config to disk.
264
+
265
+        If the charm is using the :mod:`Services Framework <services.base>`
266
+        or :meth:'@hook <Hooks.hook>' decorator, this
267
+        is called automatically at the end of successful hook execution.
268
+        Otherwise, it should be called directly by user code.
269
+
270
+        To disable automatic saves, set ``implicit_save=False`` on this
271
+        instance.
272
+
273
+        """
274
+        if self._prev_dict:
275
+            for k, v in self._prev_dict.iteritems():
276
+                if k not in self:
277
+                    self[k] = v
278
+        with open(self.path, 'w') as f:
279
+            json.dump(self, f)
280
+
281
+
282
+@cached
283
+def config(scope=None):
284
+    """Juju charm configuration"""
285
+    config_cmd_line = ['config-get']
286
+    if scope is not None:
287
+        config_cmd_line.append(scope)
288
+    config_cmd_line.append('--format=json')
289
+    try:
290
+        config_data = json.loads(subprocess.check_output(config_cmd_line))
291
+        if scope is not None:
292
+            return config_data
293
+        return Config(config_data)
294
+    except ValueError:
295
+        return None
296
+
297
+
298
+@cached
299
+def relation_get(attribute=None, unit=None, rid=None):
300
+    """Get relation information"""
301
+    _args = ['relation-get', '--format=json']
302
+    if rid:
303
+        _args.append('-r')
304
+        _args.append(rid)
305
+    _args.append(attribute or '-')
306
+    if unit:
307
+        _args.append(unit)
308
+    try:
309
+        return json.loads(subprocess.check_output(_args))
310
+    except ValueError:
311
+        return None
312
+    except CalledProcessError, e:
313
+        if e.returncode == 2:
314
+            return None
315
+        raise
316
+
317
+
318
+def relation_set(relation_id=None, relation_settings=None, **kwargs):
319
+    """Set relation information for the current unit"""
320
+    relation_settings = relation_settings if relation_settings else {}
321
+    relation_cmd_line = ['relation-set']
322
+    if relation_id is not None:
323
+        relation_cmd_line.extend(('-r', relation_id))
324
+    for k, v in (relation_settings.items() + kwargs.items()):
325
+        if v is None:
326
+            relation_cmd_line.append('{}='.format(k))
327
+        else:
328
+            relation_cmd_line.append('{}={}'.format(k, v))
329
+    subprocess.check_call(relation_cmd_line)
330
+    # Flush cache of any relation-gets for local unit
331
+    flush(local_unit())
332
+
333
+
334
+@cached
335
+def relation_ids(reltype=None):
336
+    """A list of relation_ids"""
337
+    reltype = reltype or relation_type()
338
+    relid_cmd_line = ['relation-ids', '--format=json']
339
+    if reltype is not None:
340
+        relid_cmd_line.append(reltype)
341
+        return json.loads(subprocess.check_output(relid_cmd_line)) or []
342
+    return []
343
+
344
+
345
+@cached
346
+def related_units(relid=None):
347
+    """A list of related units"""
348
+    relid = relid or relation_id()
349
+    units_cmd_line = ['relation-list', '--format=json']
350
+    if relid is not None:
351
+        units_cmd_line.extend(('-r', relid))
352
+    return json.loads(subprocess.check_output(units_cmd_line)) or []
353
+
354
+
355
+@cached
356
+def relation_for_unit(unit=None, rid=None):
357
+    """Get the json represenation of a unit's relation"""
358
+    unit = unit or remote_unit()
359
+    relation = relation_get(unit=unit, rid=rid)
360
+    for key in relation:
361
+        if key.endswith('-list'):
362
+            relation[key] = relation[key].split()
363
+    relation['__unit__'] = unit
364
+    return relation
365
+
366
+
367
+@cached
368
+def relations_for_id(relid=None):
369
+    """Get relations of a specific relation ID"""
370
+    relation_data = []
371
+    relid = relid or relation_ids()
372
+    for unit in related_units(relid):
373
+        unit_data = relation_for_unit(unit, relid)
374
+        unit_data['__relid__'] = relid
375
+        relation_data.append(unit_data)
376
+    return relation_data
377
+
378
+
379
+@cached
380
+def relations_of_type(reltype=None):
381
+    """Get relations of a specific type"""
382
+    relation_data = []
383
+    reltype = reltype or relation_type()
384
+    for relid in relation_ids(reltype):
385
+        for relation in relations_for_id(relid):
386
+            relation['__relid__'] = relid
387
+            relation_data.append(relation)
388
+    return relation_data
389
+
390
+
391
+@cached
392
+def relation_types():
393
+    """Get a list of relation types supported by this charm"""
394
+    charmdir = os.environ.get('CHARM_DIR', '')
395
+    mdf = open(os.path.join(charmdir, 'metadata.yaml'))
396
+    md = yaml.safe_load(mdf)
397
+    rel_types = []
398
+    for key in ('provides', 'requires', 'peers'):
399
+        section = md.get(key)
400
+        if section:
401
+            rel_types.extend(section.keys())
402
+    mdf.close()
403
+    return rel_types
404
+
405
+
406
+@cached
407
+def relations():
408
+    """Get a nested dictionary of relation data for all related units"""
409
+    rels = {}
410
+    for reltype in relation_types():
411
+        relids = {}
412
+        for relid in relation_ids(reltype):
413
+            units = {local_unit(): relation_get(unit=local_unit(), rid=relid)}
414
+            for unit in related_units(relid):
415
+                reldata = relation_get(unit=unit, rid=relid)
416
+                units[unit] = reldata
417
+            relids[relid] = units
418
+        rels[reltype] = relids
419
+    return rels
420
+
421
+
422
+@cached
423
+def is_relation_made(relation, keys='private-address'):
424
+    '''
425
+    Determine whether a relation is established by checking for
426
+    presence of key(s).  If a list of keys is provided, they
427
+    must all be present for the relation to be identified as made
428
+    '''
429
+    if isinstance(keys, str):
430
+        keys = [keys]
431
+    for r_id in relation_ids(relation):
432
+        for unit in related_units(r_id):
433
+            context = {}
434
+            for k in keys:
435
+                context[k] = relation_get(k, rid=r_id,
436
+                                          unit=unit)
437
+            if None not in context.values():
438
+                return True
439
+    return False
440
+
441
+
442
+def open_port(port, protocol="TCP"):
443
+    """Open a service network port"""
444
+    _args = ['open-port']
445
+    _args.append('{}/{}'.format(port, protocol))
446
+    subprocess.check_call(_args)
447
+
448
+
449
+def close_port(port, protocol="TCP"):
450
+    """Close a service network port"""
451
+    _args = ['close-port']
452
+    _args.append('{}/{}'.format(port, protocol))
453
+    subprocess.check_call(_args)
454
+
455
+
456
+@cached
457
+def unit_get(attribute):
458
+    """Get the unit ID for the remote unit"""
459
+    _args = ['unit-get', '--format=json', attribute]
460
+    try:
461
+        return json.loads(subprocess.check_output(_args))
462
+    except ValueError:
463
+        return None
464
+
465
+
466
+def unit_private_ip():
467
+    """Get this unit's private IP address"""
468
+    return unit_get('private-address')
469
+
470
+
471
+class UnregisteredHookError(Exception):
472
+    """Raised when an undefined hook is called"""
473
+    pass
474
+
475
+
476
+class Hooks(object):
477
+    """A convenient handler for hook functions.
478
+
479
+    Example::
480
+
481
+        hooks = Hooks()
482
+
483
+        # register a hook, taking its name from the function name
484
+        @hooks.hook()
485
+        def install():
486
+            pass  # your code here
487
+
488
+        # register a hook, providing a custom hook name
489
+        @hooks.hook("config-changed")
490
+        def config_changed():
491
+            pass  # your code here
492
+
493
+        if __name__ == "__main__":
494
+            # execute a hook based on the name the program is called by
495
+            hooks.execute(sys.argv)
496
+    """
497
+
498
+    def __init__(self, config_save=True):
499
+        super(Hooks, self).__init__()
500
+        self._hooks = {}
501
+        self._config_save = config_save
502
+
503
+    def register(self, name, function):
504
+        """Register a hook"""
505
+        self._hooks[name] = function
506
+
507
+    def execute(self, args):
508
+        """Execute a registered hook based on args[0]"""
509
+        hook_name = os.path.basename(args[0])
510
+        if hook_name in self._hooks:
511
+            self._hooks[hook_name]()
512
+            if self._config_save:
513
+                cfg = config()
514
+                if cfg.implicit_save:
515
+                    cfg.save()
516
+        else:
517
+            raise UnregisteredHookError(hook_name)
518
+
519
+    def hook(self, *hook_names):
520
+        """Decorator, registering them as hooks"""
521
+        def wrapper(decorated):
522
+            for hook_name in hook_names:
523
+                self.register(hook_name, decorated)
524
+            else:
525
+                self.register(decorated.__name__, decorated)
526
+                if '_' in decorated.__name__:
527
+                    self.register(
528
+                        decorated.__name__.replace('_', '-'), decorated)
529
+            return decorated
530
+        return wrapper
531
+
532
+
533
+def charm_dir():
534
+    """Return the root directory of the current charm"""
535
+    return os.environ.get('CHARM_DIR')
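For reviewers skimming the vendored hookenv helpers above, here is a minimal sketch of how a charm hook script would typically wire them together. The 'database' relation name, keys, and port are hypothetical, not part of this charm's diff.

    #!/usr/bin/env python
    # Hypothetical hook script using the hookenv helpers above.
    import sys

    from charmhelpers.core import hookenv

    hooks = hookenv.Hooks()


    @hooks.hook('database-relation-changed')
    def database_relation_changed():
        # Wait until the remote unit has published both keys on the relation.
        if not hookenv.is_relation_made('database', keys=['host', 'password']):
            hookenv.log('database relation not complete yet, deferring')
            return
        hookenv.open_port(9200)


    if __name__ == '__main__':
        hooks.execute(sys.argv)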

hooks/charmhelpers/core/host.py

  1
--- 
  2
+++ hooks/charmhelpers/core/host.py
  3
@@ -0,0 +1,391 @@
  4
+"""Tools for working with the host system"""
  5
+# Copyright 2012 Canonical Ltd.
  6
+#
  7
+# Authors:
  8
+#  Nick Moffitt <nick.moffitt@canonical.com>
  9
+#  Matthew Wedgwood <matthew.wedgwood@canonical.com>
 10
+
 11
+import os
 12
+import re
 13
+import pwd
 14
+import grp
 15
+import random
 16
+import string
 17
+import subprocess
 18
+import hashlib
 19
+from contextlib import contextmanager
 20
+
 21
+from collections import OrderedDict
 22
+
 23
+from hookenv import log
 24
+from fstab import Fstab
 25
+
 26
+
 27
+def service_start(service_name):
 28
+    """Start a system service"""
 29
+    return service('start', service_name)
 30
+
 31
+
 32
+def service_stop(service_name):
 33
+    """Stop a system service"""
 34
+    return service('stop', service_name)
 35
+
 36
+
 37
+def service_restart(service_name):
 38
+    """Restart a system service"""
 39
+    return service('restart', service_name)
 40
+
 41
+
 42
+def service_reload(service_name, restart_on_failure=False):
 43
+    """Reload a system service, optionally falling back to restart if
 44
+    reload fails"""
 45
+    service_result = service('reload', service_name)
 46
+    if not service_result and restart_on_failure:
 47
+        service_result = service('restart', service_name)
 48
+    return service_result
 49
+
 50
+
 51
+def service(action, service_name):
 52
+    """Control a system service"""
 53
+    cmd = ['service', service_name, action]
 54
+    return subprocess.call(cmd) == 0
 55
+
 56
+
 57
+def service_running(service):
 58
+    """Determine whether a system service is running"""
 59
+    try:
 60
+        output = subprocess.check_output(['service', service, 'status'], stderr=subprocess.STDOUT)
 61
+    except subprocess.CalledProcessError:
 62
+        return False
 63
+    else:
 64
+        if ("start/running" in output or "is running" in output):
 65
+            return True
 66
+        else:
 67
+            return False
 68
+
 69
+
 70
+def service_available(service_name):
 71
+    """Determine whether a system service is available"""
 72
+    try:
 73
+        subprocess.check_output(['service', service_name, 'status'], stderr=subprocess.STDOUT)
 74
+    except subprocess.CalledProcessError as e:
 75
+        return 'unrecognized service' not in e.output
 76
+    else:
 77
+        return True
 78
+
 79
+
 80
+def adduser(username, password=None, shell='/bin/bash', system_user=False):
 81
+    """Add a user to the system"""
 82
+    try:
 83
+        user_info = pwd.getpwnam(username)
 84
+        log('user {0} already exists!'.format(username))
 85
+    except KeyError:
 86
+        log('creating user {0}'.format(username))
 87
+        cmd = ['useradd']
 88
+        if system_user or password is None:
 89
+            cmd.append('--system')
 90
+        else:
 91
+            cmd.extend([
 92
+                '--create-home',
 93
+                '--shell', shell,
 94
+                '--password', password,
 95
+            ])
 96
+        cmd.append(username)
 97
+        subprocess.check_call(cmd)
 98
+        user_info = pwd.getpwnam(username)
 99
+    return user_info
100
+
101
+
102
+def add_user_to_group(username, group):
103
+    """Add a user to a group"""
104
+    cmd = [
105
+        'gpasswd', '-a',
106
+        username,
107
+        group
108
+    ]
109
+    log("Adding user {} to group {}".format(username, group))
110
+    subprocess.check_call(cmd)
111
+
112
+
113
+def rsync(from_path, to_path, flags='-r', options=None):
114
+    """Replicate the contents of a path"""
115
+    options = options or ['--delete', '--executability']
116
+    cmd = ['/usr/bin/rsync', flags]
117
+    cmd.extend(options)
118
+    cmd.append(from_path)
119
+    cmd.append(to_path)
120
+    log(" ".join(cmd))
121
+    return subprocess.check_output(cmd).strip()
122
+
123
+
124
+def symlink(source, destination):
125
+    """Create a symbolic link"""
126
+    log("Symlinking {} as {}".format(source, destination))
127
+    cmd = [
128
+        'ln',
129
+        '-sf',
130
+        source,
131
+        destination,
132
+    ]
133
+    subprocess.check_call(cmd)
134
+
135
+
136
+def mkdir(path, owner='root', group='root', perms=0555, force=False):
137
+    """Create a directory"""
138
+    log("Making dir {} {}:{} {:o}".format(path, owner, group,
139
+                                          perms))
140
+    uid = pwd.getpwnam(owner).pw_uid
141
+    gid = grp.getgrnam(group).gr_gid
142
+    realpath = os.path.abspath(path)
143
+    if os.path.exists(realpath):
144
+        if force and not os.path.isdir(realpath):
145
+            log("Removing non-directory file {} prior to mkdir()".format(path))
146
+            os.unlink(realpath)
147
+    else:
148
+        os.makedirs(realpath, perms)
149
+    os.chown(realpath, uid, gid)
150
+
151
+
152
+def write_file(path, content, owner='root', group='root', perms=0444):
153
+    """Create or overwrite a file with the contents of a string"""
154
+    log("Writing file {} {}:{} {:o}".format(path, owner, group, perms))
155
+    uid = pwd.getpwnam(owner).pw_uid
156
+    gid = grp.getgrnam(group).gr_gid
157
+    with open(path, 'w') as target:
158
+        os.fchown(target.fileno(), uid, gid)
159
+        os.fchmod(target.fileno(), perms)
160
+        target.write(content)
161
+
162
+
163
+def fstab_remove(mp):
164
+    """Remove the given mountpoint entry from /etc/fstab
165
+    """
166
+    return Fstab.remove_by_mountpoint(mp)
167
+
168
+
169
+def fstab_add(dev, mp, fs, options=None):
170
+    """Adds the given device entry to the /etc/fstab file
171
+    """
172
+    return Fstab.add(dev, mp, fs, options=options)
173
+
174
+
175
+def mount(device, mountpoint, options=None, persist=False, filesystem="ext3"):
176
+    """Mount a filesystem at a particular mountpoint"""
177
+    cmd_args = ['mount']
178
+    if options is not None:
179
+        cmd_args.extend(['-o', options])
180
+    cmd_args.extend([device, mountpoint])
181
+    try:
182
+        subprocess.check_output(cmd_args)
183
+    except subprocess.CalledProcessError, e:
184
+        log('Error mounting {} at {}\n{}'.format(device, mountpoint, e.output))
185
+        return False
186
+
187
+    if persist:
188
+        return fstab_add(device, mountpoint, filesystem, options=options)
189
+    return True
190
+
191
+
192
+def umount(mountpoint, persist=False):
193
+    """Unmount a filesystem"""
194
+    cmd_args = ['umount', mountpoint]
195
+    try:
196
+        subprocess.check_output(cmd_args)
197
+    except subprocess.CalledProcessError, e:
198
+        log('Error unmounting {}\n{}'.format(mountpoint, e.output))
199
+        return False
200
+
201
+    if persist:
202
+        return fstab_remove(mountpoint)
203
+    return True
204
+
205
+
206
+def mounts():
207
+    """Get a list of all mounted volumes as [[mountpoint,device],[...]]"""
208
+    with open('/proc/mounts') as f:
209
+        # [['/mount/point','/dev/path'],[...]]
210
+        system_mounts = [m[1::-1] for m in [l.strip().split()
211
+                                            for l in f.readlines()]]
212
+    return system_mounts
213
+
214
+
215
+def file_hash(path, hash_type='md5'):
216
+    """
217
+    Generate a hash checksum of the contents of 'path' or None if not found.
218
+
219
+    :param str hash_type: Any hash algorithm supported by :mod:`hashlib`,
220
+                          such as md5, sha1, sha256, sha512, etc.
221
+    """
222
+    if os.path.exists(path):
223
+        h = getattr(hashlib, hash_type)()
224
+        with open(path, 'r') as source:
225
+            h.update(source.read())  # IGNORE:E1101 - it does have update
226
+        return h.hexdigest()
227
+    else:
228
+        return None
229
+
230
+
231
+def check_hash(path, checksum, hash_type='md5'):
232
+    """
233
+    Validate a file using a cryptographic checksum.
234
+
235
+    :param str checksum: Value of the checksum used to validate the file.
236
+    :param str hash_type: Hash algorithm used to generate `checksum`.
237
+        Can be any hash algorithm supported by :mod:`hashlib`,
238
+        such as md5, sha1, sha256, sha512, etc.
239
+    :raises ChecksumError: If the file fails the checksum
240
+
241
+    """
242
+    actual_checksum = file_hash(path, hash_type)
243
+    if checksum != actual_checksum:
244
+        raise ChecksumError("'%s' != '%s'" % (checksum, actual_checksum))
245
+
246
+
247
+class ChecksumError(ValueError):
248
+    pass
249
+
250
+
251
+def restart_on_change(restart_map, stopstart=False):
252
+    """Restart services based on configuration files changing
253
+
254
+    This function is used as a decorator, for example::
255
+
256
+        @restart_on_change({
257
+            '/etc/ceph/ceph.conf': [ 'cinder-api', 'cinder-volume' ]
258
+            })
259
+        def ceph_client_changed():
260
+            pass  # your code here
261
+
262
+    In this example, the cinder-api and cinder-volume services
263
+    would be restarted if /etc/ceph/ceph.conf is changed by the
264
+    ceph_client_changed function.
265
+    """
266
+    def wrap(f):
267
+        def wrapped_f(*args):
268
+            checksums = {}
269
+            for path in restart_map:
270
+                checksums[path] = file_hash(path)
271
+            f(*args)
272
+            restarts = []
273
+            for path in restart_map:
274
+                if checksums[path] != file_hash(path):
275
+                    restarts += restart_map[path]
276
+            services_list = list(OrderedDict.fromkeys(restarts))
277
+            if not stopstart:
278
+                for service_name in services_list:
279
+                    service('restart', service_name)
280
+            else:
281
+                for action in ['stop', 'start']:
282
+                    for service_name in services_list:
283
+                        service(action, service_name)
284
+        return wrapped_f
285
+    return wrap
286
+
287
+
288
+def lsb_release():
289
+    """Return /etc/lsb-release in a dict"""
290
+    d = {}
291
+    with open('/etc/lsb-release', 'r') as lsb:
292
+        for l in lsb:
293
+            k, v = l.split('=')
294
+            d[k.strip()] = v.strip()
295
+    return d
296
+
297
+
298
+def pwgen(length=None):
299
+    """Generate a random pasword."""
300
+    if length is None:
301
+        length = random.choice(range(35, 45))
302
+    alphanumeric_chars = [
303
+        l for l in (string.letters + string.digits)
304
+        if l not in 'l0QD1vAEIOUaeiou']
305
+    random_chars = [
306
+        random.choice(alphanumeric_chars) for _ in range(length)]
307
+    return(''.join(random_chars))
308
+
309
+
310
+def list_nics(nic_type):
311
+    '''Return a list of nics of given type(s)'''
312
+    if isinstance(nic_type, basestring):
313
+        int_types = [nic_type]
314
+    else:
315
+        int_types = nic_type
316
+    interfaces = []
317
+    for int_type in int_types:
318
+        cmd = ['ip', 'addr', 'show', 'label', int_type + '*']
319
+        ip_output = subprocess.check_output(cmd).split('\n')
320
+        ip_output = (line for line in ip_output if line)
321
+        for line in ip_output:
322
+            if line.split()[1].startswith(int_type):
323
+                matched = re.search('.*: (bond[0-9]+\.[0-9]+)@.*', line)
324
+                if matched:
325
+                    interface = matched.groups()[0]
326
+                else:
327
+                    interface = line.split()[1].replace(":", "")
328
+                interfaces.append(interface)
329
+
330
+    return interfaces
331
+
332
+
333
+def set_nic_mtu(nic, mtu):
334
+    '''Set MTU on a network interface'''
335
+    cmd = ['ip', 'link', 'set', nic, 'mtu', mtu]
336
+    subprocess.check_call(cmd)
337
+
338
+
339
+def get_nic_mtu(nic):
340
+    cmd = ['ip', 'addr', 'show', nic]
341
+    ip_output = subprocess.check_output(cmd).split('\n')
342
+    mtu = ""
343
+    for line in ip_output:
344
+        words = line.split()
345
+        if 'mtu' in words:
346
+            mtu = words[words.index("mtu") + 1]
347
+    return mtu
348
+
349
+
350
+def get_nic_hwaddr(nic):
351
+    cmd = ['ip', '-o', '-0', 'addr', 'show', nic]
352
+    ip_output = subprocess.check_output(cmd)
353
+    hwaddr = ""
354
+    words = ip_output.split()
355
+    if 'link/ether' in words:
356
+        hwaddr = words[words.index('link/ether') + 1]
357
+    return hwaddr
358
+
359
+
360
+def cmp_pkgrevno(package, revno, pkgcache=None):
361
+    '''Compare supplied revno with the revno of the installed package
362
+
363
+    *  1 => Installed revno is greater than supplied arg
364
+    *  0 => Installed revno is the same as supplied arg
365
+    * -1 => Installed revno is less than supplied arg
366
+
367
+    '''
368
+    import apt_pkg
369
+    from charmhelpers.fetch import apt_cache
370
+    if not pkgcache:
371
+        pkgcache = apt_cache()
372
+    pkg = pkgcache[package]
373
+    return apt_pkg.version_compare(pkg.current_ver.ver_str, revno)
374
+
375
+
376
+@contextmanager
377
+def chdir(d):
378
+    cur = os.getcwd()
379
+    try:
380
+        yield os.chdir(d)
381
+    finally:
382
+        os.chdir(cur)
383
+
384
+
385
+def chownr(path, owner, group):
386
+    uid = pwd.getpwnam(owner).pw_uid
387
+    gid = grp.getgrnam(group).gr_gid
388
+
389
+    for root, dirs, files in os.walk(path):
390
+        for name in dirs + files:
391
+            full = os.path.join(root, name)
392
+            broken_symlink = os.path.lexists(full) and not os.path.exists(full)
393
+            if not broken_symlink:
394
+                os.chown(full, uid, gid)
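A short sketch of how the host helpers above are usually combined in a config handler. The elasticsearch paths, group, and service name are illustrative assumptions only, not taken from this charm's diff.

    from charmhelpers.core import host


    @host.restart_on_change({'/etc/elasticsearch/elasticsearch.yml': ['elasticsearch']})
    def write_config(rendered_yaml):
        # mkdir() and write_file() are safe to call on every hook run;
        # restart_on_change() only restarts the service if the file hash changes.
        host.mkdir('/etc/elasticsearch', owner='root', group='root', perms=0o755)
        host.write_file('/etc/elasticsearch/elasticsearch.yml', rendered_yaml,
                        owner='root', group='elasticsearch', perms=0o644)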

hooks/charmhelpers/core/services/__init__.py

1
--- 
2
+++ hooks/charmhelpers/core/services/__init__.py
3
@@ -0,0 +1,2 @@
4
+from .base import *  # NOQA
5
+from .helpers import *  # NOQA
Back to file index

hooks/charmhelpers/core/services/base.py

  1
--- 
  2
+++ hooks/charmhelpers/core/services/base.py
  3
@@ -0,0 +1,313 @@
  4
+import os
  5
+import re
  6
+import json
  7
+from collections import Iterable
  8
+
  9
+from charmhelpers.core import host
 10
+from charmhelpers.core import hookenv
 11
+
 12
+
 13
+__all__ = ['ServiceManager', 'ManagerCallback',
 14
+           'PortManagerCallback', 'open_ports', 'close_ports', 'manage_ports',
 15
+           'service_restart', 'service_stop']
 16
+
 17
+
 18
+class ServiceManager(object):
 19
+    def __init__(self, services=None):
 20
+        """
 21
+        Register a list of services, given their definitions.
 22
+
 23
+        Service definitions are dicts in the following formats (all keys except
 24
+        'service' are optional)::
 25
+
 26
+            {
 27
+                "service": <service name>,
 28
+                "required_data": <list of required data contexts>,
 29
+                "provided_data": <list of provided data contexts>,
 30
+                "data_ready": <one or more callbacks>,
 31
+                "data_lost": <one or more callbacks>,
 32
+                "start": <one or more callbacks>,
 33
+                "stop": <one or more callbacks>,
 34
+                "ports": <list of ports to manage>,
 35
+            }
 36
+
 37
+        The 'required_data' list should contain dicts of required data (or
 38
+        dependency managers that act like dicts and know how to collect the data).
 39
+        Only when all items in the 'required_data' list are populated are the
40
+        'data_ready' and 'start' callbacks executed.  See `is_ready()` for more
 41
+        information.
 42
+
 43
+        The 'provided_data' list should contain relation data providers, most likely
 44
+        a subclass of :class:`charmhelpers.core.services.helpers.RelationContext`,
 45
+        that will indicate a set of data to set on a given relation.
 46
+
 47
+        The 'data_ready' value should be either a single callback, or a list of
 48
+        callbacks, to be called when all items in 'required_data' pass `is_ready()`.
 49
+        Each callback will be called with the service name as the only parameter.
 50
+        After all of the 'data_ready' callbacks are called, the 'start' callbacks
 51
+        are fired.
 52
+
 53
+        The 'data_lost' value should be either a single callback, or a list of
 54
+        callbacks, to be called when a 'required_data' item no longer passes
 55
+        `is_ready()`.  Each callback will be called with the service name as the
 56
+        only parameter.  After all of the 'data_lost' callbacks are called,
 57
+        the 'stop' callbacks are fired.
 58
+
 59
+        The 'start' value should be either a single callback, or a list of
 60
+        callbacks, to be called when starting the service, after the 'data_ready'
 61
+        callbacks are complete.  Each callback will be called with the service
 62
+        name as the only parameter.  This defaults to
 63
+        `[host.service_start, services.open_ports]`.
 64
+
 65
+        The 'stop' value should be either a single callback, or a list of
 66
+        callbacks, to be called when stopping the service.  If the service is
 67
+        being stopped because it no longer has all of its 'required_data', this
 68
+        will be called after all of the 'data_lost' callbacks are complete.
 69
+        Each callback will be called with the service name as the only parameter.
 70
+        This defaults to `[services.close_ports, host.service_stop]`.
 71
+
 72
+        The 'ports' value should be a list of ports to manage.  The default
 73
+        'start' handler will open the ports after the service is started,
 74
+        and the default 'stop' handler will close the ports prior to stopping
 75
+        the service.
 76
+
 77
+
 78
+        Examples:
 79
+
 80
+        The following registers an Upstart service called bingod that depends on
 81
+        a mongodb relation and which runs a custom `db_migrate` function prior to
 82
+        restarting the service, and a Runit service called spadesd::
 83
+
 84
+            manager = services.ServiceManager([
 85
+                {
 86
+                    'service': 'bingod',
 87
+                    'ports': [80, 443],
 88
+                    'required_data': [MongoRelation(), config(), {'my': 'data'}],
 89
+                    'data_ready': [
 90
+                        services.template(source='bingod.conf'),
 91
+                        services.template(source='bingod.ini',
 92
+                                          target='/etc/bingod.ini',
 93
+                                          owner='bingo', perms=0400),
 94
+                    ],
 95
+                },
 96
+                {
 97
+                    'service': 'spadesd',
 98
+                    'data_ready': services.template(source='spadesd_run.j2',
 99
+                                                    target='/etc/sv/spadesd/run',
100
+                                                    perms=0555),
101
+                    'start': runit_start,
102
+                    'stop': runit_stop,
103
+                },
104
+            ])
105
+            manager.manage()
106
+        """
107
+        self._ready_file = os.path.join(hookenv.charm_dir(), 'READY-SERVICES.json')
108
+        self._ready = None
109
+        self.services = {}
110
+        for service in services or []:
111
+            service_name = service['service']
112
+            self.services[service_name] = service
113
+
114
+    def manage(self):
115
+        """
116
+        Handle the current hook by doing The Right Thing with the registered services.
117
+        """
118
+        hook_name = hookenv.hook_name()
119
+        if hook_name == 'stop':
120
+            self.stop_services()
121
+        else:
122
+            self.provide_data()
123
+            self.reconfigure_services()
124
+        cfg = hookenv.config()
125
+        if cfg.implicit_save:
126
+            cfg.save()
127
+
128
+    def provide_data(self):
129
+        """
130
+        Set the relation data for each provider in the ``provided_data`` list.
131
+
132
+        A provider must have a `name` attribute, which indicates which relation
133
+        to set data on, and a `provide_data()` method, which returns a dict of
134
+        data to set.
135
+        """
136
+        hook_name = hookenv.hook_name()
137
+        for service in self.services.values():
138
+            for provider in service.get('provided_data', []):
139
+                if re.match(r'{}-relation-(joined|changed)'.format(provider.name), hook_name):
140
+                    data = provider.provide_data()
141
+                    _ready = provider._is_ready(data) if hasattr(provider, '_is_ready') else data
142
+                    if _ready:
143
+                        hookenv.relation_set(None, data)
144
+
145
+    def reconfigure_services(self, *service_names):
146
+        """
147
+        Update all files for one or more registered services, and,
148
+        if ready, optionally restart them.
149
+
150
+        If no service names are given, reconfigures all registered services.
151
+        """
152
+        for service_name in service_names or self.services.keys():
153
+            if self.is_ready(service_name):
154
+                self.fire_event('data_ready', service_name)
155
+                self.fire_event('start', service_name, default=[
156
+                    service_restart,
157
+                    manage_ports])
158
+                self.save_ready(service_name)
159
+            else:
160
+                if self.was_ready(service_name):
161
+                    self.fire_event('data_lost', service_name)
162
+                self.fire_event('stop', service_name, default=[
163
+                    manage_ports,
164
+                    service_stop])
165
+                self.save_lost(service_name)
166
+
167
+    def stop_services(self, *service_names):
168
+        """
169
+        Stop one or more registered services, by name.
170
+
171
+        If no service names are given, stops all registered services.
172
+        """
173
+        for service_name in service_names or self.services.keys():
174
+            self.fire_event('stop', service_name, default=[
175
+                manage_ports,
176
+                service_stop])
177
+
178
+    def get_service(self, service_name):
179
+        """
180
+        Given the name of a registered service, return its service definition.
181
+        """
182
+        service = self.services.get(service_name)
183
+        if not service:
184
+            raise KeyError('Service not registered: %s' % service_name)
185
+        return service
186
+
187
+    def fire_event(self, event_name, service_name, default=None):
188
+        """
189
+        Fire a data_ready, data_lost, start, or stop event on a given service.
190
+        """
191
+        service = self.get_service(service_name)
192
+        callbacks = service.get(event_name, default)
193
+        if not callbacks:
194
+            return
195
+        if not isinstance(callbacks, Iterable):
196
+            callbacks = [callbacks]
197
+        for callback in callbacks:
198
+            if isinstance(callback, ManagerCallback):
199
+                callback(self, service_name, event_name)
200
+            else:
201
+                callback(service_name)
202
+
203
+    def is_ready(self, service_name):
204
+        """
205
+        Determine if a registered service is ready, by checking its 'required_data'.
206
+
207
+        A 'required_data' item can be any mapping type, and is considered ready
208
+        if `bool(item)` evaluates as True.
209
+        """
210
+        service = self.get_service(service_name)
211
+        reqs = service.get('required_data', [])
212
+        return all(bool(req) for req in reqs)
213
+
214
+    def _load_ready_file(self):
215
+        if self._ready is not None:
216
+            return
217
+        if os.path.exists(self._ready_file):
218
+            with open(self._ready_file) as fp:
219
+                self._ready = set(json.load(fp))
220
+        else:
221
+            self._ready = set()
222
+
223
+    def _save_ready_file(self):
224
+        if self._ready is None:
225
+            return
226
+        with open(self._ready_file, 'w') as fp:
227
+            json.dump(list(self._ready), fp)
228
+
229
+    def save_ready(self, service_name):
230
+        """
231
+        Save an indicator that the given service is now data_ready.
232
+        """
233
+        self._load_ready_file()
234
+        self._ready.add(service_name)
235
+        self._save_ready_file()
236
+
237
+    def save_lost(self, service_name):
238
+        """
239
+        Save an indicator that the given service is no longer data_ready.
240
+        """
241
+        self._load_ready_file()
242
+        self._ready.discard(service_name)
243
+        self._save_ready_file()
244
+
245
+    def was_ready(self, service_name):
246
+        """
247
+        Determine if the given service was previously data_ready.
248
+        """
249
+        self._load_ready_file()
250
+        return service_name in self._ready
251
+
252
+
253
+class ManagerCallback(object):
254
+    """
255
+    Special case of a callback that takes the `ServiceManager` instance
256
+    in addition to the service name.
257
+
258
+    Subclasses should implement `__call__` which should accept three parameters:
259
+
260
+        * `manager`       The `ServiceManager` instance
261
+        * `service_name`  The name of the service it's being triggered for
262
+        * `event_name`    The name of the event that this callback is handling
263
+    """
264
+    def __call__(self, manager, service_name, event_name):
265
+        raise NotImplementedError()
266
+
267
+
268
+class PortManagerCallback(ManagerCallback):
269
+    """
270
+    Callback class that will open or close ports, for use as either
271
+    a start or stop action.
272
+    """
273
+    def __call__(self, manager, service_name, event_name):
274
+        service = manager.get_service(service_name)
275
+        new_ports = service.get('ports', [])
276
+        port_file = os.path.join(hookenv.charm_dir(), '.{}.ports'.format(service_name))
277
+        if os.path.exists(port_file):
278
+            with open(port_file) as fp:
279
+                old_ports = fp.read().split(',')
280
+            for old_port in old_ports:
281
+                if bool(old_port):
282
+                    old_port = int(old_port)
283
+                    if old_port not in new_ports:
284
+                        hookenv.close_port(old_port)
285
+        with open(port_file, 'w') as fp:
286
+            fp.write(','.join(str(port) for port in new_ports))
287
+        for port in new_ports:
288
+            if event_name == 'start':
289
+                hookenv.open_port(port)
290
+            elif event_name == 'stop':
291
+                hookenv.close_port(port)
292
+
293
+
294
+def service_stop(service_name):
295
+    """
296
+    Wrapper around host.service_stop to prevent spurious "unknown service"
297
+    messages in the logs.
298
+    """
299
+    if host.service_running(service_name):
300
+        host.service_stop(service_name)
301
+
302
+
303
+def service_restart(service_name):
304
+    """
305
+    Wrapper around host.service_restart to prevent spurious "unknown service"
306
+    messages in the logs.
307
+    """
308
+    if host.service_available(service_name):
309
+        if host.service_running(service_name):
310
+            host.service_restart(service_name)
311
+        else:
312
+            host.service_start(service_name)
313
+
314
+
315
+# Convenience aliases
316
+open_ports = close_ports = manage_ports = PortManagerCallback()

hooks/charmhelpers/core/services/helpers.py

  1
--- 
  2
+++ hooks/charmhelpers/core/services/helpers.py
  3
@@ -0,0 +1,239 @@
  4
+import os
  5
+import yaml
  6
+from charmhelpers.core import hookenv
  7
+from charmhelpers.core import templating
  8
+
  9
+from charmhelpers.core.services.base import ManagerCallback
 10
+
 11
+
 12
+__all__ = ['RelationContext', 'TemplateCallback',
 13
+           'render_template', 'template']
 14
+
 15
+
 16
+class RelationContext(dict):
 17
+    """
 18
+    Base class for a context generator that gets relation data from juju.
 19
+
 20
+    Subclasses must provide the attributes `name`, which is the name of the
 21
+    interface of interest, `interface`, which is the type of the interface of
 22
+    interest, and `required_keys`, which is the set of keys required for the
 23
+    relation to be considered complete.  The data for all interfaces matching
 24
+    the `name` attribute that are complete will be used to populate the dictionary
 25
+    values (see `get_data`, below).
 26
+
 27
+    The generated context will be namespaced under the relation :attr:`name`,
 28
+    to prevent potential naming conflicts.
 29
+
 30
+    :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
 31
+    :param list additional_required_keys: Extend the list of :attr:`required_keys`
 32
+    """
 33
+    name = None
 34
+    interface = None
 35
+    required_keys = []
 36
+
 37
+    def __init__(self, name=None, additional_required_keys=None):
 38
+        if name is not None:
 39
+            self.name = name
 40
+        if additional_required_keys is not None:
 41
+            self.required_keys.extend(additional_required_keys)
 42
+        self.get_data()
 43
+
 44
+    def __bool__(self):
 45
+        """
 46
+        Returns True if all of the required_keys are available.
 47
+        """
 48
+        return self.is_ready()
 49
+
 50
+    __nonzero__ = __bool__
 51
+
 52
+    def __repr__(self):
 53
+        return super(RelationContext, self).__repr__()
 54
+
 55
+    def is_ready(self):
 56
+        """
 57
+        Returns True if all of the `required_keys` are available from any units.
 58
+        """
 59
+        ready = len(self.get(self.name, [])) > 0
 60
+        if not ready:
 61
+            hookenv.log('Incomplete relation: {}'.format(self.__class__.__name__), hookenv.DEBUG)
 62
+        return ready
 63
+
 64
+    def _is_ready(self, unit_data):
 65
+        """
 66
+        Helper method that tests a set of relation data and returns True if
 67
+        all of the `required_keys` are present.
 68
+        """
 69
+        return set(unit_data.keys()).issuperset(set(self.required_keys))
 70
+
 71
+    def get_data(self):
 72
+        """
 73
+        Retrieve the relation data for each unit involved in a relation and,
 74
+        if complete, store it in a list under `self[self.name]`.  This
 75
+        is automatically called when the RelationContext is instantiated.
 76
+
 77
+        The units are sorted lexicographically first by the service ID, then by
 78
+        the unit ID.  Thus, if an interface has two other services, 'db:1'
 79
+        and 'db:2', with 'db:1' having two units, 'wordpress/0' and 'wordpress/1',
 80
+        and 'db:2' having one unit, 'mediawiki/0', all of which have a complete
 81
+        set of data, the relation data for the units will be stored in the
 82
+        order: 'wordpress/0', 'wordpress/1', 'mediawiki/0'.
 83
+
 84
+        If you only care about a single unit on the relation, you can just
 85
+        access it as `{{ interface[0]['key'] }}`.  However, if you can at all
 86
+        support multiple units on a relation, you should iterate over the list,
 87
+        like::
 88
+
 89
+            {% for unit in interface -%}
 90
+                {{ unit['key'] }}{% if not loop.last %},{% endif %}
 91
+            {%- endfor %}
 92
+
 93
+        Note that since all sets of relation data from all related services and
 94
+        units are in a single list, if you need to know which service or unit a
 95
+        set of data came from, you'll need to extend this class to preserve
 96
+        that information.
 97
+        """
 98
+        if not hookenv.relation_ids(self.name):
 99
+            return
100
+
101
+        ns = self.setdefault(self.name, [])
102
+        for rid in sorted(hookenv.relation_ids(self.name)):
103
+            for unit in sorted(hookenv.related_units(rid)):
104
+                reldata = hookenv.relation_get(rid=rid, unit=unit)
105
+                if self._is_ready(reldata):
106
+                    ns.append(reldata)
107
+
108
+    def provide_data(self):
109
+        """
110
+        Return data to be relation_set for this interface.
111
+        """
112
+        return {}
113
+
114
+
115
+class MysqlRelation(RelationContext):
116
+    """
117
+    Relation context for the `mysql` interface.
118
+
119
+    :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
120
+    :param list additional_required_keys: Extend the list of :attr:`required_keys`
121
+    """
122
+    name = 'db'
123
+    interface = 'mysql'
124
+    required_keys = ['host', 'user', 'password', 'database']
125
+
126
+
127
+class HttpRelation(RelationContext):
128
+    """
129
+    Relation context for the `http` interface.
130
+
131
+    :param str name: Override the relation :attr:`name`, since it can vary from charm to charm
132
+    :param list additional_required_keys: Extend the list of :attr:`required_keys`
133
+    """
134
+    name = 'website'
135
+    interface = 'http'
136
+    required_keys = ['host', 'port']
137
+
138
+    def provide_data(self):
139
+        return {
140
+            'host': hookenv.unit_get('private-address'),
141
+            'port': 80,
142
+        }
143
+
144
+
145
+class RequiredConfig(dict):
146
+    """
147
+    Data context that loads config options, requiring one or more of them to be set.
148
+
149
+    Once the required options have been changed from their default values, all
150
+    config options will be available, namespaced under `config` to prevent
151
+    potential naming conflicts (for example, between a config option and a
152
+    relation property).
153
+
154
+    :param list *args: List of options that must be changed from their default values.
155
+    """
156
+
157
+    def __init__(self, *args):
158
+        self.required_options = args
159
+        self['config'] = hookenv.config()
160
+        with open(os.path.join(hookenv.charm_dir(), 'config.yaml')) as fp:
161
+            self.config = yaml.load(fp).get('options', {})
162
+
163
+    def __bool__(self):
164
+        for option in self.required_options:
165
+            if option not in self['config']:
166
+                return False
167
+            current_value = self['config'][option]
168
+            default_value = self.config[option].get('default')
169
+            if current_value == default_value:
170
+                return False
171
+            if current_value in (None, '') and default_value in (None, ''):
172
+                return False
173
+        return True
174
+
175
+    def __nonzero__(self):
176
+        return self.__bool__()
177
+
178
+
179
+class StoredContext(dict):
180
+    """
181
+    A data context that always returns the data that it was first created with.
182
+
183
+    This is useful to do a one-time generation of things like passwords, that
184
+    will thereafter use the same value that was originally generated, instead
185
+    of generating a new value each time it is run.
186
+    """
187
+    def __init__(self, file_name, config_data):
188
+        """
189
+        If the file exists, populate `self` with the data from the file.
190
+        Otherwise, populate with the given data and persist it to the file.
191
+        """
192
+        if os.path.exists(file_name):
193
+            self.update(self.read_context(file_name))
194
+        else:
195
+            self.store_context(file_name, config_data)
196
+            self.update(config_data)
197
+
198
+    def store_context(self, file_name, config_data):
199
+        if not os.path.isabs(file_name):
200
+            file_name = os.path.join(hookenv.charm_dir(), file_name)
201
+        with open(file_name, 'w') as file_stream:
202
+            os.fchmod(file_stream.fileno(), 0600)
203
+            yaml.dump(config_data, file_stream)
204
+
205
+    def read_context(self, file_name):
206
+        if not os.path.isabs(file_name):
207
+            file_name = os.path.join(hookenv.charm_dir(), file_name)
208
+        with open(file_name, 'r') as file_stream:
209
+            data = yaml.load(file_stream)
210
+            if not data:
211
+                raise OSError("%s is empty" % file_name)
212
+            return data
213
+
214
+
215
+class TemplateCallback(ManagerCallback):
216
+    """
217
+    Callback class that will render a Jinja2 template, for use as a ready action.
218
+
219
+    :param str source: The template source file, relative to `$CHARM_DIR/templates`
220
+    :param str target: The target to write the rendered template to
221
+    :param str owner: The owner of the rendered file
222
+    :param str group: The group of the rendered file
223
+    :param int perms: The permissions of the rendered file
224
+    """
225
+    def __init__(self, source, target, owner='root', group='root', perms=0444):
226
+        self.source = source
227
+        self.target = target
228
+        self.owner = owner
229
+        self.group = group
230
+        self.perms = perms
231
+
232
+    def __call__(self, manager, service_name, event_name):
233
+        service = manager.get_service(service_name)
234
+        context = {}
235
+        for ctx in service.get('required_data', []):
236
+            context.update(ctx)
237
+        templating.render(self.source, self.target, context,
238
+                          self.owner, self.group, self.perms)
239
+
240
+
241
+# Convenience aliases for templates
242
+render_template = template = TemplateCallback
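Taken together, base.py and helpers.py above form the services framework. A minimal sketch of how a charm's services definition might use it follows; the relation name, interface, ports, and template name are assumptions for illustration, not the charm's actual definitions.

    from charmhelpers.core import hookenv
    from charmhelpers.core import services


    class PeerRelation(services.RelationContext):
        # Hypothetical peer relation; required_keys must match what the
        # interface actually publishes.
        name = 'peer'
        interface = 'elasticsearch-peer'
        required_keys = ['private-address']


    manager = services.ServiceManager([{
        'service': 'elasticsearch',
        'ports': [9200, 9300],
        'required_data': [PeerRelation(), hookenv.config()],
        'data_ready': [
            services.template(source='elasticsearch.yml',
                              target='/etc/elasticsearch/elasticsearch.yml'),
        ],
    }])
    manager.manage()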

hooks/charmhelpers/core/sysctl.py

 1
--- 
 2
+++ hooks/charmhelpers/core/sysctl.py
 3
@@ -0,0 +1,34 @@
 4
+#!/usr/bin/env python
 5
+# -*- coding: utf-8 -*-
 6
+
 7
+__author__ = 'Jorge Niedbalski R. <jorge.niedbalski@canonical.com>'
 8
+
 9
+import yaml
10
+
11
+from subprocess import check_call
12
+
13
+from charmhelpers.core.hookenv import (
14
+    log,
15
+    DEBUG,
16
+)
17
+
18
+
19
+def create(sysctl_dict, sysctl_file):
20
+    """Creates a sysctl.conf file from a YAML associative array
21
+
22
+    :param sysctl_dict: a YAML mapping of sysctl options, eg "{ 'kernel.max_pid': 1337 }"
23
+    :type sysctl_dict: str
24
+    :param sysctl_file: path to the sysctl file to be saved
25
+    :type sysctl_file: str or unicode
26
+    :returns: None
27
+    """
28
+    sysctl_dict = yaml.load(sysctl_dict)
29
+
30
+    with open(sysctl_file, "w") as fd:
31
+        for key, value in sysctl_dict.items():
32
+            fd.write("{}={}\n".format(key, value))
33
+
34
+    log("Updating sysctl_file: %s values: %s" % (sysctl_file, sysctl_dict),
35
+        level=DEBUG)
36
+
37
+    check_call(["sysctl", "-p", sysctl_file])
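Note that create() above expects the sysctl options as a YAML string (it is yaml.load()ed internally), not a dict. An illustrative call, with example values only:

    from charmhelpers.core import sysctl

    # Writes /etc/sysctl.d/50-example.conf from the YAML mapping string
    # and applies it with `sysctl -p`.
    sysctl.create("{vm.max_map_count: 262144, vm.swappiness: 1}",
                  "/etc/sysctl.d/50-example.conf")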

hooks/charmhelpers/core/templating.py

 1
--- 
 2
+++ hooks/charmhelpers/core/templating.py
 3
@@ -0,0 +1,51 @@
 4
+import os
 5
+
 6
+from charmhelpers.core import host
 7
+from charmhelpers.core import hookenv
 8
+
 9
+
10
+def render(source, target, context, owner='root', group='root', perms=0444, templates_dir=None):
11
+    """
12
+    Render a template.
13
+
14
+    The `source` path, if not absolute, is relative to the `templates_dir`.
15
+
16
+    The `target` path should be absolute.
17
+
18
+    The context should be a dict containing the values to be replaced in the
19
+    template.
20
+
21
+    The `owner`, `group`, and `perms` options will be passed to `write_file`.
22
+
23
+    If omitted, `templates_dir` defaults to the `templates` folder in the charm.
24
+
25
+    Note: Using this requires python-jinja2; if it is not installed, calling
26
+    this will attempt to use charmhelpers.fetch.apt_install to install it.
27
+    """
28
+    try:
29
+        from jinja2 import FileSystemLoader, Environment, exceptions
30
+    except ImportError:
31
+        try:
32
+            from charmhelpers.fetch import apt_install
33
+        except ImportError:
34
+            hookenv.log('Could not import jinja2, and could not import '
35
+                        'charmhelpers.fetch to install it',
36
+                        level=hookenv.ERROR)
37
+            raise
38
+        apt_install('python-jinja2', fatal=True)
39
+        from jinja2 import FileSystemLoader, Environment, exceptions
40
+
41
+    if templates_dir is None:
42
+        templates_dir = os.path.join(hookenv.charm_dir(), 'templates')
43
+    loader = Environment(loader=FileSystemLoader(templates_dir))
44
+    try:
45
+        source = source
46
+        template = loader.get_template(source)
47
+    except exceptions.TemplateNotFound as e:
48
+        hookenv.log('Could not load template %s from %s.' %
49
+                    (source, templates_dir),
50
+                    level=hookenv.ERROR)
51
+        raise e
52
+    content = template.render(context)
53
+    host.mkdir(os.path.dirname(target))
54
+    host.write_file(target, content, owner, group, perms)
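A minimal example of render() as defined above; the template name, target path, and context keys are hypothetical:

    from charmhelpers.core import templating

    # Renders $CHARM_DIR/templates/elasticsearch.yml into place, creating the
    # target directory and setting ownership/permissions via the host helpers.
    templating.render(source='elasticsearch.yml',
                      target='/etc/elasticsearch/elasticsearch.yml',
                      context={'cluster_name': 'my-cluster'},
                      owner='root', group='root', perms=0o644)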

hooks/charmhelpers/fetch/__init__.py

  1
--- 
  2
+++ hooks/charmhelpers/fetch/__init__.py
  3
@@ -0,0 +1,414 @@
  4
+import importlib
  5
+from tempfile import NamedTemporaryFile
  6
+import time
  7
+from yaml import safe_load
  8
+from charmhelpers.core.host import (
  9
+    lsb_release
 10
+)
 11
+from urlparse import (
 12
+    urlparse,
 13
+    urlunparse,
 14
+)
 15
+import subprocess
 16
+from charmhelpers.core.hookenv import (
 17
+    config,
 18
+    log,
 19
+)
 20
+import os
 21
+
 22
+
 23
+CLOUD_ARCHIVE = """# Ubuntu Cloud Archive
 24
+deb http://ubuntu-cloud.archive.canonical.com/ubuntu {} main
 25
+"""
 26
+PROPOSED_POCKET = """# Proposed
 27
+deb http://archive.ubuntu.com/ubuntu {}-proposed main universe multiverse restricted
 28
+"""
 29
+CLOUD_ARCHIVE_POCKETS = {
 30
+    # Folsom
 31
+    'folsom': 'precise-updates/folsom',
 32
+    'precise-folsom': 'precise-updates/folsom',
 33
+    'precise-folsom/updates': 'precise-updates/folsom',
 34
+    'precise-updates/folsom': 'precise-updates/folsom',
 35
+    'folsom/proposed': 'precise-proposed/folsom',
 36
+    'precise-folsom/proposed': 'precise-proposed/folsom',
 37
+    'precise-proposed/folsom': 'precise-proposed/folsom',
 38
+    # Grizzly
 39
+    'grizzly': 'precise-updates/grizzly',
 40
+    'precise-grizzly': 'precise-updates/grizzly',
 41
+    'precise-grizzly/updates': 'precise-updates/grizzly',
 42
+    'precise-updates/grizzly': 'precise-updates/grizzly',
 43
+    'grizzly/proposed': 'precise-proposed/grizzly',
 44
+    'precise-grizzly/proposed': 'precise-proposed/grizzly',
 45
+    'precise-proposed/grizzly': 'precise-proposed/grizzly',
 46
+    # Havana
 47
+    'havana': 'precise-updates/havana',
 48
+    'precise-havana': 'precise-updates/havana',
 49
+    'precise-havana/updates': 'precise-updates/havana',
 50
+    'precise-updates/havana': 'precise-updates/havana',
 51
+    'havana/proposed': 'precise-proposed/havana',
 52
+    'precise-havana/proposed': 'precise-proposed/havana',
 53
+    'precise-proposed/havana': 'precise-proposed/havana',
 54
+    # Icehouse
 55
+    'icehouse': 'precise-updates/icehouse',
 56
+    'precise-icehouse': 'precise-updates/icehouse',
 57
+    'precise-icehouse/updates': 'precise-updates/icehouse',
 58
+    'precise-updates/icehouse': 'precise-updates/icehouse',
 59
+    'icehouse/proposed': 'precise-proposed/icehouse',
 60
+    'precise-icehouse/proposed': 'precise-proposed/icehouse',
 61
+    'precise-proposed/icehouse': 'precise-proposed/icehouse',
 62
+    # Juno
 63
+    'juno': 'trusty-updates/juno',
 64
+    'trusty-juno': 'trusty-updates/juno',
 65
+    'trusty-juno/updates': 'trusty-updates/juno',
 66
+    'trusty-updates/juno': 'trusty-updates/juno',
 67
+    'juno/proposed': 'trusty-proposed/juno',
 68
+    'juno/proposed': 'trusty-proposed/juno',
 69
+    'trusty-juno/proposed': 'trusty-proposed/juno',
 70
+    'trusty-proposed/juno': 'trusty-proposed/juno',
 71
+}
 72
+
 73
+# The order of this list is very important. Handlers should be listed from
 74
+# least- to most-specific URL matching.
 75
+FETCH_HANDLERS = (
 76
+    'charmhelpers.fetch.archiveurl.ArchiveUrlFetchHandler',
 77
+    'charmhelpers.fetch.bzrurl.BzrUrlFetchHandler',
 78
+    'charmhelpers.fetch.giturl.GitUrlFetchHandler',
 79
+)
 80
+
 81
+APT_NO_LOCK = 100  # The return code for "couldn't acquire lock" in APT.
 82
+APT_NO_LOCK_RETRY_DELAY = 10  # Wait 10 seconds between apt lock checks.
 83
+APT_NO_LOCK_RETRY_COUNT = 30  # Retry to acquire the lock X times.
 84
+
 85
+
 86
+class SourceConfigError(Exception):
 87
+    pass
 88
+
 89
+
 90
+class UnhandledSource(Exception):
 91
+    pass
 92
+
 93
+
 94
+class AptLockError(Exception):
 95
+    pass
 96
+
 97
+
 98
+class BaseFetchHandler(object):
 99
+
100
+    """Base class for FetchHandler implementations in fetch plugins"""
101
+
102
+    def can_handle(self, source):
103
+        """Returns True if the source can be handled. Otherwise returns
104
+        a string explaining why it cannot"""
105
+        return "Wrong source type"
106
+
107
+    def install(self, source):
108
+        """Try to download and unpack the source. Return the path to the
109
+        unpacked files or raise UnhandledSource."""
110
+        raise UnhandledSource("Wrong source type {}".format(source))
111
+
112
+    def parse_url(self, url):
113
+        return urlparse(url)
114
+
115
+    def base_url(self, url):
116
+        """Return url without querystring or fragment"""
117
+        parts = list(self.parse_url(url))
118
+        parts[4:] = ['' for i in parts[4:]]
119
+        return urlunparse(parts)
120
+
121
+
122
+def filter_installed_packages(packages):
123
+    """Returns a list of packages that require installation"""
124
+    cache = apt_cache()
125
+    _pkgs = []
126
+    for package in packages:
127
+        try:
128
+            p = cache[package]
129
+            p.current_ver or _pkgs.append(package)
130
+        except KeyError:
131
+            log('Package {} has no installation candidate.'.format(package),
132
+                level='WARNING')
133
+            _pkgs.append(package)
134
+    return _pkgs
135
+
136
+
137
+def apt_cache(in_memory=True):
138
+    """Build and return an apt cache"""
139
+    import apt_pkg
140
+    apt_pkg.init()
141
+    if in_memory:
142
+        apt_pkg.config.set("Dir::Cache::pkgcache", "")
143
+        apt_pkg.config.set("Dir::Cache::srcpkgcache", "")
144
+    return apt_pkg.Cache()
145
+
146
+
147
+def apt_install(packages, options=None, fatal=False):
148
+    """Install one or more packages"""
149
+    if options is None:
150
+        options = ['--option=Dpkg::Options::=--force-confold']
151
+
152
+    cmd = ['apt-get', '--assume-yes']
153
+    cmd.extend(options)
154
+    cmd.append('install')
155
+    if isinstance(packages, basestring):
156
+        cmd.append(packages)
157
+    else:
158
+        cmd.extend(packages)
159
+    log("Installing {} with options: {}".format(packages,
160
+                                                options))
161
+    _run_apt_command(cmd, fatal)
162
+
163
+
164
+def apt_upgrade(options=None, fatal=False, dist=False):
165
+    """Upgrade all packages"""
166
+    if options is None:
167
+        options = ['--option=Dpkg::Options::=--force-confold']
168
+
169
+    cmd = ['apt-get', '--assume-yes']
170
+    cmd.extend(options)
171
+    if dist:
172
+        cmd.append('dist-upgrade')
173
+    else:
174
+        cmd.append('upgrade')
175
+    log("Upgrading with options: {}".format(options))
176
+    _run_apt_command(cmd, fatal)
177
+
178
+
179
+def apt_update(fatal=False):
180
+    """Update local apt cache"""
181
+    cmd = ['apt-get', 'update']
182
+    _run_apt_command(cmd, fatal)
183
+
184
+
185
+def apt_purge(packages, fatal=False):
186
+    """Purge one or more packages"""
187
+    cmd = ['apt-get', '--assume-yes', 'purge']
188
+    if isinstance(packages, basestring):
189
+        cmd.append(packages)
190
+    else:
191
+        cmd.extend(packages)
192
+    log("Purging {}".format(packages))
193
+    _run_apt_command(cmd, fatal)
194
+
195
+
196
+def apt_hold(packages, fatal=False):
197
+    """Hold one or more packages"""
198
+    cmd = ['apt-mark', 'hold']
199
+    if isinstance(packages, basestring):
200
+        cmd.append(packages)
201
+    else:
202
+        cmd.extend(packages)
203
+    log("Holding {}".format(packages))
204
+
205
+    if fatal:
206
+        subprocess.check_call(cmd)
207
+    else:
208
+        subprocess.call(cmd)
209
+
210
+
211
+def add_source(source, key=None):
212
+    """Add a package source to this system.
213
+
214
+    @param source: a URL or sources.list entry, as supported by
215
+    add-apt-repository(1). Examples::
216
+
217
+        ppa:charmers/example
218
+        deb https://stub:key@private.example.com/ubuntu trusty main
219
+
220
+    In addition:
221
+        'proposed:' may be used to enable the standard 'proposed'
222
+        pocket for the release.
223
+        'cloud:' may be used to activate official cloud archive pockets,
224
+        such as 'cloud:icehouse'
225
+        'distro' may be used as a noop
226
+
227
+    @param key: A key to be added to the system's APT keyring and used
228
+    to verify the signatures on packages. Ideally, this should be an
229
+    ASCII format GPG public key including the block headers. A GPG key
230
+    id may also be used, but be aware that only insecure protocols are
231
+    available to retrieve the actual public key from a public keyserver,
232
+    placing your Juju environment at risk. ppa and cloud archive keys
233
+    are securely added automatically, so should not be provided.
234
+    """
235
+    if source is None:
236
+        log('Source is not present. Skipping')
237
+        return
238
+
239
+    if (source.startswith('ppa:') or
240
+        source.startswith('http') or
241
+        source.startswith('deb ') or
242
+            source.startswith('cloud-archive:')):
243
+        subprocess.check_call(['add-apt-repository', '--yes', source])
244
+    elif source.startswith('cloud:'):
245
+        apt_install(filter_installed_packages(['ubuntu-cloud-keyring']),
246
+                    fatal=True)
247
+        pocket = source.split(':')[-1]
248
+        if pocket not in CLOUD_ARCHIVE_POCKETS:
249
+            raise SourceConfigError(
250
+                'Unsupported cloud: source option %s' %
251
+                pocket)
252
+        actual_pocket = CLOUD_ARCHIVE_POCKETS[pocket]
253
+        with open('/etc/apt/sources.list.d/cloud-archive.list', 'w') as apt:
254
+            apt.write(CLOUD_ARCHIVE.format(actual_pocket))
255
+    elif source == 'proposed':
256
+        release = lsb_release()['DISTRIB_CODENAME']
257
+        with open('/etc/apt/sources.list.d/proposed.list', 'w') as apt:
258
+            apt.write(PROPOSED_POCKET.format(release))
259
+    elif source == 'distro':
260
+        pass
261
+    else:
262
+        log("Unknown source: {!r}".format(source))
263
+
264
+    if key:
265
+        if '-----BEGIN PGP PUBLIC KEY BLOCK-----' in key:
266
+            with NamedTemporaryFile() as key_file:
267
+                key_file.write(key)
268
+                key_file.flush()
269
+                key_file.seek(0)
270
+                subprocess.check_call(['apt-key', 'add', '-'], stdin=key_file)
271
+        else:
272
+            # Note that hkp: is in no way a secure protocol. Using a
273
+            # GPG key id is pointless from a security POV unless you
274
+            # absolutely trust your network and DNS.
275
+            subprocess.check_call(['apt-key', 'adv', '--keyserver',
276
+                                   'hkp://keyserver.ubuntu.com:80', '--recv',
277
+                                   key])
278
+
279
+
280
+def configure_sources(update=False,
281
+                      sources_var='install_sources',
282
+                      keys_var='install_keys'):
283
+    """
284
+    Configure multiple sources from charm configuration.
285
+
286
+    The lists are encoded as yaml fragments in the configuration.
287
+    The fragment needs to be included as a string. Sources and their
288
+    corresponding keys are of the types supported by add_source().
289
+
290
+    Example config:
291
+        install_sources: |
292
+          - "ppa:foo"
293
+          - "http://example.com/repo precise main"
294
+        install_keys: |
295
+          - null
296
+          - "a1b2c3d4"
297
+
298
+    Note that 'null' (a.k.a. None) should not be quoted.
299
+    """
300
+    sources = safe_load((config(sources_var) or '').strip()) or []
301
+    keys = safe_load((config(keys_var) or '').strip()) or None
302
+
303
+    if isinstance(sources, basestring):
304
+        sources = [sources]
305
+
306
+    if keys is None:
307
+        for source in sources:
308
+            add_source(source, None)
309
+    else:
310
+        if isinstance(keys, basestring):
311
+            keys = [keys]
312
+
313
+        if len(sources) != len(keys):
314
+            raise SourceConfigError(
315
+                'Install sources and keys lists are different lengths')
316
+        for source, key in zip(sources, keys):
317
+            add_source(source, key)
318
+    if update:
319
+        apt_update(fatal=True)
320
+
321
+
322
+def install_remote(source, *args, **kwargs):
323
+    """
324
+    Install a file tree from a remote source
325
+
326
+    The specified source should be a url of the form:
327
+        scheme://[host]/path[#[option=value][&...]]
328
+
329
+    Schemes supported are based on this module's submodules.
330
+    Options supported are submodule-specific.
331
+    Additional arguments are passed through to the submodule.
332
+
333
+    For example::
334
+
335
+        dest = install_remote('http://example.com/archive.tgz',
336
+                              checksum='deadbeef',
337
+                              hash_type='sha1')
338
+
339
+    This will download `archive.tgz`, validate it using SHA1 and, if
340
+    the file is ok, extract it and return the directory in which it
341
+    was extracted.  If the checksum fails, it will raise
342
+    :class:`charmhelpers.core.host.ChecksumError`.
343
+    """
344
+    # We ONLY check for True here because can_handle may return a string
345
+    # explaining why it can't handle a given source.
346
+    handlers = [h for h in plugins() if h.can_handle(source) is True]
347
+    installed_to = None
348
+    for handler in handlers:
349
+        try:
350
+            installed_to = handler.install(source, *args, **kwargs)
351
+        except UnhandledSource:
352
+            pass
353
+    if not installed_to:
354
+        raise UnhandledSource("No handler found for source {}".format(source))
355
+    return installed_to
356
+
357
+
358
+def install_from_config(config_var_name):
359
+    charm_config = config()
360
+    source = charm_config[config_var_name]
361
+    return install_remote(source)
362
+
363
+
364
+def plugins(fetch_handlers=None):
365
+    if not fetch_handlers:
366
+        fetch_handlers = FETCH_HANDLERS
367
+    plugin_list = []
368
+    for handler_name in fetch_handlers:
369
+        package, classname = handler_name.rsplit('.', 1)
370
+        try:
371
+            handler_class = getattr(
372
+                importlib.import_module(package),
373
+                classname)
374
+            plugin_list.append(handler_class())
375
+        except (ImportError, AttributeError):
376
+            # Skip missing plugins so that they can be omitted from
377
+            # installation if desired
378
+            log("FetchHandler {} not found, skipping plugin".format(
379
+                handler_name))
380
+    return plugin_list
381
+
382
+
383
+def _run_apt_command(cmd, fatal=False):
384
+    """
385
+    Run an APT command, checking output and retrying if the fatal flag is set
386
+    to True.
387
+
388
+    :param cmd: str: The apt command to run.
389
+    :param fatal: bool: Whether the command's exit status should be checked
390
+        and the call retried if the apt lock cannot be acquired.
391
+    """
392
+    env = os.environ.copy()
393
+
394
+    if 'DEBIAN_FRONTEND' not in env:
395
+        env['DEBIAN_FRONTEND'] = 'noninteractive'
396
+
397
+    if fatal:
398
+        retry_count = 0
399
+        result = None
400
+
401
+        # If the command is considered "fatal", we need to retry if the apt
402
+        # lock was not acquired.
403
+
404
+        while result is None or result == APT_NO_LOCK:
405
+            try:
406
+                result = subprocess.check_call(cmd, env=env)
407
+            except subprocess.CalledProcessError as e:
408
+                retry_count = retry_count + 1
409
+                if retry_count > APT_NO_LOCK_RETRY_COUNT:
410
+                    raise
411
+                result = e.returncode
412
+                log("Couldn't acquire DPKG lock. Will retry in {} seconds."
413
+                    "".format(APT_NO_LOCK_RETRY_DELAY))
414
+                time.sleep(APT_NO_LOCK_RETRY_DELAY)
415
+
416
+    else:
417
+        subprocess.call(cmd, env=env)
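A minimal sketch of the source/key pairing that configure_sources() enforces above; the repository URL and key value are placeholders rather than anything this charm actually ships:

    from charmhelpers.fetch import add_source, apt_update
    # Sources and keys normally arrive as equal-length lists from charm config;
    # a length mismatch raises SourceConfigError.
    sources = ['deb https://artifacts.example.com/packages/5.x/apt stable main']
    keys = ['0123ABCD']  # placeholder key value handed straight to add_source()
    for source, key in zip(sources, keys):
        add_source(source, key)
    apt_update(fatal=True)  # configure_sources() only does this when update is requested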

hooks/charmhelpers/fetch/archiveurl.py

  1
--- 
  2
+++ hooks/charmhelpers/fetch/archiveurl.py
  3
@@ -0,0 +1,108 @@
  4
+import os
  5
+import urllib2
  6
+from urllib import urlretrieve
  7
+import urlparse
  8
+import hashlib
  9
+
 10
+from charmhelpers.fetch import (
 11
+    BaseFetchHandler,
 12
+    UnhandledSource
 13
+)
 14
+from charmhelpers.payload.archive import (
 15
+    get_archive_handler,
 16
+    extract,
 17
+)
 18
+from charmhelpers.core.host import mkdir, check_hash
 19
+
 20
+
 21
+class ArchiveUrlFetchHandler(BaseFetchHandler):
 22
+    """
 23
+    Handler to download archive files from arbitrary URLs.
 24
+
 25
+    Can fetch from http, https, ftp, and file URLs.
 26
+
 27
+    Can install either tarballs (.tar, .tgz, .tbz2, etc) or zip files.
 28
+
 29
+    Installs the contents of the archive in $CHARM_DIR/fetched/.
 30
+    """
 31
+    def can_handle(self, source):
 32
+        url_parts = self.parse_url(source)
 33
+        if url_parts.scheme not in ('http', 'https', 'ftp', 'file'):
 34
+            return "Wrong source type"
 35
+        if get_archive_handler(self.base_url(source)):
 36
+            return True
 37
+        return False
 38
+
 39
+    def download(self, source, dest):
 40
+        """
 41
+        Download an archive file.
 42
+
 43
+        :param str source: URL pointing to an archive file.
 44
+        :param str dest: Local path location to download archive file to.
 45
+        """
 46
+        # propagate all exceptions
 47
+        # URLError, OSError, etc
 48
+        proto, netloc, path, params, query, fragment = urlparse.urlparse(source)
 49
+        if proto in ('http', 'https'):
 50
+            auth, barehost = urllib2.splituser(netloc)
 51
+            if auth is not None:
 52
+                source = urlparse.urlunparse((proto, barehost, path, params, query, fragment))
 53
+                username, password = urllib2.splitpasswd(auth)
 54
+                passman = urllib2.HTTPPasswordMgrWithDefaultRealm()
 55
+                # Realm is set to None in add_password to force the username and password
 56
+                # to be used regardless of the realm
 57
+                passman.add_password(None, source, username, password)
 58
+                authhandler = urllib2.HTTPBasicAuthHandler(passman)
 59
+                opener = urllib2.build_opener(authhandler)
 60
+                urllib2.install_opener(opener)
 61
+        response = urllib2.urlopen(source)
 62
+        try:
 63
+            with open(dest, 'w') as dest_file:
 64
+                dest_file.write(response.read())
 65
+        except Exception as e:
 66
+            if os.path.isfile(dest):
 67
+                os.unlink(dest)
 68
+            raise e
 69
+
 70
+    # Mandatory file validation via SHA1 or MD5 hashing.
 71
+    def download_and_validate(self, url, hashsum, validate="sha1"):
 72
+        tempfile, headers = urlretrieve(url)
 73
+        check_hash(tempfile, hashsum, validate)
 74
+        return tempfile
 75
+
 76
+    def install(self, source, dest=None, checksum=None, hash_type='sha1'):
 77
+        """
 78
+        Download and install an archive file, with optional checksum validation.
 79
+
 80
+        The checksum can also be given on the `source` URL's fragment.
 81
+        For example::
 82
+
 83
+            handler.install('http://example.com/file.tgz#sha1=deadbeef')
 84
+
 85
+        :param str source: URL pointing to an archive file.
 86
+        :param str dest: Local destination path to install to. If not given,
 87
+            installs to `$CHARM_DIR/archives/archive_file_name`.
 88
+        :param str checksum: If given, validate the archive file after download.
 89
+        :param str hash_type: Algorithm used to generate `checksum`.
 90
+            Can be any hash algorithm supported by :mod:`hashlib`,
 91
+            such as md5, sha1, sha256, sha512, etc.
 92
+
 93
+        """
 94
+        url_parts = self.parse_url(source)
 95
+        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), 'fetched')
 96
+        if not os.path.exists(dest_dir):
 97
+            mkdir(dest_dir, perms=0o755)
 98
+        dld_file = os.path.join(dest_dir, os.path.basename(url_parts.path))
 99
+        try:
100
+            self.download(source, dld_file)
101
+        except urllib2.URLError as e:
102
+            raise UnhandledSource(e.reason)
103
+        except OSError as e:
104
+            raise UnhandledSource(e.strerror)
105
+        options = urlparse.parse_qs(url_parts.fragment)
106
+        for key, value in options.items():
107
+            if key in hashlib.algorithms:
108
+                check_hash(dld_file, value, key)
109
+        if checksum:
110
+            check_hash(dld_file, checksum, hash_type)
111
+        return extract(dld_file, dest)
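A minimal usage sketch of the handler above; the URLs and checksums are placeholders:

    from charmhelpers.fetch.archiveurl import ArchiveUrlFetchHandler

    handler = ArchiveUrlFetchHandler()
    # Checksum passed explicitly:
    dest = handler.install('http://example.com/payload.tgz',
                           checksum='deadbeef', hash_type='sha1')
    # ...or embedded in the URL fragment, which parse_qs() above picks up:
    dest = handler.install('http://example.com/payload.tgz#sha256=abc123')
    # Either way the archive is downloaded to $CHARM_DIR/fetched/ and dest is
    # whatever directory extract() returns.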

hooks/charmhelpers/fetch/bzrurl.py

 1
--- 
 2
+++ hooks/charmhelpers/fetch/bzrurl.py
 3
@@ -0,0 +1,50 @@
 4
+import os
 5
+from charmhelpers.fetch import (
 6
+    BaseFetchHandler,
 7
+    UnhandledSource
 8
+)
 9
+from charmhelpers.core.host import mkdir
10
+
11
+try:
12
+    from bzrlib.branch import Branch
13
+except ImportError:
14
+    from charmhelpers.fetch import apt_install
15
+    apt_install("python-bzrlib")
16
+    from bzrlib.branch import Branch
17
+
18
+
19
+class BzrUrlFetchHandler(BaseFetchHandler):
20
+    """Handler for bazaar branches via generic and lp URLs"""
21
+    def can_handle(self, source):
22
+        url_parts = self.parse_url(source)
23
+        if url_parts.scheme not in ('bzr+ssh', 'lp'):
24
+            return False
25
+        else:
26
+            return True
27
+
28
+    def branch(self, source, dest):
29
+        url_parts = self.parse_url(source)
30
+        # If we use lp:branchname scheme we need to load plugins
31
+        if not self.can_handle(source):
32
+            raise UnhandledSource("Cannot handle {}".format(source))
33
+        if url_parts.scheme == "lp":
34
+            from bzrlib.plugin import load_plugins
35
+            load_plugins()
36
+        try:
37
+            remote_branch = Branch.open(source)
38
+            remote_branch.bzrdir.sprout(dest).open_branch()
39
+        except Exception as e:
40
+            raise e
41
+
42
+    def install(self, source):
43
+        url_parts = self.parse_url(source)
44
+        branch_name = url_parts.path.strip("/").split("/")[-1]
45
+        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
46
+                                branch_name)
47
+        if not os.path.exists(dest_dir):
48
+            mkdir(dest_dir, perms=0o755)
49
+        try:
50
+            self.branch(source, dest_dir)
51
+        except OSError as e:
52
+            raise UnhandledSource(e.strerror)
53
+        return dest_dir
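A minimal usage sketch (the branch URL is illustrative; only bzr+ssh:// and lp: URLs pass can_handle()):

    from charmhelpers.fetch.bzrurl import BzrUrlFetchHandler

    handler = BzrUrlFetchHandler()
    # Sprouts the remote branch into $CHARM_DIR/fetched/<branch name>
    dest = handler.install('lp:some-project')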

hooks/charmhelpers/fetch/giturl.py

 1
--- 
 2
+++ hooks/charmhelpers/fetch/giturl.py
 3
@@ -0,0 +1,44 @@
 4
+import os
 5
+from charmhelpers.fetch import (
 6
+    BaseFetchHandler,
 7
+    UnhandledSource
 8
+)
 9
+from charmhelpers.core.host import mkdir
10
+
11
+try:
12
+    from git import Repo
13
+except ImportError:
14
+    from charmhelpers.fetch import apt_install
15
+    apt_install("python-git")
16
+    from git import Repo
17
+
18
+
19
+class GitUrlFetchHandler(BaseFetchHandler):
20
+    """Handler for git branches via generic and github URLs"""
21
+    def can_handle(self, source):
22
+        url_parts = self.parse_url(source)
23
+        # TODO (mattyw) no support for ssh git@ yet
24
+        if url_parts.scheme not in ('http', 'https', 'git'):
25
+            return False
26
+        else:
27
+            return True
28
+
29
+    def clone(self, source, dest, branch):
30
+        if not self.can_handle(source):
31
+            raise UnhandledSource("Cannot handle {}".format(source))
32
+
33
+        repo = Repo.clone_from(source, dest)
34
+        repo.git.checkout(branch)
35
+
36
+    def install(self, source, branch="master"):
37
+        url_parts = self.parse_url(source)
38
+        branch_name = url_parts.path.strip("/").split("/")[-1]
39
+        dest_dir = os.path.join(os.environ.get('CHARM_DIR'), "fetched",
40
+                                branch_name)
41
+        if not os.path.exists(dest_dir):
42
+            mkdir(dest_dir, perms=0o755)
43
+        try:
44
+            self.clone(source, dest_dir, branch)
45
+        except OSError as e:
46
+            raise UnhandledSource(e.strerror)
47
+        return dest_dir
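The git handler is driven the same way (repository URL and branch are illustrative; http, https and git schemes are accepted, ssh is not yet):

    from charmhelpers.fetch.giturl import GitUrlFetchHandler

    handler = GitUrlFetchHandler()
    # Clones into $CHARM_DIR/fetched/<last path segment> and checks out the branch
    dest = handler.install('https://github.com/example/some-repo', branch='stable')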

hooks/charmhelpers/payload/__init__.py

1
--- 
2
+++ hooks/charmhelpers/payload/__init__.py
3
@@ -0,0 +1 @@
4
+"Tools for working with files injected into a charm just before deployment."

hooks/charmhelpers/payload/execd.py

 1
--- 
 2
+++ hooks/charmhelpers/payload/execd.py
 3
@@ -0,0 +1,50 @@
 4
+#!/usr/bin/env python
 5
+
 6
+import os
 7
+import sys
 8
+import subprocess
 9
+from charmhelpers.core import hookenv
10
+
11
+
12
+def default_execd_dir():
13
+    return os.path.join(os.environ['CHARM_DIR'], 'exec.d')
14
+
15
+
16
+def execd_module_paths(execd_dir=None):
17
+    """Generate a list of full paths to modules within execd_dir."""
18
+    if not execd_dir:
19
+        execd_dir = default_execd_dir()
20
+
21
+    if not os.path.exists(execd_dir):
22
+        return
23
+
24
+    for subpath in os.listdir(execd_dir):
25
+        module = os.path.join(execd_dir, subpath)
26
+        if os.path.isdir(module):
27
+            yield module
28
+
29
+
30
+def execd_submodule_paths(command, execd_dir=None):
31
+    """Generate a list of full paths to the specified command within exec_dir.
32
+    """
33
+    for module_path in execd_module_paths(execd_dir):
34
+        path = os.path.join(module_path, command)
35
+        if os.access(path, os.X_OK) and os.path.isfile(path):
36
+            yield path
37
+
38
+
39
+def execd_run(command, execd_dir=None, die_on_error=False, stderr=None):
40
+    """Run command for each module within execd_dir which defines it."""
41
+    for submodule_path in execd_submodule_paths(command, execd_dir):
42
+        try:
43
+            subprocess.check_call(submodule_path, shell=True, stderr=stderr)
44
+        except subprocess.CalledProcessError as e:
45
+            hookenv.log("Error ({}) running  {}. Output: {}".format(
46
+                e.returncode, e.cmd, e.output))
47
+            if die_on_error:
48
+                sys.exit(e.returncode)
49
+
50
+
51
+def execd_preinstall(execd_dir=None):
52
+    """Run charm-pre-install for each module within execd_dir."""
53
+    execd_run('charm-pre-install', execd_dir=execd_dir)
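The charm hooks below call execd_preinstall() at install time; a sketch of the layout it acts on (directory names are illustrative):

    from charmhelpers.payload.execd import execd_preinstall

    # $CHARM_DIR/exec.d/<any-subdir>/charm-pre-install   <- must be an executable file
    # execd_preinstall() runs every such executable via execd_run(), which logs
    # a non-zero exit code and only aborts when die_on_error is set.
    execd_preinstall()  # equivalent to execd_run('charm-pre-install')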

hooks/client-relation-departed

  1
--- 
  2
+++ hooks/client-relation-departed
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

hooks/client-relation-joined

  1
--- 
  2
+++ hooks/client-relation-joined
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

hooks/config-changed

  1
--- 
  2
+++ hooks/config-changed
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

hooks/data-relation-changed

  1
--- 
  2
+++ hooks/data-relation-changed
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

hooks/data-relation-departed

  1
--- 
  2
+++ hooks/data-relation-departed
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

hooks/data-relation-joined

  1
--- 
  2
+++ hooks/data-relation-joined
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

hooks/hooks.py

  1
--- 
  2
+++ hooks/hooks.py
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

hooks/install

  1
--- 
  2
+++ hooks/install
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

hooks/logs-relation-joined

  1
--- 
  2
+++ hooks/logs-relation-joined
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

hooks/nrpe-external-master-relation-changed

  1
--- 
  2
+++ hooks/nrpe-external-master-relation-changed
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

hooks/peer-relation-changed

  1
--- 
  2
+++ hooks/peer-relation-changed
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

hooks/peer-relation-departed

  1
--- 
  2
+++ hooks/peer-relation-departed
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)
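
The data relation above is a two-step handshake: the unit advertises the mountpoint it wants, and once the storage side echoes the same mountpoint back, migrate_to_mount() copies /var/lib/elasticsearch onto the mount and replaces the old directory with a symlink. A minimal sketch of the pre-migration guards, handy for exercising them outside a hook run (the helper name is illustrative, not part of the charm):

import os

def needs_migration(new_path, old_path='/var/lib/elasticsearch'):
    """Mirror the guards in migrate_to_mount() without touching services."""
    if os.path.islink(old_path):
        # A symlinked data dir means a previous hook run already migrated.
        return False
    leftovers = [name for name in os.listdir(new_path) if name != 'lost+found']
    if leftovers:
        # Refuse to overwrite unexpected data; the operator migrates by hand.
        raise RuntimeError('Persistent storage contains old data: %s' % leftovers)
    return True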

hooks/peer-relation-joined

  1
--- 
  2
+++ hooks/peer-relation-joined
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

hooks/start

  1
--- 
  2
+++ hooks/start
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

hooks/stop

  1
--- 
  2
+++ hooks/stop
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

hooks/upgrade-charm

  1
--- 
  2
+++ hooks/upgrade-charm
  3
@@ -0,0 +1,101 @@
  4
+#!/usr/bin/env python
  5
+"""Setup hooks for the elasticsearch charm."""
  6
+
  7
+import sys
  8
+import charmhelpers.contrib.ansible
  9
+import charmhelpers.payload.execd
 10
+import charmhelpers.core.host
 11
+from charmhelpers.core import hookenv
 12
+import os
 13
+import shutil
 14
+
 15
+mountpoint = '/srv/elasticsearch'
 16
+
 17
+hooks = charmhelpers.contrib.ansible.AnsibleHooks(
 18
+    playbook_path='playbook.yaml',
 19
+    default_hooks=[
 20
+        'config-changed',
 21
+        'cluster-relation-joined',
 22
+        'logs-relation-joined',
 23
+        'data-relation-joined',
 24
+        'data-relation-changed',
 25
+        'data-relation-departed',
 26
+        'data-relation-broken',
 27
+        'peer-relation-joined',
 28
+        'peer-relation-changed',
 29
+        'peer-relation-departed',
 30
+        'nrpe-external-master-relation-changed',
 31
+        'rest-relation-joined',
 32
+        'start',
 33
+        'stop',
 34
+        'upgrade-charm',
 35
+        'client-relation-joined',
 36
+        'client-relation-departed',
 37
+    ])
 38
+
 39
+
 40
+@hooks.hook('install', 'upgrade-charm')
 41
+def install():
 42
+    """Install ansible before running the tasks tagged with 'install'."""
 43
+    # Allow charm users to run preinstall setup.
 44
+    charmhelpers.payload.execd.execd_preinstall()
 45
+    charmhelpers.contrib.ansible.install_ansible_support(
 46
+        from_ppa=False)
 47
+
 48
+    # We copy the backported ansible modules here because they need to be
 49
+    # in place by the time ansible runs any hook.
 50
+    charmhelpers.core.host.rsync(
 51
+        'ansible_module_backports',
 52
+        '/usr/share/ansible')
 53
+
 54
+
 55
+@hooks.hook('data-relation-joined', 'data-relation-changed')
 56
+def data_relation():
 57
+    if hookenv.relation_get('mountpoint') == mountpoint:
 58
+        # Other side of relation is ready
 59
+        migrate_to_mount(mountpoint)
 60
+    else:
 61
+        # Other side not ready yet, provide mountpoint
 62
+        hookenv.log('Requesting storage for {}'.format(mountpoint))
 63
+        hookenv.relation_set(mountpoint=mountpoint)
 64
+
 65
+
 66
+@hooks.hook('data-relation-departed', 'data-relation-broken')
 67
+def data_relation_gone():
 68
+    hookenv.log('Data relation no longer present, stopping elasticsearch.')
 69
+    charmhelpers.core.host.service_stop('elasticsearch')
 70
+
 71
+
 72
+def migrate_to_mount(new_path):
 73
+    """Invoked when new mountpoint appears. This function safely migrates
 74
+    elasticsearch data from local disk to persistent storage (only if needed)
 75
+    """
 76
+    old_path = '/var/lib/elasticsearch'
 77
+    if os.path.islink(old_path):
 78
+        hookenv.log('{} is already a symlink, skipping migration'.format(
 79
+            old_path))
 80
+        return True
 81
+    # Ensure our new mountpoint is empty. Otherwise error and allow
 82
+    # users to investigate and migrate manually
 83
+    files = os.listdir(new_path)
 84
+    try:
 85
+        files.remove('lost+found')
 86
+    except ValueError:
 87
+        pass
 88
+    if files:
 89
+        raise RuntimeError('Persistent storage contains old data. '
 90
+                           'Please investigate and migrate data manually '
 91
+                           'to: {}'.format(new_path))
 92
+    os.chmod(new_path, 0o700)
 93
+    charmhelpers.core.host.service_stop('elasticsearch')
 94
+    # Ensure we have trailing slashes
 95
+    charmhelpers.core.host.rsync(os.path.join(old_path, ''),
 96
+                                 os.path.join(new_path, ''),
 97
+                                 options=['--archive'])
 98
+    shutil.rmtree(old_path)
 99
+    os.symlink(new_path, old_path)
100
+    charmhelpers.core.host.service_start('elasticsearch')
101
+
102
+
103
+if __name__ == "__main__":
104
+    hooks.execute(sys.argv)

icon.svg

  1
--- 
  2
+++ icon.svg
  3
@@ -0,0 +1,402 @@
  4
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
  5
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
  6
+
  7
+<svg
  8
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
  9
+   xmlns:cc="http://creativecommons.org/ns#"
 10
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
 11
+   xmlns:svg="http://www.w3.org/2000/svg"
 12
+   xmlns="http://www.w3.org/2000/svg"
 13
+   xmlns:xlink="http://www.w3.org/1999/xlink"
 14
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
 15
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
 16
+   width="96"
 17
+   height="96"
 18
+   id="svg6517"
 19
+   version="1.1"
 20
+   inkscape:version="0.91 r13725"
 21
+   sodipodi:docname="elasticsearch_circle.svg"
 22
+   viewBox="0 0 96 96">
 23
+  <defs
 24
+     id="defs6519">
 25
+    <linearGradient
 26
+       id="Background">
 27
+      <stop
 28
+         id="stop4178"
 29
+         offset="0"
 30
+         style="stop-color:#22779e;stop-opacity:1" />
 31
+      <stop
 32
+         id="stop4180"
 33
+         offset="1"
 34
+         style="stop-color:#2991c0;stop-opacity:1" />
 35
+    </linearGradient>
 36
+    <filter
 37
+       style="color-interpolation-filters:sRGB"
 38
+       inkscape:label="Inner Shadow"
 39
+       id="filter1121">
 40
+      <feFlood
 41
+         flood-opacity="0.59999999999999998"
 42
+         flood-color="rgb(0,0,0)"
 43
+         result="flood"
 44
+         id="feFlood1123" />
 45
+      <feComposite
 46
+         in="flood"
 47
+         in2="SourceGraphic"
 48
+         operator="out"
 49
+         result="composite1"
 50
+         id="feComposite1125" />
 51
+      <feGaussianBlur
 52
+         in="composite1"
 53
+         stdDeviation="1"
 54
+         result="blur"
 55
+         id="feGaussianBlur1127" />
 56
+      <feOffset
 57
+         dx="0"
 58
+         dy="2"
 59
+         result="offset"
 60
+         id="feOffset1129" />
 61
+      <feComposite
 62
+         in="offset"
 63
+         in2="SourceGraphic"
 64
+         operator="atop"
 65
+         result="composite2"
 66
+         id="feComposite1131" />
 67
+    </filter>
 68
+    <filter
 69
+       style="color-interpolation-filters:sRGB"
 70
+       inkscape:label="Drop Shadow"
 71
+       id="filter950">
 72
+      <feFlood
 73
+         flood-opacity="0.25"
 74
+         flood-color="rgb(0,0,0)"
 75
+         result="flood"
 76
+         id="feFlood952" />
 77
+      <feComposite
 78
+         in="flood"
 79
+         in2="SourceGraphic"
 80
+         operator="in"
 81
+         result="composite1"
 82
+         id="feComposite954" />
 83
+      <feGaussianBlur
 84
+         in="composite1"
 85
+         stdDeviation="1"
 86
+         result="blur"
 87
+         id="feGaussianBlur956" />
 88
+      <feOffset
 89
+         dx="0"
 90
+         dy="1"
 91
+         result="offset"
 92
+         id="feOffset958" />
 93
+      <feComposite
 94
+         in="SourceGraphic"
 95
+         in2="offset"
 96
+         operator="over"
 97
+         result="composite2"
 98
+         id="feComposite960" />
 99
+    </filter>
100
+    <clipPath
101
+       clipPathUnits="userSpaceOnUse"
102
+       id="clipPath873">
103
+      <g
104
+         transform="matrix(0,-0.66666667,0.66604479,0,-258.25992,677.00001)"
105
+         id="g875"
106
+         inkscape:label="Layer 1"
107
+         style="display:inline;fill:#ff00ff;fill-opacity:1;stroke:none">
108
+        <path
109
+           style="display:inline;fill:#ff00ff;fill-opacity:1;stroke:none"
110
+           d="M 46.702703,898.22775 H 97.297297 C 138.16216,898.22775 144,904.06497 144,944.92583 v 50.73846 c 0,40.86071 -5.83784,46.69791 -46.702703,46.69791 H 46.702703 C 5.8378378,1042.3622 0,1036.525 0,995.66429 v -50.73846 c 0,-40.86086 5.8378378,-46.69808 46.702703,-46.69808 z"
111
+           id="path877"
112
+           inkscape:connector-curvature="0"
113
+           sodipodi:nodetypes="sssssssss" />
114
+      </g>
115
+    </clipPath>
116
+    <style
117
+       id="style867"
118
+       type="text/css"><![CDATA[
119
+    .fil0 {fill:#1F1A17}
120
+   ]]></style>
121
+    <clipPath
122
+       id="clipPath16">
123
+      <path
124
+         id="path18"
125
+         d="M -9,-9 H 605 V 222 H -9 Z"
126
+         inkscape:connector-curvature="0" />
127
+    </clipPath>
128
+    <clipPath
129
+       id="clipPath116">
130
+      <path
131
+         id="path118"
132
+         d="m 91.7368,146.3253 -9.7039,-1.577 -8.8548,-3.8814 -7.5206,-4.7308 -7.1566,-8.7335 -4.0431,-4.282 -3.9093,-1.4409 -1.034,2.5271 1.8079,2.6096 0.4062,3.6802 1.211,-0.0488 1.3232,-1.2069 -0.3569,3.7488 -1.4667,0.9839 0.0445,1.4286 -3.4744,-1.9655 -3.1462,-3.712 -0.6559,-3.3176 1.3453,-2.6567 1.2549,-4.5133 2.5521,-1.2084 2.6847,0.1318 2.5455,1.4791 -1.698,-8.6122 1.698,-9.5825 -1.8692,-4.4246 -6.1223,-6.5965 1.0885,-3.941 2.9002,-4.5669 5.4688,-3.8486 2.9007,-0.3969 3.225,-0.1094 -2.012,-8.2601 7.3993,-3.0326 9.2188,-1.2129 3.1535,2.0619 0.2427,5.5797 3.5178,5.8224 0.2426,4.6094 8.4909,-0.6066 7.8843,0.7279 -7.8843,-4.7307 1.3343,-5.701 4.9731,-7.763 4.8521,-2.0622 3.8814,1.5769 1.577,3.1538 8.1269,6.1861 1.5769,-1.3343 12.7363,-0.485 2.5473,2.0619 0.2426,3.6391 -0.849,1.5767 -0.6066,9.8251 -4.2454,8.4909 0.7276,3.7605 2.5475,-1.3343 7.1566,-6.6716 3.5175,-0.2424 3.8815,1.5769 3.8818,2.9109 1.9406,6.3077 11.4021,-0.7277 6.914,2.6686 5.5797,5.2157 4.0028,7.5206 0.9706,8.8546 -0.8493,10.3105 -2.1832,9.2185 -2.1836,2.9112 -3.0322,0.9706 -5.3373,-5.8224 -4.8518,-1.6982 -4.2455,7.0353 -4.2454,3.8815 -2.3049,1.4556 -9.2185,7.6419 -7.3993,4.0028 -7.3993,0.6066 -8.6119,-1.4556 -7.5206,-2.7899 -5.2158,-4.2454 -4.1241,-4.9734 -4.2454,-1.2129"
133
+         inkscape:connector-curvature="0" />
134
+    </clipPath>
135
+    <clipPath
136
+       id="clipPath128">
137
+      <path
138
+         id="path130"
139
+         d="m 91.7368,146.3253 -9.7039,-1.577 -8.8548,-3.8814 -7.5206,-4.7308 -7.1566,-8.7335 -4.0431,-4.282 -3.9093,-1.4409 -1.034,2.5271 1.8079,2.6096 0.4062,3.6802 1.211,-0.0488 1.3232,-1.2069 -0.3569,3.7488 -1.4667,0.9839 0.0445,1.4286 -3.4744,-1.9655 -3.1462,-3.712 -0.6559,-3.3176 1.3453,-2.6567 1.2549,-4.5133 2.5521,-1.2084 2.6847,0.1318 2.5455,1.4791 -1.698,-8.6122 1.698,-9.5825 -1.8692,-4.4246 -6.1223,-6.5965 1.0885,-3.941 2.9002,-4.5669 5.4688,-3.8486 2.9007,-0.3969 3.225,-0.1094 -2.012,-8.2601 7.3993,-3.0326 9.2188,-1.2129 3.1535,2.0619 0.2427,5.5797 3.5178,5.8224 0.2426,4.6094 8.4909,-0.6066 7.8843,0.7279 -7.8843,-4.7307 1.3343,-5.701 4.9731,-7.763 4.8521,-2.0622 3.8814,1.5769 1.577,3.1538 8.1269,6.1861 1.5769,-1.3343 12.7363,-0.485 2.5473,2.0619 0.2426,3.6391 -0.849,1.5767 -0.6066,9.8251 -4.2454,8.4909 0.7276,3.7605 2.5475,-1.3343 7.1566,-6.6716 3.5175,-0.2424 3.8815,1.5769 3.8818,2.9109 1.9406,6.3077 11.4021,-0.7277 6.914,2.6686 5.5797,5.2157 4.0028,7.5206 0.9706,8.8546 -0.8493,10.3105 -2.1832,9.2185 -2.1836,2.9112 -3.0322,0.9706 -5.3373,-5.8224 -4.8518,-1.6982 -4.2455,7.0353 -4.2454,3.8815 -2.3049,1.4556 -9.2185,7.6419 -7.3993,4.0028 -7.3993,0.6066 -8.6119,-1.4556 -7.5206,-2.7899 -5.2158,-4.2454 -4.1241,-4.9734 -4.2454,-1.2129"
140
+         inkscape:connector-curvature="0" />
141
+    </clipPath>
142
+    <linearGradient
143
+       id="linearGradient3850"
144
+       inkscape:collect="always">
145
+      <stop
146
+         id="stop3852"
147
+         offset="0"
148
+         style="stop-color:#000000;stop-opacity:1;" />
149
+      <stop
150
+         id="stop3854"
151
+         offset="1"
152
+         style="stop-color:#000000;stop-opacity:0;" />
153
+    </linearGradient>
154
+    <clipPath
155
+       clipPathUnits="userSpaceOnUse"
156
+       id="clipPath3095">
157
+      <path
158
+         d="M 976.648,389.551 H 134.246 V 1229.55 H 976.648 V 389.551"
159
+         id="path3097"
160
+         inkscape:connector-curvature="0" />
161
+    </clipPath>
162
+    <clipPath
163
+       clipPathUnits="userSpaceOnUse"
164
+       id="clipPath3195">
165
+      <path
166
+         d="m 611.836,756.738 -106.34,105.207 c -8.473,8.289 -13.617,20.102 -13.598,33.379 L 598.301,790.207 c -0.031,-13.418 5.094,-25.031 13.535,-33.469"
167
+         id="path3197"
168
+         inkscape:connector-curvature="0" />
169
+    </clipPath>
170
+    <clipPath
171
+       clipPathUnits="userSpaceOnUse"
172
+       id="clipPath3235">
173
+      <path
174
+         d="m 1095.64,1501.81 c 35.46,-35.07 70.89,-70.11 106.35,-105.17 4.4,-4.38 7.11,-10.53 7.11,-17.55 l -106.37,105.21 c 0,7 -2.71,13.11 -7.09,17.51"
175
+         id="path3237"
176
+         inkscape:connector-curvature="0" />
177
+    </clipPath>
178
+    <clipPath
179
+       id="clipPath4591"
180
+       clipPathUnits="userSpaceOnUse">
181
+      <path
182
+         inkscape:connector-curvature="0"
183
+         d="m 1106.6009,730.43734 -0.036,21.648 c -0.01,3.50825 -2.8675,6.61375 -6.4037,6.92525 l -83.6503,7.33162 c -3.5205,0.30763 -6.3812,-2.29987 -6.3671,-5.8145 l 0.036,-21.6475 20.1171,-1.76662 -0.011,4.63775 c 0,1.83937 1.4844,3.19925 3.3262,3.0395 l 49.5274,-4.33975 c 1.8425,-0.166 3.3425,-1.78125 3.3538,-3.626 l 0.01,-4.63025 20.1,-1.7575"
184
+         style="fill:#ff00ff;fill-opacity:1;fill-rule:nonzero;stroke:none"
185
+         id="path4593" />
186
+    </clipPath>
187
+    <radialGradient
188
+       gradientUnits="userSpaceOnUse"
189
+       gradientTransform="matrix(-1.4333926,-2.2742838,1.1731823,-0.73941125,-174.08025,98.374394)"
190
+       r="20.40658"
191
+       fy="93.399292"
192
+       fx="-26.508606"
193
+       cy="93.399292"
194
+       cx="-26.508606"
195
+       id="radialGradient3856"
196
+       xlink:href="#linearGradient3850"
197
+       inkscape:collect="always" />
198
+    <linearGradient
199
+       gradientTransform="translate(-318.48033,212.32022)"
200
+       gradientUnits="userSpaceOnUse"
201
+       y2="993.19702"
202
+       x2="-51.879555"
203
+       y1="593.11615"
204
+       x1="348.20132"
205
+       id="linearGradient3895"
206
+       xlink:href="#linearGradient3850"
207
+       inkscape:collect="always" />
208
+    <clipPath
209
+       id="clipPath3906"
210
+       clipPathUnits="userSpaceOnUse">
211
+      <rect
212
+         transform="scale(1,-1)"
213
+         style="color:#000000;display:inline;overflow:visible;visibility:visible;opacity:0.8;fill:#ff00ff;stroke:none;stroke-width:4;marker:none;enable-background:accumulate"
214
+         id="rect3908"
215
+         width="1019.1371"
216
+         height="1019.1371"
217
+         x="357.9816"
218
+         y="-1725.8152" />
219
+    </clipPath>
220
+  </defs>
221
+  <sodipodi:namedview
222
+     id="base"
223
+     pagecolor="#ffffff"
224
+     bordercolor="#666666"
225
+     borderopacity="1.0"
226
+     inkscape:pageopacity="0.0"
227
+     inkscape:pageshadow="2"
228
+     inkscape:zoom="9.9475976"
229
+     inkscape:cx="-62.367955"
230
+     inkscape:cy="56.688968"
231
+     inkscape:document-units="px"
232
+     inkscape:current-layer="layer1"
233
+     showgrid="true"
234
+     fit-margin-top="0"
235
+     fit-margin-left="0"
236
+     fit-margin-right="0"
237
+     fit-margin-bottom="0"
238
+     inkscape:window-width="1920"
239
+     inkscape:window-height="1029"
240
+     inkscape:window-x="0"
241
+     inkscape:window-y="24"
242
+     inkscape:window-maximized="1"
243
+     showborder="true"
244
+     showguides="false"
245
+     inkscape:guide-bbox="true"
246
+     inkscape:showpageshadow="false"
247
+     inkscape:snap-global="true"
248
+     inkscape:snap-bbox="true"
249
+     inkscape:bbox-paths="true"
250
+     inkscape:bbox-nodes="true"
251
+     inkscape:snap-bbox-edge-midpoints="true"
252
+     inkscape:snap-bbox-midpoints="true"
253
+     inkscape:object-paths="true"
254
+     inkscape:snap-intersection-paths="true"
255
+     inkscape:object-nodes="true"
256
+     inkscape:snap-smooth-nodes="true"
257
+     inkscape:snap-midpoints="true"
258
+     inkscape:snap-object-midpoints="true"
259
+     inkscape:snap-center="true"
260
+     inkscape:snap-nodes="true"
261
+     inkscape:snap-others="true"
262
+     inkscape:snap-page="true">
263
+    <inkscape:grid
264
+       type="xygrid"
265
+       id="grid821" />
266
+    <sodipodi:guide
267
+       orientation="1,0"
268
+       position="16,48"
269
+       id="guide823"
270
+       inkscape:locked="false" />
271
+    <sodipodi:guide
272
+       orientation="0,1"
273
+       position="64,80"
274
+       id="guide825"
275
+       inkscape:locked="false" />
276
+    <sodipodi:guide
277
+       orientation="1,0"
278
+       position="80,40"
279
+       id="guide827"
280
+       inkscape:locked="false" />
281
+    <sodipodi:guide
282
+       orientation="0,1"
283
+       position="64,16"
284
+       id="guide829"
285
+       inkscape:locked="false" />
286
+  </sodipodi:namedview>
287
+  <metadata
288
+     id="metadata6522">
289
+    <rdf:RDF>
290
+      <cc:Work
291
+         rdf:about="">
292
+        <dc:format>image/svg+xml</dc:format>
293
+        <dc:type
294
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
295
+        <dc:title />
296
+      </cc:Work>
297
+    </rdf:RDF>
298
+  </metadata>
299
+  <g
300
+     inkscape:label="BACKGROUND"
301
+     inkscape:groupmode="layer"
302
+     id="layer1"
303
+     transform="translate(268,-635.29076)"
304
+     style="display:inline">
305
+    <path
306
+       style="display:inline;fill:#3b3b3b;fill-opacity:1;stroke:none"
307
+       d="M 48 0 A 48 48 0 0 0 0 48 A 48 48 0 0 0 48 96 A 48 48 0 0 0 96 48 A 48 48 0 0 0 48 0 z "
308
+       transform="translate(-268,635.29076)"
309
+       id="path6455" />
310
+    <g
311
+       id="g4357"
312
+       transform="matrix(0.80919285,0,0,0.80919285,-42.567527,130.37676)"
313
+       inkscape:transform-center-x="2.940766">
314
+      <g
315
+         id="g4289"
316
+         transform="matrix(0.41415979,0,0,0.41415979,-257.57951,644.98098)">
317
+        <g
318
+           id="g4291">
319
+          <defs
320
+             id="defs4293">
321
+            <circle
322
+               r="92.5"
323
+               cy="92.5"
324
+               cx="92.5"
325
+               id="SVGID_1_" />
326
+          </defs>
327
+          <clipPath
328
+             id="SVGID_2_">
329
+            <use
330
+               id="use4297"
331
+               style="overflow:visible"
332
+               xlink:href="#SVGID_1_"
333
+               x="0"
334
+               y="0"
335
+               width="100%"
336
+               height="100%" />
337
+          </clipPath>
338
+          <path
339
+             clip-path="url(#SVGID_2_)"
340
+             id="path4299"
341
+             d="M 132.1,52 5.8,52 C 2.6,52 0,49.4 0,46.2 L 0,5.8 C 0,2.6 2.6,0 5.8,0 l 164.7,0 c 3.2,0 5.8,2.6 5.8,5.8 l 0,2.1 C 176.2,32.2 156.4,52 132.1,52 Z"
342
+             class="st0"
343
+             inkscape:connector-curvature="0"
344
+             style="fill:#efbf1b" />
345
+          <path
346
+             clip-path="url(#SVGID_2_)"
347
+             id="path4301"
348
+             d="M 176.6,185 0.5,185 c 0,0 -0.1,0 -0.1,-0.1 l 0,-51.9 c 0,0 0,-0.1 0.1,-0.1 l 132.1,0 c 24.3,0 44,19.7 44,44 l 0,8.1 c 0.1,0 0,0 0,0 z"
349
+             class="st1"
350
+             inkscape:connector-curvature="0"
351
+             style="fill:#40beb0" />
352
+          <path
353
+             clip-path="url(#SVGID_2_)"
354
+             id="path4303"
355
+             d="m 121.6,118.5 -130.8,0 0,-52 130.9,0 c 14.4,0 26,11.7 26,26 l 0,0 c -0.1,14.4 -11.7,26 -26.1,26 z"
356
+             class="st2"
357
+             inkscape:connector-curvature="0"
358
+             style="fill:#0aa5de" />
359
+          <path
360
+             clip-path="url(#SVGID_2_)"
361
+             id="path4305"
362
+             d="m 80.9,66.5 -85.4,0 0,52 85.6,0 c 1.9,-7.8 3,-16.5 3,-26 0,-9.6 -1.2,-18.2 -3.2,-26 z"
363
+             class="st3"
364
+             inkscape:connector-curvature="0"
365
+             style="fill:#ffffff" />
366
+        </g>
367
+        <path
368
+           id="path4307"
369
+           d="M 46,12.5 C 30.2,21.7 17.4,35.4 9.3,51.9 l 68.3,0 C 70.5,36.3 59.6,22.7 46,12.5 Z"
370
+           class="st4"
371
+           inkscape:connector-curvature="0"
372
+           style="fill:#d7a229" />
373
+        <path
374
+           id="path4309"
375
+           d="M 48.7,173.9 C 62,163.1 72.6,149.1 79.2,133 l -69.9,0 c 8.6,17.4 22.4,31.8 39.4,40.9 z"
376
+           class="st5"
377
+           inkscape:connector-curvature="0"
378
+           style="fill:#009b8f" />
379
+      </g>
380
+    </g>
381
+  </g>
382
+  <g
383
+     inkscape:groupmode="layer"
384
+     id="layer3"
385
+     inkscape:label="PLACE YOUR PICTOGRAM HERE"
386
+     style="display:inline">
387
+    <g
388
+       id="g4185" />
389
+  </g>
390
+  <style
391
+     id="style4217"
392
+     type="text/css">
393
+	.st0{fill:#419EDA;}
394
+</style>
395
+  <style
396
+     id="style4285"
397
+     type="text/css">
398
+	.st0{clip-path:url(#SVGID_2_);fill:#EFBF1B;}
399
+	.st1{clip-path:url(#SVGID_2_);fill:#40BEB0;}
400
+	.st2{clip-path:url(#SVGID_2_);fill:#0AA5DE;}
401
+	.st3{clip-path:url(#SVGID_2_);fill:#231F20;}
402
+	.st4{fill:#D7A229;}
403
+	.st5{fill:#009B8F;}
404
+</style>
405
+</svg>

lookup_plugins/dns.py

 1
--- 
 2
+++ lookup_plugins/dns.py
 3
@@ -0,0 +1,37 @@
 4
+# Copyright 2013 Dale Sedivec
 5
+#
 6
+# This program is free software: you can redistribute it and/or modify
 7
+# it under the terms of the GNU General Public License as published by
 8
+# the Free Software Foundation, either version 3 of the License, or
 9
+# (at your option) any later version.
10
+#
11
+# This program is distributed in the hope that it will be useful,
12
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
13
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
14
+# GNU General Public License for more details.
15
+#
16
+# You should have received a copy of the GNU General Public License
17
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
18
+
19
+
20
+import socket
21
+
22
+from ansible import utils, errors
23
+
24
+
25
+class LookupModule (object):
26
+    def __init__(self, basedir=None, **kwargs):
27
+        self.basedir = basedir
28
+
29
+    def run(self, terms, inject=None, **kwargs):
30
+        terms = utils.listify_lookup_plugin_terms(terms, self.basedir, inject)
31
+        if isinstance(terms, basestring):
32
+            terms = [terms]
33
+        ret = []
34
+        for term in terms:
35
+            try:
36
+                ret.append(socket.gethostbyname(term))
37
+            except socket.error, ex:
38
+                raise errors.AnsibleError("exception resolving %r" % (term,),
39
+                                          ex)
40
+        return ret
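
This lookup plugin is written for Ansible 1.x on Python 2 (basestring, the old "except socket.error, ex" syntax); tasks/setup-ufw.yml uses it to resolve each relation's private-address before writing ufw rules. A rough Python 3 equivalent against the newer lookup API could look like the sketch below (an assumption about a future Ansible 2.x move, not part of this diff):

import socket

from ansible.errors import AnsibleError
from ansible.plugins.lookup import LookupBase


class LookupModule(LookupBase):
    def run(self, terms, variables=None, **kwargs):
        resolved = []
        for term in terms:
            try:
                resolved.append(socket.gethostbyname(term))
            except OSError as exc:  # socket.error is an alias of OSError on py3
                raise AnsibleError('exception resolving %r: %s' % (term, exc))
        return resolved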

metadata.yaml

 1
--- 
 2
+++ metadata.yaml
 3
@@ -0,0 +1,25 @@
 4
+name: elasticsearch
 5
+summary: Open Source, Distributed, RESTful, Search Engine built on Apache Lucene
 6
+maintainer: Michael Nelson <michael.nelson@canonical.com>
 7
+description: |
 8
+  Distributed RESTful search and analytics
 9
+  Read more at http://www.elasticsearch.org
10
+tags:
11
+  - misc
12
+subordinate: false
13
+series:
14
+  - trusty
15
+peers:
16
+  peer:
17
+    interface: http
18
+provides:
19
+  client:
20
+    interface: elasticsearch
21
+  nrpe-external-master:
22
+     interface: nrpe-external-master
23
+     scope: container
24
+  logs:
25
+    interface: logs
26
+  data:
27
+    interface: block-storage
28
+    scope: container

playbook.yaml

 1
--- 
 2
+++ playbook.yaml
 3
@@ -0,0 +1,95 @@
 4
+- hosts: localhost
 5
+  roles:
 6
+    - role: nrpe
 7
+      check_name: check_http
 8
+      check_params: -H localhost -u /_cluster/health -p 9200 -w 2 -c 3 -s green
 9
+      service_description: "Verify the cluster health is green."
10
+
11
+  handlers:
12
+
13
+    - name: Restart ElasticSearch
14
+      service: name=elasticsearch state=restarted
15
+
16
+  vars:
17
+    - service_name: "{{ local_unit.split('/')[0] }}"
18
+
19
+  tasks:
20
+
21
+    - include: tasks/install-elasticsearch.yml
22
+    - include: tasks/peer-relations.yml
23
+    - include: tasks/setup-ufw.yml
24
+      tags:
25
+        - install
26
+        - upgrade-charm
27
+        - config-changed
28
+        - client-relation-joined
29
+        - client-relation-departed
30
+        - peer-relation-joined
31
+        - peer-relation-departed
32
+
33
+    - name: Update configuration
34
+      tags:
35
+        - config-changed
36
+      template: src={{ charm_dir }}/templates/elasticsearch.yml
37
+                dest=/etc/elasticsearch/elasticsearch.yml
38
+                mode=0644
39
+                backup=yes
40
+      notify:
41
+        - Restart ElasticSearch
42
+
43
+    - name: Open ES Port when exposed
44
+      command: open-port 9200
45
+      tags:
46
+        - start
47
+
48
+    - name: Start ElasticSearch
49
+      service: name=elasticsearch state=started
50
+      tags:
51
+        - start
52
+
53
+    - name: Stop ElasticSearch
54
+      service: name=elasticsearch state=stopped
55
+      tags:
56
+        - stop
57
+
58
+    - name: Relate the cluster name and host.
59
+      tags:
60
+        - client-relation-joined
61
+      command: >
62
+        relation-set
63
+        cluster-name={{ cluster_name }}
64
+        host={{ ansible_default_ipv4.address }}
65
+        port=9200
66
+
67
+    - name: Relate logs
68
+      tags:
69
+        - logs-relation-joined
70
+      command: >
71
+        relation-set
72
+        file=/var/log/elasticsearch/elasticsearch.log
73
+        type=elasticsearch
74
+
75
+    # A bug in the ansible hooks() helper requires at least
76
+    # one task to be tagged.
77
+    - name: Empty task to keep ansible helper satisfied.
78
+      debug: msg="Noop ansible task."
79
+      tags:
80
+        - data-relation-joined
81
+        - data-relation-changed
82
+        - data-relation-departed
83
+        - data-relation-broken
84
+
85
+    # A bug in the ansible hooks() helper requires at least
86
+    # one task to be tagged.
87
+    - name: Set exit user messaging.
88
+      command: >
89
+        status-set
90
+        active
91
+        Ready
92
+      tags:
93
+        - client-relation-joined
94
+        - client-relation-departed
95
+        - peer-relation-joined
96
+        - peer-relation-departed
97
+        - start
98
+
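
The playbook is driven by the AnsibleHooks helper, which (as assumed here) runs it with the current hook name as a tag, so each hook executes only the tasks tagged for it plus the tagged includes at the top. An illustrative selection over made-up task data:

def tasks_for_hook(tasks, hook_name):
    """Return the task names a given hook would trigger."""
    return [task['name'] for task in tasks if hook_name in task.get('tags', [])]

playbook_tasks = [
    {'name': 'Update configuration', 'tags': ['config-changed']},
    {'name': 'Open ES Port when exposed', 'tags': ['start']},
    {'name': 'Start ElasticSearch', 'tags': ['start']},
    {'name': 'Stop ElasticSearch', 'tags': ['stop']},
]

print(tasks_for_hook(playbook_tasks, 'start'))
# ['Open ES Port when exposed', 'Start ElasticSearch']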

revision

1
--- 
2
+++ revision
3
@@ -0,0 +1 @@
4
+0

roles/nrpe/defaults/main.yml

1
--- 
2
+++ roles/nrpe/defaults/main.yml
3
@@ -0,0 +1,3 @@
4
+---
5
+plugin_dir: /usr/lib/nagios/plugins
6
+service_description:

roles/nrpe/tasks/main.yml

 1
--- 
 2
+++ roles/nrpe/tasks/main.yml
 3
@@ -0,0 +1,28 @@
 4
+- name: Write nagios check command config.
 5
+  tags:
 6
+    - nrpe-external-master-relation-changed
 7
+  template:
 8
+    src: "check_name.cfg.jinja2"
 9
+    dest: "/etc/nagios/nrpe.d/{{ check_name }}.cfg"
10
+    owner: nagios
11
+    group: nagios
12
+    mode: 0644
13
+  when: "'nagios_hostname' in current_relation"
14
+
15
+- name: Write nagios check service definition for export.
16
+  tags:
17
+    - nrpe-external-master-relation-changed
18
+  template:
19
+    src: "check_name_service_export.cfg.jinja2"
20
+    dest: "/var/lib/nagios/export/service__{{ current_relation['nagios_hostname'] }}_{{ check_name }}.cfg"
21
+    owner: nagios
22
+    group: nagios
23
+    mode: 0644
24
+  when: "'nagios_hostname' in current_relation"
25
+
26
+- name: Trigger nrpe-external-master-relation-changed to restart.
27
+  tags:
28
+    - nrpe-external-master-relation-changed
29
+  command: >
30
+    relation-set timestamp={{ ansible_date_time.iso8601_micro }}
31
+  when: "'nagios_hostname' in current_relation"

roles/nrpe/templates/check_name.cfg.jinja2

1
--- 
2
+++ roles/nrpe/templates/check_name.cfg.jinja2
3
@@ -0,0 +1,4 @@
4
+#---------------------------------------------------
5
+# This file is Juju managed
6
+#---------------------------------------------------
7
+command[{{ check_name }}]={{ plugin_dir }}/{{ check_name }} {{ check_params }}
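
With the role defaults above and the vars the playbook passes to the nrpe role, this template reduces to a single nagios command line. A quick render with jinja2, using the check_name/check_params values from playbook.yaml (sketch only):

from jinja2 import Template

LINE = 'command[{{ check_name }}]={{ plugin_dir }}/{{ check_name }} {{ check_params }}'

print(Template(LINE).render(
    check_name='check_http',
    plugin_dir='/usr/lib/nagios/plugins',
    check_params='-H localhost -u /_cluster/health -p 9200 -w 2 -c 3 -s green'))
# command[check_http]=/usr/lib/nagios/plugins/check_http -H localhost ...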

roles/nrpe/templates/check_name_service_export.cfg.jinja2

 1
--- 
 2
+++ roles/nrpe/templates/check_name_service_export.cfg.jinja2
 3
@@ -0,0 +1,10 @@
 4
+#---------------------------------------------------
 5
+# This file is Juju managed
 6
+#---------------------------------------------------
 7
+define service {
 8
+    use                             active-service
 9
+    host_name                       {{ current_relation['nagios_hostname'] }}
10
+    service_description             {{ service_description }}
11
+    check_command                   check_nrpe!{{ check_name }}
12
+    servicegroups                   {{ current_relation['nagios_host_context'] }}
13
+}

tasks/install-elasticsearch.yml

 1
--- 
 2
+++ tasks/install-elasticsearch.yml
 3
@@ -0,0 +1,56 @@
 4
+- name: Add apt key.
 5
+  tags:
 6
+    - install
 7
+    - upgrade-charm
 8
+    - config-changed
 9
+  apt_key: url={{ apt_key_url }} state=present id={{gpg_key_id}} validate_certs=no
10
+  when: apt_key_url != ""
11
+
12
+- name: Add apt archive.
13
+  tags:
14
+    - install
15
+    - upgrade-charm
16
+    - config-changed
17
+  apt_repository:
18
+    repo: "{{ apt_repository }}"
19
+    state: present
20
+  when: apt_repository != ""
21
+
22
+- name: Add java apt archive.
23
+  tags:
24
+    - install
25
+    - upgrade-charm
26
+    - config-changed
27
+  apt_repository:
28
+    repo: "ppa:openjdk-r/ppa"
29
+    state: present
30
+
31
+- name: Install dependent packages.
32
+  apt: pkg={{ item }} state=latest update_cache=yes
33
+  tags:
34
+    - install
35
+    - upgrade-charm
36
+  with_items:
37
+    - openjdk-8-jre-headless
38
+    - ufw
39
+
40
+- name: Check for local elasticsearch.deb in payload.
41
+  stat: path=files/elasticsearch.deb
42
+  register: stat_elasticsearch_deb
43
+  tags:
44
+    - install
45
+    - upgrade-charm
46
+
47
+- name: Install elasticsearch from repository
48
+  apt: pkg=elasticsearch state=latest update_cache=yes
49
+  tags:
50
+    - install
51
+    - upgrade-charm
52
+  when: stat_elasticsearch_deb.stat.exists == false and apt_repository != ""
53
+
54
+- name: Install ElasticSearch from payload
55
+  command: dpkg -i {{ charm_dir }}/files/elasticsearch.deb
56
+  tags:
57
+    - install
58
+    - upgrade-charm
59
+  when: stat_elasticsearch_deb.stat.exists == true
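
The install tasks fall back in a fixed order: a deb bundled at files/elasticsearch.deb wins, otherwise the package is installed from the configured apt repository, and nothing is installed when neither is present. A small sketch mirroring those when: clauses (the function and example charm path are illustrative, not charm code):

import os

def elasticsearch_source(charm_dir, apt_repository):
    """Show which install path the tasks' when: clauses would pick."""
    payload = os.path.join(charm_dir, 'files', 'elasticsearch.deb')
    if os.path.exists(payload):
        return 'payload: dpkg -i %s' % payload
    if apt_repository:
        return 'repository: apt-get install elasticsearch (%s)' % apt_repository
    return 'nothing to install: no payload and no apt-repository configured'

print(elasticsearch_source(
    '/var/lib/juju/agents/unit-elasticsearch-0/charm',
    'deb https://artifacts.elastic.co/packages/5.x/apt stable main'))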

tasks/peer-relations.yml

 1
--- 
 2
+++ tasks/peer-relations.yml
 3
@@ -0,0 +1,60 @@
 4
+# XXX Testing shows that currently peer-relation-joined is run
 5
+# not just when the unit joins the peer relationship, but every
 6
+# time a new unit is added to the peer relationship. We want to
 7
+# update the config with the extra peers, but only restart the service
 8
+# if we're not already part of the cluster (ie. num nodes is 1).
 9
+- name: Update config with peer hosts
10
+  tags:
11
+    - peer-relation-joined
12
+  template: src={{ charm_dir }}/templates/elasticsearch.yml
13
+            dest=/etc/elasticsearch/elasticsearch.yml
14
+            mode=0644
15
+            backup=yes
16
+
17
+# If multiple units are started simultaneously, peer-relation-joined
18
+# may be called before the service is running.
19
+- name: Wait until the local service is available
20
+  tags:
21
+    - peer-relation-joined
22
+    - peer-relation-changed
23
+  wait_for: port=9200
24
+
25
+- name: Record current cluster health
26
+  tags:
27
+    - peer-relation-joined
28
+    - peer-relation-changed
29
+  uri: url=http://localhost:9200/_cluster/health return_content=yes
30
+  register: cluster_health
31
+
32
+- name: Restart if not part of cluster
33
+  tags:
34
+    - peer-relation-joined
35
+    - peer-relation-changed
36
+  service: name=elasticsearch state=restarted
37
+  when: cluster_health.json.number_of_nodes == 1
38
+
39
+- name: Wait until the local service is available after restart
40
+  tags:
41
+    - peer-relation-joined
42
+    - peer-relation-changed
43
+  wait_for: port=9200
44
+  when: cluster_health.json.number_of_nodes == 1
45
+
46
+- name: Pause to ensure that after restart unit has time to join.
47
+  tags:
48
+    - peer-relation-changed
49
+  pause: seconds=30
50
+  when: cluster_health.json.number_of_nodes == 1
51
+
52
+- name: Record cluster health after restart
53
+  tags:
54
+    - peer-relation-changed
55
+  uri: url=http://localhost:9200/_cluster/health return_content=yes
56
+  register: cluster_health_after_restart
57
+  when: cluster_health.json.number_of_nodes == 1
58
+
59
+- name: Fail if unit is still not part of cluster
60
+  tags:
61
+    - peer-relation-changed
62
+  fail: msg="Unit failed to join cluster after peer-relation-changed"
63
+  when: cluster_health.json.number_of_nodes == 1 and cluster_health_after_restart.json.number_of_nodes == 1
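
The whole restart decision above hinges on number_of_nodes from /_cluster/health: a value of 1 means the unit is still alone and should restart with its peers now in the config, and still seeing 1 after the restart and pause is treated as a failure. A tiny sketch of that check on sample payloads (illustrative; the real tasks use the Ansible uri module):

import json

def should_restart(health_body):
    """True when this unit has not yet joined its peers."""
    return json.loads(health_body).get('number_of_nodes', 0) == 1

print(should_restart('{"status": "green", "number_of_nodes": 1}'))   # True
print(should_restart('{"status": "green", "number_of_nodes": 3}'))   # False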

tasks/setup-ufw.yml

 1
--- 
 2
+++ tasks/setup-ufw.yml
 3
@@ -0,0 +1,52 @@
 4
+# XXX 2014-07-08 michael nelson ip6 not supported on image (?)
 5
+# ufw errors unless you switch off ipv6 support. Not sure if it's
 6
+# related to the kernel used on the cloud image, but the actual
 7
+# error is:
 8
+# ip6tables v1.4.21: can't initialize ip6tables table `filter':
 9
+# Table does not exist (do you need to insmod?)
10
+# Perhaps ip6tables or your kernel needs to be upgraded.
11
+- name: Update ufw config to avoid error
12
+  lineinfile: dest=/etc/default/ufw
13
+              regexp="^IPV6=yes$"
14
+              line="IPV6=no"
15
+
16
+- name: Disable firewall only when explicitly configured to do so.
17
+  ufw: state=disabled
18
+  when: not firewall_enabled
19
+
20
+# XXX 2014-07-30 michael nelson: It'd be much nicer if we could
21
+# just render a config file for ufw, as it would be idempotent.
22
+# As it is, there isn't a way to do that (afaics), so instead we
23
+# reset the firewall rules each time based on the current clients.
24
+- name: Reset firewall
25
+  ufw: state=reset policy=allow logging=on
26
+  when: firewall_enabled
27
+
28
+- name: Turn on the firewall with logging.
29
+  ufw: state=enabled policy=allow logging=on
30
+  when: firewall_enabled
31
+
32
+- name: Open the firewall for all clients
33
+  ufw: rule=allow src={{ lookup('dns', item['private-address']) }} port=9200 proto=tcp
34
+  with_items: relations["client"]
35
+  when: firewall_enabled
36
+
37
+- name: Deny all other requests on 9200
38
+  ufw: rule=deny port=9200
39
+  when: firewall_enabled
40
+
41
+- name: Open the firewall for all peers
42
+  ufw: rule=allow src={{ lookup('dns', item['private-address']) }} port=9300 proto=tcp
43
+  with_items: relations["peer"]
44
+  when: firewall_enabled
45
+
46
+# Only deny incoming on 9300 once the unit is part of a cluster.
47
+- name: Record current cluster health
48
+  uri: url=http://localhost:9200/_cluster/health return_content=yes
49
+  register: cluster_health
50
+  ignore_errors: true
51
+  when: firewall_enabled
52
+
53
+- name: Deny all incoming requests on 9300 once unit is part of cluster
54
+  ufw: rule=deny port=9300
55
+  when: firewall_enabled and cluster_health|success and cluster_health.json.number_of_nodes > 1
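
Since ufw offers no config file the charm could template, the tasks reset the firewall each run and rebuild the rules from current relation data: allow every client on 9200 and every peer on 9300, deny other traffic on 9200, and deny 9300 once the unit has joined the cluster. A sketch of the resulting rule set for sample relation data (addresses made up; the rule strings are shorthand, not literal ufw arguments):

def ufw_rules(relations, clustered=False):
    """Approximate the allow/deny rules the setup-ufw tasks apply."""
    rules = []
    for client in relations.get('client', []):
        rules.append('allow %s -> 9200/tcp' % client['private-address'])
    rules.append('deny 9200')
    for peer in relations.get('peer', []):
        rules.append('allow %s -> 9300/tcp' % peer['private-address'])
    if clustered:
        rules.append('deny 9300')
    return rules

print(ufw_rules({'client': [{'private-address': '10.0.0.5'}],
                 'peer': [{'private-address': '10.0.0.6'}]},
                clustered=True))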

templates/elasticsearch.yml

 1
--- 
 2
+++ templates/elasticsearch.yml
 3
@@ -0,0 +1,19 @@
 4
+# This config is autogenerated by ansible.
 5
+# See templates/elasticsearch.yml in the charm folder.
 6
+
 7
+cluster.name: {{ cluster_name }}
 8
+http.port: 9200
 9
+network.host: ["_site_", "_local_"]
10
+{% if not '5' in apt_repository %}
11
+discovery.zen.ping.multicast.enabled: false
12
+{% endif %}
13
+{% if relations.peer is defined and relations.peer|length > 0 %}
14
+discovery.zen.ping.unicast.hosts:
15
+{% for reln in relations.peer %}
16
+  - {{ reln['private-address'] }}
17
+{% endfor %}
18
+{% endif %}
19
+{% if not '5' in apt_repository %}
20
+# workaround for Kibana4 Export Everything bug https://github.com/elastic/kibana/issues/5524
21
+index.max_result_window: 2147483647
22
+{% endif %}
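
The template keys its version-specific settings off whether the configured apt-repository string contains a '5', so the multicast and max_result_window lines only appear for pre-5.x repositories. Rendering a trimmed copy with jinja2 is an easy sanity check (a sketch; the repository string matches the 5.x value used in the tests below):

from jinja2 import Template

SNIPPET = (
    "cluster.name: {{ cluster_name }}\n"
    "{% if not '5' in apt_repository %}"
    "discovery.zen.ping.multicast.enabled: false\n"
    "{% endif %}"
)

print(Template(SNIPPET).render(
    cluster_name='unique-name',
    apt_repository='deb https://artifacts.elastic.co/packages/5.x/apt stable main'))
# ES 5.x repo -> only "cluster.name: unique-name"; the multicast line is
# emitted only when the repository string contains no '5' (1.x/2.x installs).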

tests/00-single-to-scale-test.py

  1
--- 
  2
+++ tests/00-single-to-scale-test.py
  3
@@ -0,0 +1,112 @@
  4
+#!/usr/bin/python3
  5
+
  6
+import amulet
  7
+import unittest
  8
+import requests
  9
+import json
 10
+
 11
+class TestElasticsearch(unittest.TestCase):
 12
+
 13
+    @classmethod
 14
+    def setUpClass(self):
 15
+        self.deployment = amulet.Deployment(series='trusty')
 16
+        self.deployment.add('elasticsearch')
 17
+        self.deployment.configure('elasticsearch',
 18
+                                  {'cluster-name': 'unique-name'})
 19
+
 20
+        try:
 21
+            self.deployment.setup(timeout=1200)
 22
+            self.deployment.sentry.wait()
 23
+        except amulet.helpers.TimeoutError:
 24
+            amulet.raise_status(
 25
+                amulet.SKIP, msg="Environment wasn't setup in time")
 26
+
 27
+
 28
+    def test_health(self):
 29
+        ''' Test the health of the node upon first deployment
 30
+            by getting the cluster health, then inserting data and
 31
+            validating cluster health'''
 32
+        health = self.get_cluster_health()
 33
+        assert health['status'] in ('green', 'yellow')
 34
+
 35
+        # Create a test index.
 36
+        curl_command = """
 37
+        curl -XPUT 'http://localhost:9200/test/tweet/1' -d '{
 38
+            "user" : "me",
 39
+            "message" : "testing"
 40
+        }'
 41
+        """
 42
+        response = self.curl_on_unit(curl_command)
 43
+        health = self.get_index_health('test')
 44
+        assert health['status'] in ('green', 'yellow')
 45
+
 46
+    def test_config(self):
 47
+        ''' Validate our configuration of the cluster name made it to the
 48
+            application configuration'''
 49
+        health = self.get_cluster_health()
 50
+        cluster_name = health['cluster_name']
 51
+        assert cluster_name == 'unique-name'
 52
+
 53
+    def test_scale(self):
 54
+        ''' Validate scaling the elasticsearch cluster yields a healthy
 55
+            response from the API, and all units are participating '''
 56
+        self.deployment.add_unit('elasticsearch', units=2)
 57
+        self.deployment.setup(timeout=1200)
 58
+        self.deployment.sentry.wait()
 59
+        health = self.get_cluster_health(wait_for_nodes=3)
 60
+        index_health = self.get_index_health('test')
 61
+        print(health['number_of_nodes'])
 62
+        assert health['number_of_nodes'] == 3
 63
+        assert index_health['status'] in ('green', 'yellow')
 64
+
 65
+    def curl_on_unit(self, curl_command, unit_number=0):
 66
+        unit = "elasticsearch"
 67
+        response = self.deployment.sentry[unit][unit_number].run(curl_command)
 68
+        if response[1] != 0:
 69
+            msg = (
 70
+                "Elastic search didn't respond to the command \n"
 71
+                "'{curl_command}' as expected.\n"
 72
+                "Return code: {return_code}\n"
 73
+                "Result: {result}".format(
 74
+                    curl_command=curl_command,
 75
+                    return_code=response[1],
 76
+                    result=response[0])
 77
+            )
 78
+            amulet.raise_status(amulet.FAIL, msg=msg)
 79
+
 80
+        return json.loads(response[0])
 81
+
 82
+    def get_cluster_health(self, unit_number=0, wait_for_nodes=0,
 83
+                           timeout=180):
 84
+        curl_command = "curl http://localhost:9200"
 85
+        curl_command = curl_command + "/_cluster/health?timeout={}s".format(
 86
+            timeout)
 87
+        if wait_for_nodes > 0:
 88
+            curl_command = curl_command + "&wait_for_nodes={}".format(
 89
+                wait_for_nodes)
 90
+
 91
+        return self.curl_on_unit(curl_command, unit_number=unit_number)
 92
+
 93
+    def get_index_health(self, index_name, unit_number=0):
 94
+        curl_command = "curl http://localhost:9200"
 95
+        curl_command = curl_command + "/_cluster/health/" + index_name
 96
+
 97
+        return self.curl_on_unit(curl_command)
 98
+
 99
+
100
+def check_response(response, expected_code=200):
101
+    if response.status_code != expected_code:
102
+        msg = (
103
+            "Elastic search did not respond as expected. \n"
104
+            "Expected status code: %{expected_code} \n"
105
+            "Status code: %{status_code} \n"
106
+            "Response text: %{response_text}".format(
107
+                expected_code=expected_code,
108
+                status_code=response.status_code,
109
+                response_text=response.text))
110
+
111
+        amulet.raise_status(amulet.FAIL, msg=msg)
112
+
113
+
114
+if __name__ == "__main__":
115
+    unittest.main()

tests/01-single-to-scale-test-es-five.py

--- 
+++ tests/01-single-to-scale-test-es-five.py
@@ -0,0 +1,119 @@
+#!/usr/bin/python3
+
+import amulet
+import unittest
+import requests
+import json
+
+
+ES_FIVE = {'cluster-name': 'unique-name',
+           'gpg-key-id': 'D88E42B4',
+           'apt-repository': \
+               'deb https://artifacts.elastic.co/packages/5.x/apt stable main',
+           'apt-key-url': 'https://artifacts.elastic.co/GPG-KEY-elasticsearch'}
+
+
+class TestElasticsearch(unittest.TestCase):
+
+    @classmethod
+    def setUpClass(self):
+        self.deployment = amulet.Deployment(series='trusty')
+        self.deployment.add('elasticsearch')
+        self.deployment.configure('elasticsearch', ES_FIVE)
+
+        try:
+            self.deployment.setup(timeout=1200)
+            self.deployment.sentry.wait()
+        except amulet.helpers.TimeoutError:
+            amulet.raise_status(
+                amulet.SKIP, msg="Environment wasn't setup in time")
+
+
+    def test_health(self):
+        ''' Test the health of the node upon first deployment
+            by getting the cluster health, then inserting data and
+            validating cluster health'''
+        health = self.get_cluster_health()
+        assert health['status'] in ('green', 'yellow')
+
+        # Create a test index.
+        curl_command = """
+        curl -XPUT 'http://localhost:9200/test/tweet/1' -d '{
+            "user" : "me",
+            "message" : "testing"
+        }'
+        """
+        response = self.curl_on_unit(curl_command)
+        health = self.get_index_health('test')
+        assert health['status'] in ('green', 'yellow')
+
+    def test_config(self):
+        ''' Validate our configuration of the cluster name made it to the
+            application configuration'''
+        health = self.get_cluster_health()
+        cluster_name = health['cluster_name']
+        assert cluster_name == 'unique-name'
+
+    def test_scale(self):
+        ''' Validate scaling the elasticsearch cluster yields a healthy
+            response from the API, and all units are participating '''
+        self.deployment.add_unit('elasticsearch', units=2)
+        self.deployment.setup(timeout=1200)
+        self.deployment.sentry.wait()
+        health = self.get_cluster_health(wait_for_nodes=3)
+        index_health = self.get_index_health('test')
+        print(health['number_of_nodes'])
+        assert health['number_of_nodes'] == 3
+        assert index_health['status'] in ('green', 'yellow')
+
+    def curl_on_unit(self, curl_command, unit_number=0):
+        unit = "elasticsearch"
+        response = self.deployment.sentry[unit][unit_number].run(curl_command)
+        if response[1] != 0:
+            msg = (
+                "Elastic search didn't respond to the command \n"
+                "'{curl_command}' as expected.\n"
+                "Return code: {return_code}\n"
+                "Result: {result}".format(
+                    curl_command=curl_command,
+                    return_code=response[1],
+                    result=response[0])
+            )
+            amulet.raise_status(amulet.FAIL, msg=msg)
+
+        return json.loads(response[0])
+
+    def get_cluster_health(self, unit_number=0, wait_for_nodes=0,
+                           timeout=180):
+        curl_command = "curl http://localhost:9200"
+        curl_command = curl_command + "/_cluster/health?timeout={}s".format(
+            timeout)
+        if wait_for_nodes > 0:
+            curl_command = curl_command + "&wait_for_nodes={}".format(
+                wait_for_nodes)
+
+        return self.curl_on_unit(curl_command, unit_number=unit_number)
+
+    def get_index_health(self, index_name, unit_number=0):
+        curl_command = "curl http://localhost:9200"
+        curl_command = curl_command + "/_cluster/health/" + index_name
+
+        return self.curl_on_unit(curl_command, unit_number=unit_number)
+
+
+def check_response(response, expected_code=200):
+    if response.status_code != expected_code:
+        msg = (
+            "Elastic search did not respond as expected. \n"
+            "Expected status code: {expected_code} \n"
+            "Status code: {status_code} \n"
+            "Response text: {response_text}".format(
+                expected_code=expected_code,
+                status_code=response.status_code,
+                response_text=response.text))
+
+        amulet.raise_status(amulet.FAIL, msg=msg)
+
+
+if __name__ == "__main__":
+    unittest.main()
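
A side note on this module: it imports requests and defines check_response(), but never exercises either, even though check_response() clearly expects a requests-style response (it reads .status_code and .text). A minimal sketch of how it could be wired in, assuming the unit address is taken from the amulet sentry's info dict; the helper name below is hypothetical, not part of the charm:

import requests

def http_cluster_health(deployment, unit_number=0):
    # Hypothetical helper: hit the REST API directly instead of shelling
    # out to curl, and let check_response() fail the run on a non-200.
    unit = deployment.sentry['elasticsearch'][unit_number]
    address = unit.info['public-address']
    response = requests.get(
        'http://{}:9200/_cluster/health'.format(address), timeout=30)
    check_response(response)
    return response.json()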

tests/tests.yaml

--- 
+++ tests/tests.yaml
@@ -0,0 +1,2 @@
+packages:
+  - amulet

unit_tests/test_hooks.py

--- 
+++ unit_tests/test_hooks.py
@@ -0,0 +1,113 @@
+"""Unit tests for the elasticsearch charm.
+
+These are near-worthless unit-tests, simply testing the behaviour
+of the hooks.py module, as it is extremely difficult to unit-test anything
+that calls into charmhelpers in a stateful way (as charmhelpers is always
+writing to system directories), without setting up environments.
+
+For this reason, rely on the functional tests of the charm instead (tests
+directory)
+"""
+import unittest
+
+try:
+    import mock
+except ImportError:
+    raise ImportError(
+        "Please ensure both python-mock and python-nose are installed.")
+
+
+from hooks import hooks
+from charmhelpers.core.hookenv import Config
+
+
+class HooksTestCase(unittest.TestCase):
+
+    def setUp(self):
+        super(HooksTestCase, self).setUp()
+
+        # charmhelpers is getting difficult to test against, as it writes
+        # to system directories even for things that should be idempotent,
+        # like accessing config options.
+        patcher = mock.patch('charmhelpers.core.hookenv.charm_dir')
+        self.mock_charm_dir = patcher.start()
+        self.addCleanup(patcher.stop)
+        self.mock_charm_dir.return_value = '/tmp/foo'
+
+        patcher = mock.patch('charmhelpers.core.hookenv.config')
+        self.mock_config = patcher.start()
+        self.addCleanup(patcher.stop)
+        config = Config({
+            'install_deps_from_ppa': False,
+        })
+        config.implicit_save = False
+        self.mock_config.return_value = config
+
+        patcher = mock.patch('charmhelpers.payload.execd.execd_preinstall')
+        self.mock_preinstall = patcher.start()
+        self.addCleanup(patcher.stop)
+
+        patcher = mock.patch('charmhelpers.contrib.ansible')
+        self.mock_ansible = patcher.start()
+        self.addCleanup(patcher.stop)
+
+        patcher = mock.patch('charmhelpers.core.host.rsync')
+        self.mock_rsync = patcher.start()
+        self.addCleanup(patcher.stop)
+
+    def test_installs_ansible_support(self):
+        hooks.execute(['install'])
+
+        ansible = self.mock_ansible
+        ansible.install_ansible_support.assert_called_once_with(
+            from_ppa=False)
+
+    def test_applies_install_playbook(self):
+        hooks.execute(['install'])
+
+        self.assertEqual([
+            mock.call('playbook.yaml', tags=['install']),
+        ], self.mock_ansible.apply_playbook.call_args_list)
+
+    def test_executes_preinstall(self):
+        hooks.execute(['install'])
+
+        self.mock_preinstall.assert_called_once_with()
+
+    def test_copies_backported_ansible_modules(self):
+        hooks.execute(['install'])
+
+        self.mock_rsync.assert_called_once_with(
+            'ansible_module_backports',
+            '/usr/share/ansible')
+
+    def test_default_hooks(self):
+        """Most of the hooks let ansible do all the work."""
+        default_hooks = [
+            'config-changed',
+            'cluster-relation-joined',
+            'peer-relation-joined',
+            'peer-relation-departed',
+            'nrpe-external-master-relation-changed',
+            'rest-relation-joined',
+            'start',
+            'stop',
+            'upgrade-charm',
+            'client-relation-joined',
+            'client-relation-departed',
+        ]
+        mock_apply_playbook = self.mock_ansible.apply_playbook
+
+        for hook in default_hooks:
+            mock_apply_playbook.reset_mock()
+
+            hooks.execute([hook])
+
+            self.assertEqual([
+                mock.call('playbook.yaml',
+                          tags=[hook]),
+            ], mock_apply_playbook.call_args_list)
+
+
+if __name__ == '__main__':
+    unittest.main()
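
For readers unfamiliar with the layout these unit tests assume: the hooks object they import is expected to be a charmhelpers hook registry whose entries do little more than apply playbook.yaml with a tag matching the hook name. The sketch below illustrates that shape; the hook names and the playbook path come from the assertions above, while the rest is an assumption about hooks.py rather than a copy of it.

import sys

from charmhelpers.core import hookenv
from charmhelpers.contrib import ansible

hooks = hookenv.Hooks()

DEFAULT_HOOKS = (
    'config-changed',
    'cluster-relation-joined',
    'peer-relation-joined',
    'peer-relation-departed',
    'nrpe-external-master-relation-changed',
    'rest-relation-joined',
    'start',
    'stop',
    'upgrade-charm',
    'client-relation-joined',
    'client-relation-departed',
)


@hooks.hook('install')
def install():
    # Per the assertions above, the real install hook also runs
    # execd_preinstall() and rsyncs ansible_module_backports; omitted here.
    ansible.install_ansible_support(from_ppa=False)
    ansible.apply_playbook('playbook.yaml', tags=['install'])


def _register_default_hook(hook_name):
    # Every other hook just re-applies the playbook, tagged with the
    # hook's own name, which is what test_default_hooks asserts.
    @hooks.hook(hook_name)
    def _default():
        ansible.apply_playbook('playbook.yaml', tags=[hook_name])


for _hook in DEFAULT_HOOKS:
    _register_default_hook(_hook)


if __name__ == '__main__':
    hooks.execute(sys.argv)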