~containers/kubernetes-worker

Owner: mbruzek
Status: Needs Review
Vote: +0 (+2 needed for approval)

CPP?: No
OIL?: No

This is our attempt to get the Kubernetes charms promulgated. Please review this charm for policy and best practices.

Thanks.
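
For reviewers who want to exercise the charm before voting, here is a minimal deployment sketch (assuming the revision under review resolves from the ~containers namespace shown in the title; the bundle name is taken from the README in the diff below):

```shell
# Deploy just the charm under review from its personal namespace
juju deploy cs:~containers/kubernetes-worker

# Or deploy the reference bundle that includes it (see README.md in the diff)
juju deploy canonical-kubernetes
```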


Tests

Substrate    Status    Results    Last Updated
aws          RETRY                19 days ago
gce          RETRY                19 days ago
lxc          RETRY                19 days ago
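
The RETRY rows above mean the automated substrate runs have not produced results yet. A reviewer can still run the charm's own lint and unit tests locally; the sketch below simply invokes the targets defined in the Makefile included in this diff (it assumes make and tox are available on the machine):

```shell
# From the charm source tree
make lint        # tox --notest, then flake8 over hooks, reactive, lib, unit_tests, tests
make unit_test   # runs the tox test suites
```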


Policy Checklist

Description Unreviewed Pass Fail

General

Must verify that any software installed or utilized is verified as coming from the intended source. kos.tsakalozos
  • Any software installed from the Ubuntu or CentOS default archives satisfies this due to the apt and yum sources including cryptographic signing information.
  • Third party repositories must be listed as a configuration option that can be overridden by the user and not hard coded in the charm itself.
  • Launchpad PPAs are acceptable as the add-apt-repository command retrieves the keys securely.
  • Other third party repositories are acceptable if the signing key is embedded in the charm.
Must provide a means to protect users from known security vulnerabilities in a way consistent with best practices as defined by either operating system policies or upstream documentation. kos.tsakalozos
Basically, this means there must be instructions on how to apply updates if you use software not from distribution channels.
Must have hooks that are idempotent. kos.tsakalozos
Should be built using charm layers. kos.tsakalozos
Should use Juju Resources to deliver required payloads. kos.tsakalozos

Testing and Quality

charm proof must pass without errors or warnings. kos.tsakalozos
Must include passing unit, functional, or integration tests. kos.tsakalozos
Tests must exercise all relations. kos.tsakalozos
Tests must exercise config. kos.tsakalozos
set-config, unset-config, and re-set must be tested as a minimum
Must not use anything infrastructure-provider specific (e.g. querying the EC2 metadata service). kos.tsakalozos
Must be self contained unless the charm is a proxy for an existing cloud service, e.g. ec2-elb charm.
Must not use symlinks. kos.tsakalozos
Bundles must only use promulgated charms; they cannot reference charms in personal namespaces. kos.tsakalozos
Must call Juju hook tools (relation-*, unit-*, config-*, etc) without a hard coded path. kos.tsakalozos
Should include a tests.yaml for all integration tests. kos.tsakalozos
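
Two of the items above, `charm proof` and the hard-coded hook-tool path check, can be spot-checked quickly from the charm root. A sketch (the grep pattern is only an illustrative heuristic, not part of the policy):

```shell
# Run the charm store's static checks
charm proof

# Heuristic scan for hard-coded paths to Juju hook tools
grep -rnE '/usr/(local/)?bin/(relation|unit|config|status)-' hooks/ reactive/ lib/ \
  || echo "no hard-coded hook tool paths found"
```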

Metadata

Must include a full description of what the software does. kos.tsakalozos
Must include a maintainer email address for a team or individual who will be responsive to contact. kos.tsakalozos
Must include a license. Call the file 'copyright' and make sure all files' licenses are specified clearly. kos.tsakalozos
Must be under a Free license. kos.tsakalozos
Must have a well documented and valid README.md. kos.tsakalozos
Must describe the service. kos.tsakalozos
Must describe how it interacts with other services, if applicable. kos.tsakalozos
Must document the interfaces. kos.tsakalozos
Must show how to deploy the charm. kos.tsakalozos
Must define external dependencies, if applicable. kos.tsakalozos
Should link to a recommended production usage bundle and recommended configuration if this differs from the default. kos.tsakalozos
Should reference and link to upstream documentation and best practices. kos.tsakalozos

Security

Must not run any network services using default passwords. kos.tsakalozos
Must verify and validate any external payload. kos.tsakalozos
  • Known and understood packaging systems that verify packages like apt, pip, and yum are ok.
  • wget | sh style is not ok.
Should make use of whatever Mandatory Access Control system is provided by the distribution. kos.tsakalozos
Should avoid running services as root. kos.tsakalozos


Source Diff

Files changed 156

Inline diff comments 0


Back to file index

HACKING.md

 1
--- 
 2
+++ HACKING.md
 3
@@ -0,0 +1,25 @@
 4
+ # Kubernetes Worker
 5
+
 6
+### Building from the layer
 7
+
 8
+You can clone the kubernetes-worker layer with git and build locally if you
 9
+have the charm package/snap installed.
10
+
11
+```shell
12
+# Install the charm snap
13
+sudo snap install charm --channel=edge
14
+
15
+# Set the build environment
16
+export JUJU_REPOSITORY=$HOME
17
+
18
+# Clone the layer and build it to our JUJU_REPOSITORY
19
+git clone https://github.com/juju-solutions/kubernetes
20
+cd kubernetes/cluster/juju/layers/kubernetes-worker
21
+charm build -r
22
+```
23
+
24
+### Contributing
25
+
26
+TBD
27
+
28
+
Back to file index

Makefile

 1
--- 
 2
+++ Makefile
 3
@@ -0,0 +1,17 @@
 4
+#!/usr/bin/make
 5
+
 6
+all: lint unit_test
 7
+
 8
+
 9
+.PHONY: apt_prereqs
10
+apt_prereqs:
11
+	@# Need tox, but don't install the apt version unless we have to (don't want to conflict with pip)
12
+	@which tox >/dev/null || sudo apt-get install -y python-tox
13
+
14
+lint: apt_prereqs
15
+	@tox --notest
16
+	@PATH=.tox/py34/bin:.tox/py35/bin flake8 $(wildcard hooks reactive lib unit_tests tests)
17
+
18
+unit_test: apt_prereqs
19
+	@echo Starting tests...
20
+	tox
Back to file index

README.md

  1
--- 
  2
+++ README.md
  3
@@ -0,0 +1,100 @@
  4
+# Kubernetes Worker
  5
+
  6
+## Usage
  7
+
  8
+This charm deploys a container runtime, and additionally stands up the Kubernetes
  9
+worker applications: kubelet and kube-proxy.
 10
+
 11
+In order for this charm to be useful, it should be deployed with its companion
 12
+charm [kubernetes-master](https://jujucharms.com/u/containers/kubernetes-master)
 13
+and linked with an SDN plugin.
 14
+
 15
+This charm has also been bundled up for your convenience so you can skip the
 16
+above steps, and deploy it with a single command:
 17
+
 18
+```shell
 19
+juju deploy canonical-kubernetes
 20
+```
 21
+
 22
+For more information about [Canonical Kubernetes](https://jujucharms.com/canonical-kubernetes)
 23
+consult the bundle `README.md` file.
 24
+
 25
+
 26
+## Scale out
 27
+
 28
+To add additional compute capacity to your Kubernetes workers, you may
 29
+scale the cluster with `juju add-unit`. New units will automatically
 30
+join any related kubernetes-master, and enlist themselves as ready once the
 31
+deployment is complete.
 32
+
 33
+## Operational actions
 34
+
 35
+The kubernetes-worker charm supports the following Operational Actions:
 36
+
 37
+#### Pause
 38
+
 39
+Pausing the workload enables administrators to both [drain](http://kubernetes.io/docs/user-guide/kubectl/kubectl_drain/) and [cordon](http://kubernetes.io/docs/user-guide/kubectl/kubectl_cordon/)
 40
+a unit for maintenance.
 41
+
 42
+
 43
+#### Resume
 44
+
 45
+Resuming the workload will [uncordon](http://kubernetes.io/docs/user-guide/kubectl/kubectl_uncordon/) a paused unit. Workloads will automatically migrate unless otherwise directed via their application declaration.
 46
+
 47
+## Private registry
 48
+
 49
+With the "registry" action that is part for the kubernetes-worker charm, you can very easily create a private docker registry, with authentication, and available over TLS. Please note that the registry deployed with the action is not HA, and uses storage tied to the kubernetes node where the pod is running. So if the registry pod changes is migrated from one node to another for whatever reason, you will need to re-publish the images.
 50
+
 51
+### Example usage
 52
+
 53
+Create the relevant authentication files. Let's say you want user `userA` to authenticate with the password `passwordA`. Then you would run:
 54
+
 55
+    echo "userA:passwordA" > htpasswd-plain
 56
+    htpasswd -c -b -B htpasswd userA passwordA
 57
+
 58
+(the `htpasswd` program comes with the `apache2-utils` package)
 59
+
 60
+Supposing your registry will be reachable at `myregistry.company.com`, and that you already have your TLS key in the `registry.key` file, and your TLS certificate (with `myregistry.company.com` as Common Name) in the `registry.crt` file, you would then run:
 61
+
 62
+    juju run-action kubernetes-worker/0 registry domain=myregistry.company.com htpasswd="$(base64 -w0 htpasswd)" htpasswd-plain="$(base64 -w0 htpasswd-plain)" tlscert="$(base64 -w0 registry.crt)" tlskey="$(base64 -w0 registry.key)" ingress=true
 63
+
 64
+If you then decide that you want to delete the registry, just run:
 65
+
 66
+    juju run-action kubernetes-worker/0 registry delete=true ingress=true
 67
+
 68
+## Known Limitations
 69
+
 70
+Kubernetes workers currently only support 'faux' HA scenarios. Even when configured with an HA cluster string, they will only ever contact the first unit in the cluster map. To enable a proper HA story, kubernetes-worker units are encouraged to proxy through a [kubeapi-load-balancer](https://jujucharms.com/kubeapi-load-balancer)
 71
+application. This enables a HA deployment without the need to
 72
+re-render configuration and disrupt the worker services.
 73
+
 74
+External access to pods must be performed through a [Kubernetes
 75
+Ingress Resource](http://kubernetes.io/docs/user-guide/ingress/).
 76
+
 77
+When using NodePort type networking, there is no automation in exposing the
 78
+ports selected by kubernetes or chosen by the user. They will need to be
 79
+opened manually; this can be done across an entire worker pool.
 80
+
 81
+If the NodePort selected for your service is `30510`, you can open it across all
 82
+members of a worker pool named `kubernetes-worker` like so:
 83
+
 84
+```
 85
+juju run --application kubernetes-worker open-port 30510/tcp
 86
+```
 87
+
 88
+Don't forget to expose the kubernetes-worker application if it's not already
 89
+exposed; otherwise the port will be open but the
 90
+service will not be reachable, which can cause confusion.
 91
+
 92
+Note: When debugging connection issues with NodePort services, it's important
 93
+to first check the kube-proxy service on the worker units. If kube-proxy is not
 94
+running, the associated port-mapping will not be configured in the iptables
 95
+rule chains.
 96
+
 97
+If you need to close the NodePort once a workload has been terminated, you can
 98
+follow the same steps in reverse.
 99
+
100
+```
101
+juju run --application kubernetes-worker close-port 30510
102
+```
103
+
Back to file index

actions.yaml

 1
--- 
 2
+++ actions.yaml
 3
@@ -0,0 +1,71 @@
 4
+"debug":
 5
+  "description": "Collect debug data"
 6
+"clean-containers":
 7
+  "description": "Garbage collect non-running containers"
 8
+"clean-images":
 9
+  "description": "Garbage collect non-running images"
10
+  "options":
11
+    "untagged":
12
+      "type": "boolean"
13
+      "description": "Only remove untagged"
14
+      "default": !!bool "true"
15
+"pause":
16
+  "description": |
17
+    Cordon the unit, draining all active workloads.
18
+  "params":
19
+    "delete-local-data":
20
+      "type": "boolean"
21
+      "description": "Force deletion of local storage to enable a drain"
22
+      "default": !!bool "false"
23
+    "force":
24
+      "type": "boolean"
25
+      "description": |
26
+        Continue even if there are pods not managed by a RC, RS, Job, DS or SS
27
+      "default": !!bool "false"
28
+"resume":
29
+  "description": |
30
+    UnCordon the unit, enabling workload scheduling.
31
+"microbot":
32
+  "description": "Launch microbot containers"
33
+  "params":
34
+    "replicas":
35
+      "type": "integer"
36
+      "default": !!int "3"
37
+      "description": "Number of microbots to launch in Kubernetes."
38
+    "delete":
39
+      "type": "boolean"
40
+      "default": !!bool "false"
41
+      "description": "Remove a microbots deployment, service, and ingress if True."
42
+"upgrade":
43
+  "description": "Upgrade the kubernetes snaps"
44
+"registry":
45
+  "description": "Create a private Docker registry"
46
+  "params":
47
+    "htpasswd":
48
+      "type": "string"
49
+      "description": "base64 encoded htpasswd file used for authentication."
50
+    "htpasswd-plain":
51
+      "type": "string"
52
+      "description": "base64 encoded plaintext version of the htpasswd file, needed\
53
+        \ by docker daemons to authenticate to the registry."
54
+    "tlscert":
55
+      "type": "string"
56
+      "description": "base64 encoded TLS certificate for the registry. Common Name\
57
+        \ must match the domain name of the registry."
58
+    "tlskey":
59
+      "type": "string"
60
+      "description": "base64 encoded TLS key for the registry."
61
+    "domain":
62
+      "type": "string"
63
+      "description": "The domain name for the registry. Must match the Common Name\
64
+        \ of the certificate."
65
+    "ingress":
66
+      "type": "boolean"
67
+      "default": !!bool "false"
68
+      "description": "Create an Ingress resource for the registry (or delete resource\
69
+        \ object if \"delete\" is True)"
70
+    "delete":
71
+      "type": "boolean"
72
+      "default": !!bool "false"
73
+      "description": "Remove a registry replication controller, service, and ingress\
74
+        \ if True."
Back to file index

actions/clean-containers

1
--- 
2
+++ actions/clean-containers
3
@@ -0,0 +1,5 @@
4
+#!/bin/bash
5
+
6
+# Destructive action - removes all non-running containers from the host
7
+
8
+docker rm $(docker ps -aq)
Back to file index

actions/clean-images

 1
--- 
 2
+++ actions/clean-images
 3
@@ -0,0 +1,25 @@
 4
+#!/bin/bash
 5
+
 6
+# Destructive action - Destroys images on the host that are not running
 7
+
 8
+untagged=$(action-get untagged)
 9
+images=$(docker images | grep "^<none>" | awk '{print $3}')
10
+all_images=$(docker images -aq)
11
+
12
+if [[ ! -z "$images" && "$untagged" == "True" ]]; then
13
+    echo "Removing untagged images"
14
+    docker rmi $images
15
+    exit 0
16
+fi
17
+
18
+if [[ ! -z "$all_images" && "$untagged" ]]; then
19
+    echo "Removing all non-running images"
20
+    docker rmi $all_images
21
+    ret=$?
22
+    if [ "$ret" -gt 0 ]; then
23
+    echo "Not all containers removed, perhaps you need to juju action do $JUJU_UNIT_NAME clean-containers first?"
24
+    action-set response.msg="Not all containers removed, perhaps you need to juju action do $JUJU_UNIT_NAME clean-containers first?"
25
+        action-set response.result=$(docker rmi $all_images)
26
+    fi
27
+    exit 0
28
+fi
Back to file index

actions/debug

 1
--- 
 2
+++ actions/debug
 3
@@ -0,0 +1,92 @@
 4
+#!/usr/bin/python3
 5
+
 6
+import os
 7
+import subprocess
 8
+import tarfile
 9
+import tempfile
10
+import traceback
11
+from contextlib import contextmanager
12
+from datetime import datetime
13
+from charmhelpers.core.hookenv import action_set, local_unit
14
+
15
+archive_dir = None
16
+log_file = None
17
+
18
+
19
+@contextmanager
20
+def archive_context():
21
+    """ Open a context with a new temporary directory.
22
+
23
+    When the context closes, the directory is archived, and the archive
24
+    location is added to Juju action output. """
25
+    global archive_dir
26
+    global log_file
27
+    with tempfile.TemporaryDirectory() as temp_dir:
28
+        name = "debug-" + datetime.now().strftime("%Y%m%d%H%M%S")
29
+        archive_dir = os.path.join(temp_dir, name)
30
+        os.makedirs(archive_dir)
31
+        with open("%s/debug.log" % archive_dir, "w") as log_file:
32
+            yield
33
+        os.chdir(temp_dir)
34
+        tar_path = "/home/ubuntu/%s.tar.gz" % name
35
+        with tarfile.open(tar_path, "w:gz") as f:
36
+            f.add(name)
37
+        action_set({
38
+            "path": tar_path,
39
+            "command": "juju scp %s:%s ." % (local_unit(), tar_path),
40
+            "message": " ".join([
41
+                "Archive has been created on unit %s." % local_unit(),
42
+                "Use the juju scp command to copy it to your local machine."
43
+            ])
44
+        })
45
+
46
+
47
+def log(msg):
48
+    """ Log a message that will be included in the debug archive.
49
+
50
+    Must be run within archive_context """
51
+    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
52
+    for line in str(msg).splitlines():
53
+        log_file.write(timestamp + " | " + line.rstrip() + "\n")
54
+
55
+
56
+def run_script(script):
57
+    """ Run a single script. Must be run within archive_context """
58
+    log("Running script: " + script)
59
+    script_dir = os.path.join(archive_dir, script)
60
+    os.makedirs(script_dir)
61
+    env = os.environ.copy()
62
+    env["PYTHONPATH"] = "lib"  # allow same imports as reactive code
63
+    env["DEBUG_SCRIPT_DIR"] = script_dir
64
+    with open(script_dir + "/stdout", "w") as stdout:
65
+        with open(script_dir + "/stderr", "w") as stderr:
66
+            process = subprocess.Popen(
67
+                "debug-scripts/" + script,
68
+                stdout=stdout, stderr=stderr, env=env
69
+            )
70
+            exit_code = process.wait()
71
+    if exit_code != 0:
72
+        log("ERROR: %s failed with exit code %d" % (script, exit_code))
73
+
74
+
75
+def run_all_scripts():
76
+    """ Run all scripts. For the sake of robustness, log and ignore any
77
+    exceptions that occur.
78
+
79
+    Must be run within archive_context """
80
+    scripts = os.listdir("debug-scripts")
81
+    for script in scripts:
82
+        try:
83
+            run_script(script)
84
+        except:
85
+            log(traceback.format_exc())
86
+
87
+
88
+def main():
89
+    """ Open an archive context and run all scripts. """
90
+    with archive_context():
91
+        run_all_scripts()
92
+
93
+
94
+if __name__ == "__main__":
95
+    main()
Back to file index

actions/microbot

 1
--- 
 2
+++ actions/microbot
 3
@@ -0,0 +1,73 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Copyright 2015 The Kubernetes Authors.
 7
+#
 8
+# Licensed under the Apache License, Version 2.0 (the "License");
 9
+# you may not use this file except in compliance with the License.
10
+# You may obtain a copy of the License at
11
+#
12
+#     http://www.apache.org/licenses/LICENSE-2.0
13
+#
14
+# Unless required by applicable law or agreed to in writing, software
15
+# distributed under the License is distributed on an "AS IS" BASIS,
16
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
17
+# See the License for the specific language governing permissions and
18
+# limitations under the License.
19
+
20
+import os
21
+import sys
22
+
23
+from charmhelpers.core.hookenv import action_get
24
+from charmhelpers.core.hookenv import action_set
25
+from charmhelpers.core.hookenv import unit_public_ip
26
+from charms.templating.jinja2 import render
27
+from subprocess import call
28
+
29
+os.environ['PATH'] += os.pathsep + os.path.join(os.sep, 'snap', 'bin')
30
+
31
+context = {}
32
+context['replicas'] = action_get('replicas')
33
+context['delete'] = action_get('delete')
34
+context['public_address'] = unit_public_ip()
35
+
36
+if not context['replicas']:
37
+    context['replicas'] = 3
38
+
39
+# Declare a kubectl template when invoking kubectl
40
+kubectl = ['kubectl', '--kubeconfig=/root/cdk/kubeconfig']
41
+
42
+# Remove deployment if requested
43
+if context['delete']:
44
+    service_del = kubectl + ['delete', 'svc', 'microbot']
45
+    service_response = call(service_del)
46
+    deploy_del = kubectl + ['delete', 'deployment', 'microbot']
47
+    deploy_response = call(deploy_del)
48
+    ingress_del = kubectl + ['delete', 'ing', 'microbot-ingress']
49
+    ingress_response = call(ingress_del)
50
+
51
+    if ingress_response != 0:
52
+        action_set({'microbot-ing':
53
+                   'Failed removal of microbot ingress resource.'})
54
+    if deploy_response != 0:
55
+        action_set({'microbot-deployment':
56
+                   'Failed removal of microbot deployment resource.'})
57
+    if service_response != 0:
58
+        action_set({'microbot-service':
59
+                   'Failed removal of microbot service resource.'})
60
+    sys.exit(0)
61
+
62
+# Creation request
63
+
64
+render('microbot-example.yaml', '/root/cdk/addons/microbot.yaml',
65
+       context)
66
+
67
+create_command = kubectl + ['create', '-f',
68
+                            '/root/cdk/addons/microbot.yaml']
69
+
70
+create_response = call(create_command)
71
+
72
+if create_response == 0:
73
+    action_set({'address':
74
+               'microbot.{}.xip.io'.format(context['public_address'])})
75
+else:
76
+    action_set({'microbot-create': 'Failed microbot creation.'})
Back to file index

actions/pause

 1
--- 
 2
+++ actions/pause
 3
@@ -0,0 +1,28 @@
 4
+#!/bin/bash
 5
+
 6
+set -ex
 7
+
 8
+export PATH=$PATH:/snap/bin
 9
+
10
+DELETE_LOCAL_DATA=$(action-get delete-local-data)
11
+FORCE=$(action-get force)
12
+
13
+# placeholder for additional flags to the command
14
+export EXTRA_FLAGS=""
15
+
16
+# Determine if we have extra flags
17
+if [[ "${DELETE_LOCAL_DATA}" == "True" || "${DELETE_LOCAL_DATA}" == "true" ]]; then
18
+  EXTRA_FLAGS="${EXTRA_FLAGS} --delete-local-data=true"
19
+fi
20
+
21
+if [[ "${FORCE}" == "True" || "${FORCE}" == "true" ]]; then
22
+  EXTRA_FLAGS="${EXTRA_FLAGS} --force"
23
+fi
24
+
25
+
26
+# Cordon and drain the unit
27
+kubectl --kubeconfig=/root/cdk/kubeconfig cordon $(hostname)
28
+kubectl --kubeconfig=/root/cdk/kubeconfig drain $(hostname) ${EXTRA_FLAGS}
29
+
30
+# Set status to indicate the unit is paused and under maintenance.
31
+status-set 'waiting' 'Kubernetes unit paused'
Back to file index

actions/registry

  1
--- 
  2
+++ actions/registry
  3
@@ -0,0 +1,136 @@
  4
+#!/usr/bin/python3
  5
+#
  6
+# For usage examples, see README.md
  7
+#
  8
+# TODO
  9
+#
 10
+# - make the action idempotent (i.e. if you run it multiple times, the first
 11
+# run will create/delete the registry, and the rest will be a no-op and won't
 12
+# error out)
 13
+#
 14
+# - take only a plain authentication file, and create the encrypted version in
 15
+# the action
 16
+#
 17
+# - validate the parameters (make sure tlscert is a certificate, that tlskey is a
 18
+# proper key, etc)
 19
+#
 20
+# - when https://bugs.launchpad.net/juju/+bug/1661015 is fixed, handle the
 21
+# base64 encoding the parameters in the action itself
 22
+
 23
+import os
 24
+import sys
 25
+
 26
+from base64 import b64encode
 27
+
 28
+from charmhelpers.core.hookenv import action_get
 29
+from charmhelpers.core.hookenv import action_set
 30
+from charms.templating.jinja2 import render
 31
+from subprocess import call
 32
+
 33
+os.environ['PATH'] += os.pathsep + os.path.join(os.sep, 'snap', 'bin')
 34
+
 35
+deletion = action_get('delete')
 36
+
 37
+context = {}
 38
+
 39
+# These config options must be defined in the case of a creation
 40
+param_error = False
 41
+for param in ('tlscert', 'tlskey', 'domain', 'htpasswd', 'htpasswd-plain'):
 42
+    value = action_get(param)
 43
+    if not value and not deletion:
 44
+        key = "registry-create-parameter-{}".format(param)
 45
+        error = "failure, parameter {} is required".format(param)
 46
+        action_set({key: error})
 47
+        param_error = True
 48
+
 49
+    context[param] = value
 50
+
 51
+# Create the dockercfg template variable
 52
+dockercfg = '{"%s": {"auth": "%s", "email": "root@localhost"}}' % \
 53
+            (context['domain'], context['htpasswd-plain'])
 54
+context['dockercfg'] = b64encode(dockercfg.encode()).decode('ASCII')
 55
+
 56
+if param_error:
 57
+    sys.exit(0)
 58
+
 59
+# This one is either true or false, no need to check if it has a "good" value.
 60
+context['ingress'] = action_get('ingress')
 61
+
 62
+# Declare a kubectl template when invoking kubectl
 63
+kubectl = ['kubectl', '--kubeconfig=/root/cdk/kubeconfig']
 64
+
 65
+# Remove deployment if requested
 66
+if deletion:
 67
+    resources = ['svc/kube-registry', 'rc/kube-registry-v0', 'secrets/registry-tls-data',
 68
+                 'secrets/registry-auth-data', 'secrets/registry-access']
 69
+
 70
+    if action_get('ingress'):
 71
+        resources.append('ing/registry-ing')
 72
+
 73
+    delete_command = kubectl + ['delete', '--ignore-not-found=true'] + resources
 74
+    delete_response = call(delete_command)
 75
+    if delete_response == 0:
 76
+        action_set({'registry-delete': 'success'})
 77
+    else:
 78
+        action_set({'registry-delete': 'failure'})
 79
+
 80
+    sys.exit(0)
 81
+
 82
+# Creation request
 83
+render('registry.yaml', '/root/cdk/addons/registry.yaml',
 84
+       context)
 85
+
 86
+create_command = kubectl + ['create', '-f',
 87
+                            '/root/cdk/addons/registry.yaml']
 88
+
 89
+create_response = call(create_command)
 90
+
 91
+if create_response == 0:
 92
+    action_set({'registry-create': 'success'})
 93
+
 94
+    # Create a ConfigMap if it doesn't exist yet, else patch it.
 95
+    # A ConfigMap is needed to change the default value for nginx' client_max_body_size.
 96
+    # The default is 1MB, and this is the maximum size of images that can be
 97
+    # pushed on the registry. 1MB images aren't useful, so we bump this value to 1024MB.
 98
+    cm_name = 'nginx-load-balancer-conf'
 99
+    check_cm_command = kubectl + ['get', 'cm', cm_name]
100
+    check_cm_response = call(check_cm_command)
101
+
102
+    if check_cm_response == 0:
103
+        # There is an existing ConfigMap, patch it
104
+        patch = '{"data":{"body-size":"1024m"}}'
105
+        patch_cm_command = kubectl + ['patch', 'cm', cm_name, '-p', patch]
106
+        patch_cm_response = call(patch_cm_command)
107
+
108
+        if patch_cm_response == 0:
109
+            action_set({'configmap-patch': 'success'})
110
+        else:
111
+            action_set({'configmap-patch': 'failure'})
112
+
113
+    else:
114
+        # No existing ConfigMap, create it
115
+        render('registry-configmap.yaml', '/root/cdk/addons/registry-configmap.yaml',
116
+               context)
117
+        create_cm_command = kubectl + ['create', '-f', '/root/cdk/addons/registry-configmap.yaml']
118
+        create_cm_response = call(create_cm_command)
119
+
120
+        if create_cm_response == 0:
121
+            action_set({'configmap-create': 'success'})
122
+        else:
123
+            action_set({'configmap-create': 'failure'})
124
+
125
+    # Patch the "default" serviceaccount with an imagePullSecret.
126
+    # This will allow the docker daemons to authenticate to our private
127
+    # registry automatically
128
+    patch = '{"imagePullSecrets":[{"name":"registry-access"}]}'
129
+    patch_sa_command = kubectl + ['patch', 'sa', 'default', '-p', patch]
130
+    patch_sa_response = call(patch_sa_command)
131
+
132
+    if patch_sa_response == 0:
133
+        action_set({'serviceaccount-patch': 'success'})
134
+    else:
135
+        action_set({'serviceaccount-patch': 'failure'})
136
+
137
+
138
+else:
139
+    action_set({'registry-create': 'failure'})
Back to file index

actions/resume

 1
--- 
 2
+++ actions/resume
 3
@@ -0,0 +1,8 @@
 4
+#!/bin/bash
 5
+
 6
+set -ex
 7
+
 8
+export PATH=$PATH:/snap/bin
 9
+
10
+kubectl --kubeconfig=/root/cdk/kubeconfig uncordon $(hostname)
11
+status-set 'active' 'Kubernetes unit resumed'
Back to file index

actions/upgrade

1
--- 
2
+++ actions/upgrade
3
@@ -0,0 +1,5 @@
4
+#!/bin/sh
5
+set -eux
6
+
7
+charms.reactive set_state kubernetes-worker.snaps.upgrade-specified
8
+exec hooks/config-changed
Back to file index

bin/layer_option

 1
--- 
 2
+++ bin/layer_option
 3
@@ -0,0 +1,24 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+import sys
 7
+sys.path.append('lib')
 8
+
 9
+import argparse
10
+from charms.layer import options
11
+
12
+
13
+parser = argparse.ArgumentParser(description='Access layer options.')
14
+parser.add_argument('section',
15
+                    help='the section, or layer, the option is from')
16
+parser.add_argument('option',
17
+                    help='the option to access')
18
+
19
+args = parser.parse_args()
20
+value = options(args.section).get(args.option, '')
21
+if isinstance(value, bool):
22
+    sys.exit(0 if value else 1)
23
+elif isinstance(value, list):
24
+    for val in value:
25
+        print(val)
26
+else:
27
+    print(value)
Back to file index

config.yaml

  1
--- 
  2
+++ config.yaml
  3
@@ -0,0 +1,112 @@
  4
+# Copyright 2016 Canonical Ltd.
  5
+#
  6
+# This file is part of the Snap layer for Juju.
  7
+#
  8
+# Licensed under the Apache License, Version 2.0 (the "License");
  9
+# you may not use this file except in compliance with the License.
 10
+# You may obtain a copy of the License at
 11
+#
 12
+#  http://www.apache.org/licenses/LICENSE-2.0
 13
+#
 14
+# Unless required by applicable law or agreed to in writing, software
 15
+# distributed under the License is distributed on an "AS IS" BASIS,
 16
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 17
+# See the License for the specific language governing permissions and
 18
+# limitations under the License.
 19
+"options":
 20
+  "snap_proxy":
 21
+    "description": "HTTP/HTTPS web proxy for Snappy to use when accessing the snap\
 22
+      \ store.\n"
 23
+    "type": "string"
 24
+    "default": ""
 25
+  "nagios_context":
 26
+    "default": "juju"
 27
+    "type": "string"
 28
+    "description": |
 29
+      Used by the nrpe subordinate charms.
 30
+      A string that will be prepended to instance name to set the host name
 31
+      in nagios. So for instance the hostname would be something like:
 32
+          juju-myservice-0
 33
+      If you're running multiple environments with the same services in them
 34
+      this allows you to differentiate between them.
 35
+  "nagios_servicegroups":
 36
+    "default": ""
 37
+    "type": "string"
 38
+    "description": |
 39
+      A comma-separated list of nagios servicegroups.
 40
+      If left empty, the nagios_context will be used as the servicegroup
 41
+  "docker-opts":
 42
+    "type": "string"
 43
+    "default": ""
 44
+    "description": |
 45
+      Extra options to pass to the docker daemon. e.g. --insecure-registry
 46
+  "enable-cgroups":
 47
+    "type": "boolean"
 48
+    "default": !!bool "false"
 49
+    "description": |
 50
+      Enable GRUB cgroup overrides cgroup_enable=memory swapaccount=1. WARNING
 51
+      changing this option will reboot the host - use with caution on production
 52
+      services
 53
+  "http_proxy":
 54
+    "type": "string"
 55
+    "default": ""
 56
+    "description": |
 57
+      URL to use for HTTP_PROXY to be used by Docker. Only useful in closed
 58
+      environments where a proxy is the only option for routing to the
 59
+      registry to pull images
 60
+  "https_proxy":
 61
+    "type": "string"
 62
+    "default": ""
 63
+    "description": |
 64
+      URL to use for HTTPS_PROXY to be used by Docker. Only useful in closed
 65
+      environments where a proxy is the only option for routing to the
 66
+      registry to pull images
 67
+  "no_proxy":
 68
+    "type": "string"
 69
+    "default": ""
 70
+    "description": |
 71
+      Comma-separated list of destinations (either domain names or IP
 72
+      addresses) that should be accessed directly, as opposed to going
 73
+      through the proxy defined above.
 74
+  "cuda-version":
 75
+    "type": "string"
 76
+    "default": "8.0.61-1"
 77
+    "description": |
 78
+      The version of CUDA to be installed.
 79
+  "install-cuda":
 80
+    "type": "boolean"
 81
+    "default": !!bool "true"
 82
+    "description": |
 83
+      Install the CUDA binaries if capable hardware is present.
 84
+  "ingress":
 85
+    "type": "boolean"
 86
+    "default": !!bool "true"
 87
+    "description": |
 88
+      Deploy the default http backend and ingress controller to handle
 89
+      ingress requests.
 90
+  "labels":
 91
+    "type": "string"
 92
+    "default": ""
 93
+    "description": |
 94
+      Labels can be used to organize and to select subsets of nodes in the
 95
+      cluster. Declare node labels in key=value format, separated by spaces.
 96
+  "allow-privileged":
 97
+    "type": "string"
 98
+    "default": "auto"
 99
+    "description": |
100
+      Allow privileged containers to run on worker nodes. Supported values are
101
+      "true", "false", and "auto". If "true", kubelet will run in privileged
102
+      mode by default. If "false", kubelet will never run in privileged mode.
103
+      If "auto", kubelet will not run in privileged mode by default, but will
104
+      switch to privileged mode if gpu hardware is detected.
105
+  "channel":
106
+    "type": "string"
107
+    "default": "stable"
108
+    "description": |
109
+      Snap channel to install Kubernetes worker services from
110
+  "require-manual-upgrade":
111
+    "type": "boolean"
112
+    "default": !!bool "true"
113
+    "description": |
114
+      When true, worker services will not be upgraded until the user triggers
115
+      it manually by running the upgrade action.
Back to file index

copyright

 1
--- 
 2
+++ copyright
 3
@@ -0,0 +1,13 @@
 4
+Copyright 2016 The Kubernetes Authors.
 5
+
 6
+ Licensed under the Apache License, Version 2.0 (the "License");
 7
+ you may not use this file except in compliance with the License.
 8
+ You may obtain a copy of the License at
 9
+
10
+     http://www.apache.org/licenses/LICENSE-2.0
11
+
12
+ Unless required by applicable law or agreed to in writing, software
13
+ distributed under the License is distributed on an "AS IS" BASIS,
14
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
+ See the License for the specific language governing permissions and
16
+ limitations under the License.
Back to file index

debug-scripts/charm-unitdata

 1
--- 
 2
+++ debug-scripts/charm-unitdata
 3
@@ -0,0 +1,12 @@
 4
+#!/usr/bin/python3
 5
+
 6
+import debug_script
 7
+import json
 8
+from charmhelpers.core import unitdata
 9
+
10
+kv = unitdata.kv()
11
+data = kv.getrange("")
12
+
13
+with debug_script.open_file("unitdata.json", "w") as f:
14
+  json.dump(data, f, indent=2)
15
+  f.write("\n")
Back to file index

debug-scripts/docker

 1
--- 
 2
+++ debug-scripts/docker
 3
@@ -0,0 +1,12 @@
 4
+#!/bin/sh
 5
+set -ux
 6
+
 7
+docker version > $DEBUG_SCRIPT_DIR/docker-version
 8
+docker info > $DEBUG_SCRIPT_DIR/docker-info
 9
+docker ps -a > $DEBUG_SCRIPT_DIR/docker-ps
10
+docker images -a > $DEBUG_SCRIPT_DIR/docker-images
11
+
12
+mkdir $DEBUG_SCRIPT_DIR/container-logs
13
+for container in $(docker ps -a --format '{{.Names}}'); do
14
+  docker logs $container > $DEBUG_SCRIPT_DIR/container-logs/$container 2>&1
15
+done
Back to file index

debug-scripts/filesystem

 1
--- 
 2
+++ debug-scripts/filesystem
 3
@@ -0,0 +1,17 @@
 4
+#!/bin/sh
 5
+set -ux
 6
+
 7
+# report file system disk space usage
 8
+df -h > $DEBUG_SCRIPT_DIR/df-h
 9
+# estimate file space usage
10
+du -h / > $DEBUG_SCRIPT_DIR/du-h 2>&1
11
+# list the mounted filesystems
12
+mount > $DEBUG_SCRIPT_DIR/mount
13
+# list the mounted systems with ascii trees
14
+findmnt -A > $DEBUG_SCRIPT_DIR/findmnt
15
+# list block devices
16
+lsblk > $DEBUG_SCRIPT_DIR/lsblk
17
+# list open files
18
+lsof > $DEBUG_SCRIPT_DIR/lsof 2>&1
19
+# list local system locks
20
+lslocks > $DEBUG_SCRIPT_DIR/lslocks
Back to file index

debug-scripts/inotify

 1
--- 
 2
+++ debug-scripts/inotify
 3
@@ -0,0 +1,8 @@
 4
+#!/bin/sh
 5
+set -ux
 6
+
 7
+# We had to bump inotify limits once in the past, hence why this oddly specific
 8
+# script lives here in kubernetes-worker.
 9
+
10
+sysctl fs.inotify > $DEBUG_SCRIPT_DIR/sysctl-limits
11
+ls -l /proc/*/fd/* | grep inotify > $DEBUG_SCRIPT_DIR/inotify-instances
Back to file index

debug-scripts/juju-logs

1
--- 
2
+++ debug-scripts/juju-logs
3
@@ -0,0 +1,4 @@
4
+#!/bin/sh
5
+set -ux
6
+
7
+cp -v /var/log/juju/* $DEBUG_SCRIPT_DIR
Back to file index

debug-scripts/kubectl

 1
--- 
 2
+++ debug-scripts/kubectl
 3
@@ -0,0 +1,15 @@
 4
+#!/bin/sh
 5
+set -ux
 6
+
 7
+export PATH=$PATH:/snap/bin
 8
+
 9
+alias kubectl="kubectl --kubeconfig=/root/cdk/kubeconfig"
10
+
11
+kubectl cluster-info > $DEBUG_SCRIPT_DIR/cluster-info
12
+kubectl cluster-info dump > $DEBUG_SCRIPT_DIR/cluster-info-dump
13
+for obj in pods svc ingress secrets pv pvc rc; do
14
+  kubectl describe $obj --all-namespaces > $DEBUG_SCRIPT_DIR/describe-$obj
15
+done
16
+for obj in nodes; do
17
+  kubectl describe $obj > $DEBUG_SCRIPT_DIR/describe-$obj
18
+done
Back to file index

debug-scripts/kubernetes-worker-services

 1
--- 
 2
+++ debug-scripts/kubernetes-worker-services
 3
@@ -0,0 +1,9 @@
 4
+#!/bin/sh
 5
+set -ux
 6
+
 7
+for service in kubelet kube-proxy; do
 8
+  systemctl status snap.$service.daemon > $DEBUG_SCRIPT_DIR/$service-systemctl-status
 9
+  journalctl -u snap.$service.daemon > $DEBUG_SCRIPT_DIR/$service-journal
10
+done
11
+
12
+# FIXME: get the snap config or something
Back to file index

debug-scripts/network

 1
--- 
 2
+++ debug-scripts/network
 3
@@ -0,0 +1,11 @@
 4
+#!/bin/sh
 5
+set -ux
 6
+
 7
+ifconfig -a > $DEBUG_SCRIPT_DIR/ifconfig
 8
+cp -v /etc/resolv.conf $DEBUG_SCRIPT_DIR/resolv.conf
 9
+cp -v /etc/network/interfaces $DEBUG_SCRIPT_DIR/interfaces
10
+netstat -planut > $DEBUG_SCRIPT_DIR/netstat
11
+route -n > $DEBUG_SCRIPT_DIR/route
12
+iptables-save > $DEBUG_SCRIPT_DIR/iptables-save
13
+dig google.com > $DEBUG_SCRIPT_DIR/dig-google
14
+ping -w 2 -i 0.1 google.com > $DEBUG_SCRIPT_DIR/ping-google
Back to file index

debug-scripts/packages

 1
--- 
 2
+++ debug-scripts/packages
 3
@@ -0,0 +1,7 @@
 4
+#!/bin/sh
 5
+set -ux
 6
+
 7
+dpkg --list > $DEBUG_SCRIPT_DIR/dpkg-list
 8
+snap list > $DEBUG_SCRIPT_DIR/snap-list
 9
+pip2 list > $DEBUG_SCRIPT_DIR/pip2-list
10
+pip3 list > $DEBUG_SCRIPT_DIR/pip3-list
Back to file index

debug-scripts/sysctl

1
--- 
2
+++ debug-scripts/sysctl
3
@@ -0,0 +1,4 @@
4
+#!/bin/sh
5
+set -ux
6
+
7
+sysctl -a > $DEBUG_SCRIPT_DIR/sysctl
Back to file index

debug-scripts/systemd

 1
--- 
 2
+++ debug-scripts/systemd
 3
@@ -0,0 +1,10 @@
 4
+#!/bin/sh
 5
+set -ux
 6
+
 7
+systemctl --all > $DEBUG_SCRIPT_DIR/systemctl
 8
+journalctl > $DEBUG_SCRIPT_DIR/journalctl
 9
+systemd-analyze time > $DEBUG_SCRIPT_DIR/systemd-analyze-time
10
+systemd-analyze blame > $DEBUG_SCRIPT_DIR/systemd-analyze-blame
11
+systemd-analyze critical-chain > $DEBUG_SCRIPT_DIR/systemd-analyze-critical-chain
12
+systemd-analyze plot > $DEBUG_SCRIPT_DIR/systemd-analyze-plot.svg
13
+systemd-analyze dump > $DEBUG_SCRIPT_DIR/systemd-analyze-dump
Back to file index

exec.d/docker-compose/charm-pre-install

1
--- 
2
+++ exec.d/docker-compose/charm-pre-install
3
@@ -0,0 +1,2 @@
4
+# This stubs out charm-pre-install coming from layer-docker as a workaround for 
5
+# offline installs until https://github.com/juju/charm-tools/issues/301 is fixed.
Back to file index

exec.d/vmware-patch/charm-pre-install

 1
--- 
 2
+++ exec.d/vmware-patch/charm-pre-install
 3
@@ -0,0 +1,17 @@
 4
+#!/bin/bash
 5
+MY_HOSTNAME=$(hostname)
 6
+
 7
+: ${JUJU_UNIT_NAME:=`uuidgen`}
 8
+
 9
+
10
+if [ "${MY_HOSTNAME}" == "ubuntuguest" ]; then
11
+    juju-log "Detected broken vsphere integration. Applying hostname override"
12
+
13
+    FRIENDLY_HOSTNAME=$(echo $JUJU_UNIT_NAME | tr / -)
14
+    juju-log "Setting hostname to $FRIENDLY_HOSTNAME"
15
+    if [ ! -f /etc/hostname.orig ]; then
16
+      mv /etc/hostname /etc/hostname.orig
17
+    fi
18
+    echo "${FRIENDLY_HOSTNAME}" > /etc/hostname
19
+    hostname $FRIENDLY_HOSTNAME
20
+fi
Back to file index

hooks/certificates-relation-broken

 1
--- 
 2
+++ hooks/certificates-relation-broken
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/certificates-relation-changed

 1
--- 
 2
+++ hooks/certificates-relation-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/certificates-relation-departed

 1
--- 
 2
+++ hooks/certificates-relation-departed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/certificates-relation-joined

 1
--- 
 2
+++ hooks/certificates-relation-joined
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/cni-relation-broken

 1
--- 
 2
+++ hooks/cni-relation-broken
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/cni-relation-changed

 1
--- 
 2
+++ hooks/cni-relation-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/cni-relation-departed

 1
--- 
 2
+++ hooks/cni-relation-departed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/cni-relation-joined

 1
--- 
 2
+++ hooks/cni-relation-joined
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/collect-metrics

 1
--- 
 2
+++ hooks/collect-metrics
 3
@@ -0,0 +1,38 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+import yaml
11
+import os
12
+from subprocess import check_output, check_call
13
+
14
+
15
+def build_command(doc):
16
+    values = {}
17
+    metrics = doc.get("metrics", {})
18
+    for metric, mdoc in metrics.items():
19
+        cmd = mdoc.get("command")
20
+        if cmd:
21
+            value = check_output(cmd, shell=True, universal_newlines=True)
22
+            value = value.strip()
23
+            if value:
24
+                values[metric] = value
25
+
26
+    if not values:
27
+        return None
28
+    command = ["add-metric"]
29
+    for metric, value in values.items():
30
+        command.append("%s=%s" % (metric, value))
31
+    return command
32
+
33
+
34
+if __name__ == '__main__':
35
+    charm_dir = os.path.dirname(os.path.abspath(os.path.join(__file__, "..")))
36
+    metrics_yaml = os.path.join(charm_dir, "metrics.yaml")
37
+    with open(metrics_yaml) as f:
38
+        doc = yaml.load(f)
39
+        command = build_command(doc)
40
+        if command:
41
+            check_call(command)
Back to file index

hooks/config-changed

 1
--- 
 2
+++ hooks/config-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/dockerhost-relation-broken

 1
--- 
 2
+++ hooks/dockerhost-relation-broken
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/dockerhost-relation-changed

 1
--- 
 2
+++ hooks/dockerhost-relation-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/dockerhost-relation-departed

 1
--- 
 2
+++ hooks/dockerhost-relation-departed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/dockerhost-relation-joined

 1
--- 
 2
+++ hooks/dockerhost-relation-joined
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/hook.template

 1
--- 
 2
+++ hooks/hook.template
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/install

 1
--- 
 2
+++ hooks/install
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/kube-api-endpoint-relation-broken

 1
--- 
 2
+++ hooks/kube-api-endpoint-relation-broken
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/kube-api-endpoint-relation-changed

 1
--- 
 2
+++ hooks/kube-api-endpoint-relation-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/kube-api-endpoint-relation-departed

 1
--- 
 2
+++ hooks/kube-api-endpoint-relation-departed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/kube-api-endpoint-relation-joined

 1
--- 
 2
+++ hooks/kube-api-endpoint-relation-joined
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/kube-control-relation-broken

 1
--- 
 2
+++ hooks/kube-control-relation-broken
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/kube-control-relation-changed

 1
--- 
 2
+++ hooks/kube-control-relation-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/kube-control-relation-departed

 1
--- 
 2
+++ hooks/kube-control-relation-departed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/kube-control-relation-joined

 1
--- 
 2
+++ hooks/kube-control-relation-joined
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/kube-dns-relation-broken

 1
--- 
 2
+++ hooks/kube-dns-relation-broken
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/kube-dns-relation-changed

 1
--- 
 2
+++ hooks/kube-dns-relation-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/kube-dns-relation-departed

 1
--- 
 2
+++ hooks/kube-dns-relation-departed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/kube-dns-relation-joined

 1
--- 
 2
+++ hooks/kube-dns-relation-joined
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/leader-elected

 1
--- 
 2
+++ hooks/leader-elected
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/leader-settings-changed

 1
--- 
 2
+++ hooks/leader-settings-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/nrpe-external-master-relation-broken

 1
--- 
 2
+++ hooks/nrpe-external-master-relation-broken
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/nrpe-external-master-relation-changed

 1
--- 
 2
+++ hooks/nrpe-external-master-relation-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/nrpe-external-master-relation-departed

 1
--- 
 2
+++ hooks/nrpe-external-master-relation-departed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/nrpe-external-master-relation-joined

 1
--- 
 2
+++ hooks/nrpe-external-master-relation-joined
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/relations/dockerhost/interface.yaml

1
--- 
2
+++ hooks/relations/dockerhost/interface.yaml
3
@@ -0,0 +1,4 @@
4
+name: dockerhost
5
+summary: Connection details for docker on a local unit
6
+version: 1
7
+repo: https://github.com/juju-solutions/interface-dockerhost.git
Back to file index

hooks/relations/dockerhost/provides.py

 1
--- 
 2
+++ hooks/relations/dockerhost/provides.py
 3
@@ -0,0 +1,24 @@
 4
+
 5
+from charms.reactive import hook
 6
+from charms.reactive import RelationBase
 7
+from charms.reactive import scopes
 8
+
 9
+
10
+class ProvidesDockerHost(RelationBase):
11
+    scope = scopes.GLOBAL
12
+
13
+    @hook('{provides:dockerhost}-relation-{joined,changed}')
14
+    def changed(self):
15
+        self.set_state('{relation_name}.connected')
16
+
17
+    @hook('{provides:dockerhost}-relation-{broken,departed}')
18
+    def broken(self):
19
+        self.remove_state('{relation_name}.connected')
20
+
21
+    def configure(self, url):
22
+        relation_info = {
23
+            'url': url,
24
+        }
25
+
26
+        self.set_remote(**relation_info)
27
+        self.set_state('{relation_name}.configured')
Back to file index

hooks/relations/dockerhost/requires.py

 1
--- 
 2
+++ hooks/relations/dockerhost/requires.py
 3
@@ -0,0 +1,25 @@
 4
+
 5
+from charms.reactive import hook
 6
+from charms.reactive import RelationBase
 7
+from charms.reactive import scopes
 8
+
 9
+
10
+class RequiresDockerHost(RelationBase):
11
+    scope = scopes.GLOBAL
12
+
13
+    auto_accessors = ['url']
14
+
15
+    @hook('{requires:dockerhost}-relation-{joined,changed}')
16
+    def changed(self):
17
+        conv = self.conversation()
18
+        if conv.get_remote('url'):
19
+            conv.set_state('{relation_name}.available')
20
+
21
+    @hook('{requires:dockerhost}-relation-{departed,broken}')
22
+    def broken(self):
23
+        conv = self.conversation()
24
+        conv.remove_state('{relation_name}.available')
25
+
26
+    def configuration(self):
27
+        conv = self.conversation()
28
+        return {k: conv.get_remote(k) for k in self.auto_accessors}
Back to file index
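
The dockerhost pair above is a minimal global-scope interface: the providing side publishes a docker endpoint URL with `configure(url)`, and the requiring side reads it back through `configuration()` once the `available` state is set. A sketch of how charm layers on each side might consume it (handler names, the relation name `dockerhost`, and the socket URL are illustrative, not taken from this charm):

```python
from charms.reactive import when
from charmhelpers.core import hookenv


# Providing side: publish the local docker endpoint once the relation joins.
@when('dockerhost.connected')
def publish_docker_url(dockerhost):
    dockerhost.configure('unix:///var/run/docker.sock')


# Requiring side: act once the remote side has published a URL.
@when('dockerhost.available')
def use_docker_url(dockerhost):
    url = dockerhost.configuration()['url']
    hookenv.log('docker endpoint is {}'.format(url))
```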

hooks/relations/http/README.md

 1
--- 
 2
+++ hooks/relations/http/README.md
 3
@@ -0,0 +1,68 @@
 4
+# Overview
 5
+
 6
+This interface layer implements the basic form of the `http` interface protocol,
 7
+which is used for things such as reverse-proxies, load-balanced servers, REST
 8
+service discovery, et cetera.
 9
+
10
+# Usage
11
+
12
+## Provides
13
+
14
+By providing the `http` interface, your charm is providing an HTTP server that
15
+can be load-balanced, reverse-proxied, used as a REST endpoint, etc.
16
+
17
+Your charm need only provide the port on which it is serving its content, as
18
+soon as the `{relation_name}.available` state is set:
19
+
20
+```python
21
+@when('website.available')
22
+def configure_website(website):
23
+    website.configure(port=hookenv.config('port'))
24
+```
25
+
26
+## Requires
27
+
28
+By requiring the `http` interface, your charm is consuming one or more HTTP
29
+servers, e.g. as a REST endpoint or to load-balance a set of servers.
30
+
31
+Your charm should respond to the `{relation_name}.available` state, which
32
+indicates that there is at least one HTTP server connected.
33
+
34
+The `services()` method returns a list of available HTTP services and their
35
+associated hosts and ports.
36
+
37
+The return value is a list of dicts of the following form:
38
+
39
+```python
40
+[
41
+    {
42
+        'service_name': name_of_service,
43
+        'hosts': [
44
+            {
45
+                'hostname': address_of_host,
46
+                'port': port_for_host,
47
+            },
48
+            # ...
49
+        ],
50
+    },
51
+    # ...
52
+]
53
+```
54
+
55
+A trivial example of handling this interface would be:
56
+
57
+```python
58
+from charms.reactive.helpers import data_changed
59
+
60
+@when('reverseproxy.available')
61
+def update_reverse_proxy_config(reverseproxy):
62
+    services = reverseproxy.services()
63
+    if not data_changed('reverseproxy.services', services):
64
+        return
65
+    for service in services:
66
+        for host in service['hosts']:
67
+            hookenv.log('{} has a unit {}:{}'.format(
68
+                service['service_name'],
69
+                host['hostname'],
70
+                host['port']))
71
+```
Back to file index

hooks/relations/http/interface.yaml

1
--- 
2
+++ hooks/relations/http/interface.yaml
3
@@ -0,0 +1,4 @@
4
+name: http
5
+summary: Basic HTTP interface
6
+version: 1
7
+repo: https://git.launchpad.net/~bcsaller/charms/+source/http
Back to file index

hooks/relations/http/provides.py

 1
--- 
 2
+++ hooks/relations/http/provides.py
 3
@@ -0,0 +1,28 @@
 4
+from charmhelpers.core import hookenv
 5
+from charms.reactive import hook
 6
+from charms.reactive import RelationBase
 7
+from charms.reactive import scopes
 8
+
 9
+
10
+class HttpProvides(RelationBase):
11
+    scope = scopes.GLOBAL
12
+
13
+    @hook('{provides:http}-relation-{joined,changed}')
14
+    def changed(self):
15
+        self.set_state('{relation_name}.available')
16
+
17
+    @hook('{provides:http}-relation-{broken,departed}')
18
+    def broken(self):
19
+        self.remove_state('{relation_name}.available')
20
+
21
+    def configure(self, port, private_address=None, hostname=None):
22
+        if not hostname:
23
+            hostname = hookenv.unit_get('private-address')
24
+        if not private_address:
25
+            private_address = hookenv.unit_get('private-address')
26
+        relation_info = {
27
+            'hostname': hostname,
28
+            'private-address': private_address,
29
+            'port': port,
30
+        }
31
+        self.set_remote(**relation_info)
Back to file index

hooks/relations/http/requires.py

 1
--- 
 2
+++ hooks/relations/http/requires.py
 3
@@ -0,0 +1,58 @@
 4
+from charms.reactive import hook
 5
+from charms.reactive import RelationBase
 6
+from charms.reactive import scopes
 7
+
 8
+
 9
+class HttpRequires(RelationBase):
10
+    scope = scopes.UNIT
11
+
12
+    @hook('{requires:http}-relation-{joined,changed}')
13
+    def changed(self):
14
+        conv = self.conversation()
15
+        if conv.get_remote('port'):
16
+            # this unit's conversation has a port, so
17
+            # it is part of the set of available units
18
+            conv.set_state('{relation_name}.available')
19
+
20
+    @hook('{requires:http}-relation-{departed,broken}')
21
+    def broken(self):
22
+        conv = self.conversation()
23
+        conv.remove_state('{relation_name}.available')
24
+
25
+    def services(self):
26
+        """
27
+        Returns a list of available HTTP services and their associated hosts
28
+        and ports.
29
+
30
+        The return value is a list of dicts of the following form::
31
+
32
+            [
33
+                {
34
+                    'service_name': name_of_service,
35
+                    'hosts': [
36
+                        {
37
+                            'hostname': address_of_host,
38
+                            'port': port_for_host,
39
+                        },
40
+                        # ...
41
+                    ],
42
+                },
43
+                # ...
44
+            ]
45
+        """
46
+        services = {}
47
+        for conv in self.conversations():
48
+            service_name = conv.scope.split('/')[0]
49
+            service = services.setdefault(service_name, {
50
+                'service_name': service_name,
51
+                'hosts': [],
52
+            })
53
+            host = conv.get_remote('hostname') or \
54
+                conv.get_remote('private-address')
55
+            port = conv.get_remote('port')
56
+            if host and port:
57
+                service['hosts'].append({
58
+                    'hostname': host,
59
+                    'port': port,
60
+                })
61
+        return [s for s in services.values() if s['hosts']]
Back to file index

hooks/relations/kube-control/README.md

  1
--- 
  2
+++ hooks/relations/kube-control/README.md
  3
@@ -0,0 +1,146 @@
  4
+# kube-control interface
  5
+
  6
+This interface provides communication between master and workers in a
  7
+Kubernetes cluster.
  8
+
  9
+
 10
+## Provides (kubernetes-master side)
 11
+
 12
+
 13
+### States
 14
+
 15
+* `kube-control.connected`
 16
+
 17
+  Enabled when a worker has joined the relation.
 18
+
 19
+* `kube-control.gpu.available`
 20
+
 21
+  Enabled when any worker has indicated that it is running in gpu mode.
 22
+
 23
+* `kube-control.departed`
 24
+
 25
+  Enabled when any worker has indicated that it is leaving the cluster.
 26
+
 27
+
 28
+* `kube-control.auth.requested`
 29
+
 30
+  Enabled when an authentication credential is requested. This state is
 31
+  temporary and will be removed once the unit's authentication request has
 32
+  been fulfilled.
 33
+
 34
+### Methods
 35
+
 36
+* `kube_control.set_dns(port, domain, sdn_ip)`
 37
+
 38
+  Sends DNS info to the connected worker(s).
 39
+
 40
+
 41
+* `kube_control.auth_user()`
 42
+
 43
+  Returns the requested username and group requested for authentication.
 44
+
 45
+* `kube_control.sign_auth_request(kubelet_token, proxy_token, client_token)`
 46
+
 47
+  Sends authentication tokens to the requesting unit for the requested user
 48
+  and kube-proxy services.
 49
+
 50
+* `kube_control.flush_departed()`
 51
+
 52
+  Returns the unit departing the kube_control relationship so you can do any
 53
+  post-removal cleanup, such as removing authentication tokens for the unit.
 54
+  Invoking this method will also remove the `kube-control.departed` state.
 55
+
 56
+### Examples
 57
+
 58
+```python
 59
+
 60
+@when('kube-control.connected')
 61
+def send_dns(kube_control):
 62
+    # send port, domain, sdn_ip to the remote side
 63
+    kube_control.set_dns(53, "cluster.local", "10.1.0.10")
 64
+
 65
+@when('kube-control.gpu.available')
 66
+def on_gpu_available(kube_control):
 67
+    # The remote side is gpu-enabled, handle it somehow
 68
+    assert kube_control.get_gpu() == True
 69
+
 70
+
 71
+@when('kube-control.departed')
 72
+@when('leadership.is_leader')
 73
+def flush_auth_for_departed(kube_control):
 74
+    ''' Unit has left the cluster and needs to have its authentication
 75
+    tokens removed from the token registry '''
 76
+    departing_unit = kube_control.flush_departed()
 77
+
 78
+```
 79
+
 80
+## Requires (kubernetes-worker side)
 81
+
 82
+
 83
+### States
 84
+
 85
+* `kube-control.connected`
 86
+
 87
+  Enabled when a master has joined the relation.
 88
+
 89
+* `kube-control.dns.available`
 90
+
 91
+  Enabled when DNS info is available from the master.
 92
+
 93
+* `kube-control.auth.available`
 94
+
 95
+  Enabled when authentication credentials are present from the master.
 96
+
 97
+### Methods
 98
+
 99
+* `kube_control.get_dns()`
100
+
101
+  Returns a dictionary of DNS info sent by the master. The keys in the
102
+  dict are: domain, private-address, sdn-ip, port.
103
+
104
+* `kube_control.set_gpu(enabled=True)`
105
+
106
+  Tell the master that we are gpu-enabled.
107
+
108
+*  `kube_control.get_auth_credentials()`
109
+
110
+  Returns a dict with the returned authentication credentials.
111
+
112
+*  `set_auth_request(kubelet, group='system:nodes')`
113
+
114
+  Issue an authentication request against the master to receive token based
115
+  auth credentials in return.
116
+
117
+### Examples
118
+
119
+```python
120
+
121
+@when('kube-control.dns.available')
122
+def on_dns_available(kube_control):
123
+    # Remote side has sent DNS info
124
+    dns = kube_control.get_dns()
125
+    print(dns['domain'])
126
+    print(dns['private-address'])
127
+    print(dns['sdn-ip'])
128
+    print(dns['port'])
129
+
130
+@when('kube-control.connected')
131
+def send_gpu(kube_control):
132
+    # Tell the master that we're gpu-enabled
133
+    kube_control.set_gpu(True)
134
+
135
+@when('kube-control.auth.available')
136
+def display_auth_tokens(kube_control):
137
+    # Remote side has sent auth info
138
+    auth = kube_control.get_auth_credentials()
139
+    print(auth['kubelet_token'])
140
+    print(auth['proxy_token'])
141
+    print(auth['client_token'])
142
+
143
+@when('kube-control.connected')
144
+@when_not('kube-control.auth.available')
145
+def request_auth_credentials(kube_control):
146
+    # Request an admin user with sudo level access named 'root'
147
+    kube_control.set_auth_request('root', group='system:masters')
148
+
149
+```
Back to file index

hooks/relations/kube-control/interface.yaml

1
--- 
2
+++ hooks/relations/kube-control/interface.yaml
3
@@ -0,0 +1,4 @@
4
+name: kube-control
5
+summary: Provides master-worker communication.
6
+version: 1
7
+maintainer: "Tim Van Steenburgh <tim.van.steenburgh@canonical.com>"
Back to file index

hooks/relations/kube-control/provides.py

  1
--- 
  2
+++ hooks/relations/kube-control/provides.py
  3
@@ -0,0 +1,106 @@
  4
+#!/usr/bin/python
  5
+# Licensed under the Apache License, Version 2.0 (the "License");
  6
+# you may not use this file except in compliance with the License.
  7
+# You may obtain a copy of the License at
  8
+#
  9
+#     http://www.apache.org/licenses/LICENSE-2.0
 10
+#
 11
+# Unless required by applicable law or agreed to in writing, software
 12
+# distributed under the License is distributed on an "AS IS" BASIS,
 13
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 14
+# See the License for the specific language governing permissions and
 15
+# limitations under the License.
 16
+
 17
+from charms.reactive import RelationBase
 18
+from charms.reactive import hook
 19
+from charms.reactive import scopes
 20
+
 21
+from charmhelpers.core import hookenv
 22
+
 23
+
 24
+class KubeControlProvider(RelationBase):
 25
+    """Implements the kubernetes-master side of the kube-control interface.
 26
+
 27
+    """
 28
+    scope = scopes.UNIT
 29
+
 30
+    @hook('{provides:kube-control}-relation-{joined,changed}')
 31
+    def joined_or_changed(self):
 32
+        conv = self.conversation()
 33
+        conv.set_state('{relation_name}.connected')
 34
+
 35
+        hookenv.log('Checking for gpu-enabled workers')
 36
+        if self._get_gpu():
 37
+            conv.set_state('{relation_name}.gpu.available')
 38
+        else:
 39
+            conv.remove_state('{relation_name}.gpu.available')
 40
+
 41
+        if self._has_auth_request():
 42
+            conv.set_state('{relation_name}.auth.requested')
 43
+
 44
+    @hook('{provides:kube-control}-relation-departed')
 45
+    def departed(self):
 46
+        """Remove all states.
 47
+
 48
+        """
 49
+        conv = self.conversation()
 50
+        conv.remove_state('{relation_name}.connected')
 51
+        conv.remove_state('{relation_name}.gpu.available')
 52
+        conv.set_state('{relation_name}.departed')
 53
+
 54
+    def flush_departed(self):
 55
+        """Remove the signal state that we have a unit departing the
 56
+        relationship. Additionally return the unit departing so the host can
 57
+        do any cleanup logic required. """
 58
+        conv = self.conversation()
 59
+        conv.remove_state('{relation_name}.departed')
 60
+        return conv.scope
 61
+
 62
+    def set_dns(self, port, domain, sdn_ip):
 63
+        """Send DNS info to the remote units.
 64
+
 65
+        We'll need the port, domain, and sdn_ip of the dns service. If
 66
+        sdn_ip is not required in your deployment, the unit's private-ip
 67
+        is available implicitly.
 68
+
 69
+        """
 70
+        credentials = {
 71
+            'port': port,
 72
+            'domain': domain,
 73
+            'sdn-ip': sdn_ip,
 74
+        }
 75
+        for conv in self.conversations():
 76
+            conv.set_remote(data=credentials)
 77
+
 78
+    def auth_user(self):
 79
+        """ return the kubelet_user value on the wire from the requestor """
 80
+        conv = self.conversation()
 81
+        return (conv.scope, {'user': conv.get_remote('kubelet_user'),
 82
+                             'group': conv.get_remote('auth_group')})
 83
+
 84
+    def sign_auth_request(self, kubelet_token, proxy_token, client_token):
 85
+        """Send authorization tokens to the requesting unit """
 86
+        conv = self.conversation()
 87
+        conv.set_remote(data={'kubelet_token': kubelet_token,
 88
+                              'proxy_token': proxy_token,
 89
+                              'client_token': client_token})
 90
+        conv.remove_state('{relation_name}.auth.requested')
 91
+
 92
+    def _get_gpu(self):
 93
+        """Return True if any remote worker is gpu-enabled.
 94
+
 95
+        """
 96
+        for conv in self.conversations():
 97
+            if conv.get_remote('gpu') == 'True':
 98
+                hookenv.log('Unit {} has gpu enabled'.format(conv.scope))
 99
+                return True
100
+        return False
101
+
102
+    def _has_auth_request(self):
103
+        """Check if there's a kubelet user on the wire requesting auth. This
104
+        action implies requested kube-proxy auth as well, as kube-proxy should
105
+        be run everywhere there is a kubelet.
106
+        """
107
+        conv = self.conversation()
108
+        if conv.get_remote('kubelet_user'):
109
+            return conv.get_remote('kubelet_user')
Back to file index
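
The README for this interface (above) shows the DNS and departed flows but not the authentication handshake that `auth_user()` and `sign_auth_request()` implement. On the master side the handler could look roughly like this; a sketch only, where the placeholder token strings and the relation name `kube-control` are assumptions and a real charm would mint and persist tokens itself:

```python
from charms.reactive import when
from charmhelpers.core import hookenv


@when('kube-control.auth.requested')
def sign_worker_auth(kube_control):
    # auth_user() returns (conversation scope, {'user': ..., 'group': ...}).
    scope, request = kube_control.auth_user()
    hookenv.log('Signing auth request from {} for user {} ({})'.format(
        scope, request['user'], request['group']))
    # Placeholder tokens; the master would normally generate and store these.
    kube_control.sign_auth_request(kubelet_token='<kubelet-token>',
                                   proxy_token='<proxy-token>',
                                   client_token='<client-token>')
```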

hooks/relations/kube-control/requires.py

  1
--- 
  2
+++ hooks/relations/kube-control/requires.py
  3
@@ -0,0 +1,108 @@
  4
+#!/usr/bin/python
  5
+# Licensed under the Apache License, Version 2.0 (the "License");
  6
+# you may not use this file except in compliance with the License.
  7
+# You may obtain a copy of the License at
  8
+#
  9
+#     http://www.apache.org/licenses/LICENSE-2.0
 10
+#
 11
+# Unless required by applicable law or agreed to in writing, software
 12
+# distributed under the License is distributed on an "AS IS" BASIS,
 13
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 14
+# See the License for the specific language governing permissions and
 15
+# limitations under the License.
 16
+
 17
+from charms.reactive import RelationBase
 18
+from charms.reactive import hook
 19
+from charms.reactive import scopes
 20
+
 21
+from charmhelpers.core import hookenv
 22
+
 23
+
 24
+class KubeControlRequireer(RelationBase):
 25
+    """Implements the kubernetes-worker side of the kube-control interface.
 26
+
 27
+    """
 28
+    scope = scopes.GLOBAL
 29
+
 30
+    @hook('{requires:kube-control}-relation-{joined,changed}')
 31
+    def joined_or_changed(self):
 32
+        """Set states corresponding to the data we have.
 33
+
 34
+        """
 35
+        conv = self.conversation()
 36
+        conv.set_state('{relation_name}.connected')
 37
+
 38
+        if self.dns_ready():
 39
+            conv.set_state('{relation_name}.dns.available')
 40
+        else:
 41
+            conv.remove_state('{relation_name}.dns.available')
 42
+
 43
+        if self._has_auth_credentials():
 44
+            conv.set_state('{relation_name}.auth.available')
 45
+        else:
 46
+            conv.remove_state('{relation_name}.auth.available')
 47
+
 48
+    @hook('{requires:kube-control}-relation-{broken,departed}')
 49
+    def departed(self):
 50
+        """Remove all states.
 51
+
 52
+        """
 53
+        conv = self.conversation()
 54
+        conv.remove_state('{relation_name}.connected')
 55
+        conv.remove_state('{relation_name}.dns.available')
 56
+
 57
+    def get_auth_credentials(self):
 58
+        """ Return the authentication credentials.
 59
+
 60
+        """
 61
+        conv = self.conversation()
 62
+
 63
+        return {
 64
+            'kubelet_token': conv.get_remote('kubelet_token'),
 65
+            'proxy_token': conv.get_remote('proxy_token'),
 66
+            'client_token': conv.get_remote('client_token')
 67
+        }
 68
+
 69
+    def get_dns(self):
 70
+        """Return DNS info provided by the master.
 71
+
 72
+        """
 73
+        conv = self.conversation()
 74
+
 75
+        return {
 76
+            'private-address': conv.get_remote('private-address'),
 77
+            'port': conv.get_remote('port'),
 78
+            'domain': conv.get_remote('domain'),
 79
+            'sdn-ip': conv.get_remote('sdn-ip'),
 80
+        }
 81
+
 82
+    def dns_ready(self):
 83
+        """Return True if we have all DNS info from the master.
 84
+
 85
+        """
 86
+        return all(self.get_dns().values())
 87
+
 88
+    def set_auth_request(self, kubelet, group='system:nodes'):
 89
+        """ Tell the master that we are requesting auth, and to use this
 90
+        hostname for the kubelet system account.
 91
+
 92
+        Param group - Determines the level of elevated privileges of the
 93
+        requested user. Can be overridden to request sudo level access on the
 94
+        cluster via changing to system:masters """
 95
+        conv = self.conversation()
 96
+        conv.set_remote(data={'kubelet_user': kubelet,
 97
+                              'auth_group': group})
 98
+
 99
+    def set_gpu(self, enabled=True):
100
+        """Tell the master that we're gpu-enabled (or not).
101
+
102
+        """
103
+        hookenv.log('Setting gpu={} on kube-control relation'.format(enabled))
104
+        conv = self.conversation()
105
+        conv.set_remote(gpu=enabled)
106
+
107
+    def _has_auth_credentials(self):
108
+        """Predicate method to signal we have authentication credentials """
109
+        conv = self.conversation()
110
+        if conv.get_remote('kubelet_token') and conv.get_remote('proxy_token'):
111
+            return True
Back to file index

hooks/relations/kube-dns/README.md

 1
--- 
 2
+++ hooks/relations/kube-dns/README.md
 3
@@ -0,0 +1,38 @@
 4
+# Kube-DNS
 5
+
 6
+This interface provides the DNS details for a Kubernetes cluster.
 7
+
 8
+The majority of kubernetes services will expect the following values:
 9
+
10
+```
11
+--cluster-dns $IP_OF_DNS_SERVER
12
+--cluster-domain $DOMAIN
13
+```
14
+
15
+
16
+# Provides
17
+
18
+The DNS details are sent in the following dict structure:
19
+
20
+```python
21
+{"private-address": "",
22
+ "port": "53",
23
+ "domain": "cluster.local",
24
+ "sdn_ip": "10.1.0.10"
25
+}
26
+
27
+```
28
+
29
+# Requires
30
+
31
+```python
32
+@when('kube-dns.available')
33
+def save_dns_credentials(kube_dns):
34
+    context = kube_dns.details()
35
+    print(context['domain'])
36
+    print(context['private-address'])
37
+    print(context['sdn-ip'])
38
+    print(context['port'])
39
+```
40
+
41
+
Back to file index

hooks/relations/kube-dns/interface.yaml

1
--- 
2
+++ hooks/relations/kube-dns/interface.yaml
3
@@ -0,0 +1,4 @@
4
+name: kube-dns
5
+summary: provides the kubernetes dns settings
6
+version: 1
7
+maintainer: "Charles Butler <charles.butler@canonical.com>"
Back to file index

hooks/relations/kube-dns/provides.py

 1
--- 
 2
+++ hooks/relations/kube-dns/provides.py
 3
@@ -0,0 +1,40 @@
 4
+#!/usr/bin/python
 5
+# Licensed under the Apache License, Version 2.0 (the "License");
 6
+# you may not use this file except in compliance with the License.
 7
+# You may obtain a copy of the License at
 8
+#
 9
+#     http://www.apache.org/licenses/LICENSE-2.0
10
+#
11
+# Unless required by applicable law or agreed to in writing, software
12
+# distributed under the License is distributed on an "AS IS" BASIS,
13
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+# See the License for the specific language governing permissions and
15
+# limitations under the License.
16
+
17
+from charms.reactive import RelationBase
18
+from charms.reactive import hook
19
+from charms.reactive import scopes
20
+
21
+
22
+class KubeDNSProvider(RelationBase):
23
+    scope = scopes.GLOBAL
24
+
25
+    @hook('{provides:kube-dns}-relation-{joined,changed}')
26
+    def joined_or_changed(self):
27
+        conv = self.conversation()
28
+        conv.set_state('{relation_name}.connected')
29
+
30
+    @hook('{provides:kube-dns}-relation-{departed}')
31
+    def departed(self):
32
+        conv = self.conversation()
33
+        conv.remove_state('{relation_name}.connected')
34
+
35
+    def set_dns_info(self, port, domain, sdn_ip):
36
+        ''' We will need the domain, sdn_ip, and port of the dns service. If
37
+            sdn_ip is not required in your deployment, the unit's private-ip
38
+            is available implicitly.'''
39
+        credentials = {'port': port,
40
+                       'domain': domain,
41
+                       'sdn-ip': sdn_ip}
42
+        conv = self.conversation()
43
+        conv.set_remote(data=credentials)
Back to file index
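
On the master side, `set_dns_info()` is the only call this interface needs once the cluster DNS add-on is up. A minimal sketch, assuming the relation is named `kube-dns` and using the conventional kube-dns port, domain, and cluster IP purely for illustration:

```python
from charms.reactive import when


@when('kube-dns.connected')
def send_dns_details(kube_dns):
    # Values shown are the conventional kube-dns defaults; a real charm
    # would read them from its config or from the deployed DNS add-on.
    kube_dns.set_dns_info(53, 'cluster.local', '10.1.0.10')
```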

hooks/relations/kube-dns/requires.py

 1
--- 
 2
+++ hooks/relations/kube-dns/requires.py
 3
@@ -0,0 +1,47 @@
 4
+#!/usr/bin/python
 5
+# Licensed under the Apache License, Version 2.0 (the "License");
 6
+# you may not use this file except in compliance with the License.
 7
+# You may obtain a copy of the License at
 8
+#
 9
+#     http://www.apache.org/licenses/LICENSE-2.0
10
+#
11
+# Unless required by applicable law or agreed to in writing, software
12
+# distributed under the License is distributed on an "AS IS" BASIS,
13
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+# See the License for the specific language governing permissions and
15
+# limitations under the License.
16
+
17
+from charms.reactive import RelationBase
18
+from charms.reactive import hook
19
+from charms.reactive import scopes
20
+
21
+
22
+class KubeDNSRequireer(RelationBase):
23
+    scope = scopes.GLOBAL
24
+
25
+    @hook('{requires:kube-dns}-relation-{joined,changed}')
26
+    def joined_or_changed(self):
27
+        ''' Set the available state if we have the minimum credentials '''
28
+        if self.has_info():
29
+            conv = self.conversation()
30
+            conv.set_state('{relation_name}.available')
31
+
32
+    def details(self):
33
+        ''' Return a small subset of the data '''
34
+        return {'private-address': self._get_value('private-address'),
35
+                'port': self._get_value('port'),
36
+                'domain': self._get_value('domain'),
37
+                'sdn-ip': self._get_value('sdn-ip')}
38
+
39
+    def has_info(self):
40
+        ''' Determine if we have a hostname and a port and domain '''
41
+        to_find = ['private-address', 'port', 'domain', 'sdn-ip']
42
+        # Iterate through our services and verify we have values
43
+        for value in to_find:
44
+            if not self._get_value(value):
45
+                return False
46
+        return True
47
+
48
+    def _get_value(self, key):
49
+        conv = self.conversation()
50
+        return conv.get_remote(key)
Back to file index

hooks/relations/kubernetes-cni/.gitignore

1
--- 
2
+++ hooks/relations/kubernetes-cni/.gitignore
3
@@ -0,0 +1 @@
4
+.DS_Store
Back to file index

hooks/relations/kubernetes-cni/interface.yaml

1
--- 
2
+++ hooks/relations/kubernetes-cni/interface.yaml
3
@@ -0,0 +1,4 @@
4
+name: kubernetes-cni
5
+summary: Interface for relating various CNI implementations
6
+version: 0
7
+maintainer: "Rye Terrell <rye.terrell@canonical.com>"
Back to file index

hooks/relations/kubernetes-cni/provides.py

 1
--- 
 2
+++ hooks/relations/kubernetes-cni/provides.py
 3
@@ -0,0 +1,43 @@
 4
+#!/usr/bin/python
 5
+
 6
+from charms.reactive import RelationBase
 7
+from charms.reactive import hook
 8
+from charms.reactive import scopes
 9
+
10
+
11
+class CNIPluginProvider(RelationBase):
12
+    scope = scopes.GLOBAL
13
+
14
+    @hook('{provides:kubernetes-cni}-relation-{joined,changed}')
15
+    def joined_or_changed(self):
16
+        ''' Set the connected state from the provides side of the relation. '''
17
+        self.set_state('{relation_name}.connected')
18
+        if self.config_available():
19
+            self.set_state('{relation_name}.available')
20
+
21
+    @hook('{provides:kubernetes-cni}-relation-{departed}')
22
+    def broken_or_departed(self):
23
+        '''Remove connected state from the provides side of the relation. '''
24
+        self.remove_state('{relation_name}.connected')
25
+        self.remove_state('{relation_name}.available')
26
+        self.remove_state('{relation_name}.configured')
27
+
28
+    def set_config(self, is_master, kubeconfig_path):
29
+        ''' Relays a dict of kubernetes configuration information. '''
30
+        self.set_remote(data={
31
+            'is_master': is_master,
32
+            'kubeconfig_path': kubeconfig_path
33
+        })
34
+        self.set_state('{relation_name}.configured')
35
+
36
+    def config_available(self):
37
+        ''' Ensures all config from the CNI plugin is available. '''
38
+        if not self.get_remote('cidr'):
39
+            return False
40
+        return True
41
+
42
+    def get_config(self):
43
+        ''' Returns all config from the CNI plugin. '''
44
+        return {
45
+            'cidr': self.get_remote('cidr'),
46
+        }
Back to file index
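
In this interface the kubernetes charm sits on the provides side: it tells the CNI subordinate whether it is a master and where the kubeconfig lives via `set_config()`, then reads the pod network CIDR back with `get_config()` once `available` is set. A sketch from a worker's point of view (the relation name `cni` and the kubeconfig path are assumptions):

```python
from charms.reactive import when
from charmhelpers.core import hookenv


@when('cni.connected')
def configure_cni(cni):
    # Tell the CNI plugin we are a worker and where kubelet's kubeconfig is.
    cni.set_config(is_master=False, kubeconfig_path='/root/cdk/kubeconfig')


@when('cni.available')
def use_cni_config(cni):
    # Only set once the plugin has published its CIDR.
    cidr = cni.get_config()['cidr']
    hookenv.log('CNI plugin reported cluster cidr {}'.format(cidr))
```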

hooks/relations/kubernetes-cni/requires.py

 1
--- 
 2
+++ hooks/relations/kubernetes-cni/requires.py
 3
@@ -0,0 +1,40 @@
 4
+#!/usr/bin/python
 5
+
 6
+from charms.reactive import RelationBase
 7
+from charms.reactive import hook
 8
+from charms.reactive import scopes
 9
+
10
+
11
+class CNIPluginClient(RelationBase):
12
+    scope = scopes.GLOBAL
13
+
14
+    @hook('{requires:kubernetes-cni}-relation-{joined,changed}')
15
+    def changed(self):
16
+        ''' Indicate the relation is connected, and if the relation data is
17
+        set it is also available. '''
18
+        self.set_state('{relation_name}.connected')
19
+        config = self.get_config()
20
+        if config['is_master'] == 'True':
21
+            self.set_state('{relation_name}.is-master')
22
+        elif config['is_master'] == 'False':
23
+            self.set_state('{relation_name}.is-worker')
24
+
25
+    @hook('{requires:kubernetes-cni}-relation-{departed}')
26
+    def broken(self):
27
+        ''' Indicate the relation is no longer available and not connected. '''
28
+        self.remove_state('{relation_name}.connected')
29
+        self.remove_state('{relation_name}.is-master')
30
+        self.remove_state('{relation_name}.is-worker')
31
+
32
+    def get_config(self):
33
+        ''' Get the kubernetes configuration information. '''
34
+        return {
35
+            'is_master': self.get_remote('is_master'),
36
+            'kubeconfig_path': self.get_remote('kubeconfig_path')
37
+        }
38
+
39
+    def set_config(self, cidr):
40
+        ''' Sets the CNI configuration information. '''
41
+        self.set_remote(data={
42
+            'cidr': cidr
43
+        })
Back to file index
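
The requires side belongs to the CNI plugin charm (flannel, calico, and so on): it learns whether its principal is a master or a worker and answers with the cluster CIDR. A sketch under the assumptions that the relation is named `cni` and the CIDR comes from the plugin's own charm config:

```python
from charms.reactive import when
from charmhelpers.core import hookenv


@when('cni.connected')
def send_cidr(cni):
    # Hand the pod network CIDR back to the kubernetes charm.
    cni.set_config(cidr=hookenv.config('cidr'))


@when('cni.is-worker')
def note_worker_principal(cni):
    kubeconfig = cni.get_config()['kubeconfig_path']
    hookenv.log('principal is a worker, kubeconfig at {}'.format(kubeconfig))
```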

hooks/relations/nrpe-external-master/README.md

 1
--- 
 2
+++ hooks/relations/nrpe-external-master/README.md
 3
@@ -0,0 +1,65 @@
 4
+# nrpe-external-master interface
 5
+
 6
+Use this interface to register nagios checks in your charm layers.
 7
+
 8
+## Purpose
 9
+
10
+This interface is designed to interoperate with the
11
+[nrpe-external-master](https://jujucharms.com/nrpe-external-master) subordinate charm.
12
+
13
+## How to use in your layers
14
+
15
+The event handler for `nrpe-external-master.available` is called with an object
16
+through which you can register your own custom nagios checks, when a relation
17
+is established with `nrpe-external-master:nrpe-external-master`.
18
+
19
+This object provides a method,
20
+
21
+_add_check_(args, name=_check_name_, description=_description_, context=_context_, unit=_unit_)
22
+
23
+which is called to register a nagios plugin check for your service.
24
+
25
+All arguments are required.
26
+
27
+*args* is a list of nagios plugin command line arguments, starting with the path to the plugin executable.
28
+
29
+*name* is the name of the check registered in nagios
30
+
31
+*description* is some text that describes what the check is for and what it does
32
+
33
+*context* is the nagios context name, something that identifies your application
34
+
35
+*unit* is `hookenv.local_unit()`
36
+
37
+The nrpe subordinate installs `check_http`, so you can use it like this:
38
+
39
+```
40
+@when('nrpe-external-master.available')
41
+def setup_nagios(nagios):
42
+    config = hookenv.config()
43
+    unit_name = hookenv.local_unit()
44
+    nagios.add_check(['/usr/lib/nagios/plugins/check_http',
45
+            '-I', '127.0.0.1', '-p', str(config['port']),
46
+            '-e', " 200 OK", '-u', '/publickey'],
47
+        name="check_http",
48
+        description="Verify my awesome service is responding",
49
+        context=config["nagios_context"],
50
+        unit=unit_name,
51
+    )
52
+```
53
+
54
+Consult the nagios documentation for more information on [how to write your own
55
+plugins](https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginapi.html)
56
+or [find one](https://www.nagios.org/projects/nagios-plugins/) that does what you need.
57
+
58
+## Example deployment
59
+
60
+```
61
+$ juju deploy your-awesome-charm
62
+$ juju deploy nrpe-external-master --config site-nagios.yaml
63
+$ juju add-relation your-awesome-charm nrpe-external-master
64
+```
65
+
66
+where `site-nagios.yaml` has the necessary configuration settings for the
67
+subordinate to connect to nagios.
68
+
Back to file index

hooks/relations/nrpe-external-master/interface.yaml

1
--- 
2
+++ hooks/relations/nrpe-external-master/interface.yaml
3
@@ -0,0 +1,3 @@
4
+name: nrpe-external-master
5
+summary: Nagios interface
6
+version: 1
Back to file index

hooks/relations/nrpe-external-master/provides.py

 1
--- 
 2
+++ hooks/relations/nrpe-external-master/provides.py
 3
@@ -0,0 +1,62 @@
 4
+import datetime
 5
+
 6
+from charms.reactive import hook
 7
+from charms.reactive import RelationBase
 8
+from charms.reactive import scopes
 9
+
10
+
11
+class NrpeExternalMasterProvides(RelationBase):
12
+    scope = scopes.GLOBAL
13
+
14
+    @hook('{provides:nrpe-external-master}-relation-{joined,changed}')
15
+    def changed_nrpe(self):
16
+        self.set_state('{relation_name}.available')
17
+
18
+    @hook('{provides:nrpe-external-master}-relation-{broken,departed}')
19
+    def broken_nrpe(self):
20
+        self.remove_state('{relation_name}.available')
21
+
22
+    def add_check(self, args, name=None, description=None, context=None,
23
+                  servicegroups=None, unit=None):
24
+        unit = unit.replace('/', '-')
25
+        check_tmpl = """
26
+#---------------------------------------------------
27
+# This file is Juju managed
28
+#---------------------------------------------------
29
+command[%(check_name)s]=%(check_args)s
30
+"""
31
+        service_tmpl = """
32
+#---------------------------------------------------
33
+# This file is Juju managed
34
+#---------------------------------------------------
35
+define service {
36
+    use                             active-service
37
+    host_name                       %(context)s-%(unit_name)s
38
+    service_description             %(description)s
39
+    check_command                   check_nrpe!%(check_name)s
40
+    servicegroups                   %(servicegroups)s
41
+}
42
+"""
43
+        check_filename = "/etc/nagios/nrpe.d/%s.cfg" % (name)
44
+        with open(check_filename, "w") as fh:
45
+            fh.write(check_tmpl % {
46
+                'check_args': ' '.join(args),
47
+                'check_name': name,
48
+            })
49
+        service_filename = "/var/lib/nagios/export/service__%s_%s.cfg" % (
50
+                           unit, name)
51
+        with open(service_filename, "w") as fh:
52
+            fh.write(service_tmpl % {
53
+                'servicegroups': servicegroups or context,
54
+                'context': context,
55
+                'description': description,
56
+                'check_name': name,
57
+                'unit_name': unit,
58
+            })
59
+
60
+    def updated(self):
61
+        relation_info = {
62
+            'timestamp': datetime.datetime.now().isoformat(),
63
+        }
64
+        self.set_remote(**relation_info)
65
+        self.remove_state('{relation_name}.available')
Back to file index

hooks/relations/sdn-plugin/.gitignore

1
--- 
2
+++ hooks/relations/sdn-plugin/.gitignore
3
@@ -0,0 +1 @@
4
+.DS_Store
Back to file index

hooks/relations/sdn-plugin/README.md

 1
--- 
 2
+++ hooks/relations/sdn-plugin/README.md
 3
@@ -0,0 +1,91 @@
 4
+# Overview
 5
+
 6
+This interface layer handles the communication with SDN providers like flannel via the `sdn-plugin` interface.
 7
+
 8
+# Usage
 9
+
10
+## Requires
11
+
12
+This interface layer will set the following states, as appropriate:
13
+
14
+  * `{relation_name}.connected` The relation is established, but the sdn
15
+    may not yet have provided any connection or service information.
16
+
17
+  * `{relation_name}.available` the SDN provider has provided its
18
+    configuration information.
19
+    The provided information can be accessed via the following methods:
20
+      * `sdn-plugin.get_configuration()`
21
+
22
+
23
+
24
+For example, a common application for this is configuring an application's
25
+SDN configuration, like Kubernetes.
26
+
27
+```python
28
+@when('sdn-plugin.available', 'docker.available')
29
+def container_sdn_setup(sdn):
30
+    sdn_config = sdn.get_configuration()
31
+
32
+    with open('/etc/default/docker', 'w') as stream:
33
+      stream.write('DOCKER_OPTS=bip={0},mtu={1}'.format(sdn_config['subnet'], sdn_config['mtu']))
34
+
35
+```
36
+
37
+
38
+## Provides
39
+
40
+A charm providing this interface is plugging into its related principal charm.
41
+
42
+This interface layer will set the following states, as appropriate:
43
+
44
+  * `{relation_name}.connected` One or more clients of any type have
45
+    been related. The charm should call the following methods to provide the
46
+    appropriate information to the clients:
47
+
48
+    * `{relation_name}.set_configuration(mtu=mtu, subnet=subnet, cidr=cidr)`
49
+
50
+Example:
51
+
52
+> Note, this example will use the Flannel subnet.env file, which has a format like the following:
53
+
54
+```shell
55
+FLANNEL_NETWORK=10.1.0.0/16
56
+FLANNEL_SUBNET=10.1.8.1/24
57
+FLANNEL_MTU=1410
58
+FLANNEL_IPMASQ=false
59
+```
60
+
61
+And the consuming python code:
62
+
63
+```python
64
+@when('flannel.sdn.configured', 'sdn-plugin.connected')
65
+def relay_sdn_configuration(host):
66
+
67
+  config = hookenv.config()
68
+
69
+  with open('/var/run/flannel/subnet.env') as f:
70
+      flannel_config = f.readlines()
71
+
72
+  for f in flannel_config:
73
+      if "FLANNEL_SUBNET" in f:
74
+          value = f.split('=')[-1].strip()
75
+          subnet = value
76
+      if "FLANNEL_MTU" in f:
77
+          value = f.split('=')[1].strip()
78
+          mtu = value
79
+
80
+  host.send_sdn_info(mtu, subnet, hookenv.config('cidr'))
81
+```
82
+
83
+
84
+# Contact Information
85
+
86
+### Maintainer
87
+- Charles Butler <charles.butler@canonical.com>
88
+
89
+
90
+# Flannel
91
+
92
+- [Flannel](https://coreos.com/flannel/docs/latest/) home page
93
+- [Flannel bug trackers](https://github.com/coreos/flannel/issues)
94
+- [Flannel Juju Charm](http://jujucharms.com/?text=flannel)
Back to file index

hooks/relations/sdn-plugin/interface.yaml

1
--- 
2
+++ hooks/relations/sdn-plugin/interface.yaml
3
@@ -0,0 +1,4 @@
4
+name: sdn-plugin
5
+summary: Interface for relating various SDN implementations
6
+version: 2
7
+maintainer: "Charles Butler <charles.butler@canonical.com>"
Back to file index

hooks/relations/sdn-plugin/provides.py

 1
--- 
 2
+++ hooks/relations/sdn-plugin/provides.py
 3
@@ -0,0 +1,51 @@
 4
+#!/usr/bin/python
 5
+# Licensed under the Apache License, Version 2.0 (the "License");
 6
+# you may not use this file except in compliance with the License.
 7
+# You may obtain a copy of the License at
 8
+#
 9
+#     http://www.apache.org/licenses/LICENSE-2.0
10
+#
11
+# Unless required by applicable law or agreed to in writing, software
12
+# distributed under the License is distributed on an "AS IS" BASIS,
13
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+# See the License for the specific language governing permissions and
15
+# limitations under the License.
16
+
17
+from charms.reactive import RelationBase
18
+from charms.reactive import hook
19
+from charms.reactive import scopes
20
+
21
+
22
+class SDNPluginProvider(RelationBase):
23
+    scope = scopes.GLOBAL
24
+
25
+    @hook('{provides:sdn-plugin}-relation-{joined,changed}')
26
+    def joined_or_changed(self):
27
+        ''' Set the connected state from the provides side of the relation. '''
28
+        conv = self.conversation()
29
+        conv.set_state('{relation_name}.connected')
30
+
31
+        config = self.get_sdn_config()
32
+        # Ensure we have the expected data points from the sdn provider
33
+        # to ensure we have everything expected by the assumptions being
34
+        # made of the .available state
35
+        if config['mtu'] and config['subnet'] and config['cidr']:
36
+            conv.set_state('{relation_name}.available')
37
+        else:
38
+            conv.remove_state('{relation_name}.available')
39
+
40
+    @hook('{provides:sdn-plugin}-relation-{departed}')
41
+    def broken_or_departed(self):
42
+        '''Remove connected state from the provides side of the relation. '''
43
+        conv = self.conversation()
44
+        conv.remove_state('{relation_name}.connected')
45
+        conv.remove_state('{relation_name}.available')
46
+
47
+    def get_sdn_config(self):
48
+        ''' Return a dict of the SDN configuration. '''
49
+        config = {}
50
+        conv = self.conversation()
51
+        config['mtu'] = conv.get_remote('mtu')
52
+        config['subnet'] = conv.get_remote('subnet')
53
+        config['cidr'] = conv.get_remote('cidr')
54
+        return config
Back to file index
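
On the kubernetes-worker (provides) side the handler pairs with the docker example in the README above: wait for `available`, pull the MTU, subnet, and CIDR with `get_sdn_config()`, and re-render whatever consumes them. A brief sketch; the relation name `sdn-plugin` and the log-only body are illustrative:

```python
from charms.reactive import when
from charmhelpers.core import hookenv


@when('sdn-plugin.available')
def apply_sdn_settings(sdn):
    # All three keys are present whenever the .available state is set.
    config = sdn.get_sdn_config()
    hookenv.log('SDN ready: mtu={mtu} subnet={subnet} cidr={cidr}'.format(**config))
```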

hooks/relations/sdn-plugin/requires.py

 1
--- 
 2
+++ hooks/relations/sdn-plugin/requires.py
 3
@@ -0,0 +1,39 @@
 4
+#!/usr/bin/python
 5
+# Licensed under the Apache License, Version 2.0 (the "License");
 6
+# you may not use this file except in compliance with the License.
 7
+# You may obtain a copy of the License at
 8
+#
 9
+#     http://www.apache.org/licenses/LICENSE-2.0
10
+#
11
+# Unless required by applicable law or agreed to in writing, software
12
+# distributed under the License is distributed on an "AS IS" BASIS,
13
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+# See the License for the specific language governing permissions and
15
+# limitations under the License.
16
+
17
+
18
+from charms.reactive import RelationBase
19
+from charms.reactive import hook
20
+from charms.reactive import scopes
21
+
22
+
23
+class SDNPluginClient(RelationBase):
24
+    scope = scopes.GLOBAL
25
+
26
+    @hook('{requires:sdn-plugin}-relation-{joined,changed}')
27
+    def changed(self):
28
+        ''' Indicate the relation is connected, and if the relation data is
29
+        set it is also available. '''
30
+        conv = self.conversation()
31
+        conv.set_state('{relation_name}.connected')
32
+
33
+    @hook('{requires:sdn-plugin}-relation-{departed}')
34
+    def broken(self):
35
+        ''' Indicate the relation is no longer available and not connected. '''
36
+        self.remove_state('{relation_name}.available')
37
+        self.remove_state('{relation_name}.connected')
38
+
39
+    def set_configuration(self, mtu, subnet, cidr):
40
+        ''' Set the configuration keys on the wire '''
41
+        conv = self.conversation()
42
+        conv.set_remote(data={'mtu': mtu, 'subnet': subnet, 'cidr': cidr})
Back to file index

hooks/relations/tls-certificates/README.md

 1
--- 
 2
+++ hooks/relations/tls-certificates/README.md
 3
@@ -0,0 +1,80 @@
 4
+# tls-certificates
 5
+
 6
+This is a [Juju](https://jujucharms.com) interface layer that handles the
 7
+transport layer security (TLS) for charms using relations between charms,
 9
+meaning the charms that use this layer can communicate securely
10
+with each other based on TLS certificates.
10
+
11
+To get started please read the
12
+[Introduction to PKI](https://github.com/OpenVPN/easy-rsa/blob/master/doc/Intro-To-PKI.md)
13
+which defines some PKI terms, concepts and processes used in this document.
14
+
15
+> **NOTE**: It is important to point out that this interface does not do the 
16
+actual work of issuing certificates. The interface layer only handles the 
17
+communication between the peers and the charm layer must react to the states 
18
+correctly for this interface to work.  
19
+
20
+The [layer-tls](https://github.com/mbruzek/layer-tls) charm layer was created
21
+to implement this using the [easy-rsa](https://github.com/OpenVPN/easy-rsa)
22
+project.  This interface could be implemented with other PKI technology tools
23
+(such as openssl commands) in other charm layers.
24
+
25
+# States
26
+
27
+The interface layer emits several reactive states that a charm layer can respond
28
+to:
29
+
30
+## {relation_name}.available
31
+This is the start state that is generated when the relation is joined.
32
+A charm layer responding to this state should get the common name, a list of 
33
+Subject Alt Names, and the certificate_name, then call 
34
+`request_server_cert(common_name, sans, certificate_name)` on the relation 
35
+object.
36
+
37
+## {relation_name}.ca.available
38
+The Certificate Authority is available on the relation object when the 
39
+"{relation_name}.ca.available" state is set. The charm layer can retrieve the
40
+CA by calling `get_ca()` method on the relationship object.
41
+
42
+```python
43
+from charms.reactive import when
44
+@when('certificates.ca.available')
45
+def store_ca(tls):
46
+    certificate_authority = tls.get_ca()
47
+```
48
+
49
+## {relation_name}.server.cert.available
50
+Once the server certificate is set on the relation the interface layer will
51
+emit the "{relation_name}.server.cert.available" state, indicating that the 
52
+server certificate is available from the relationship object.  The charm layer 
53
+can retrieve the certificate and use it in the code by calling the
54
+`get_server_cert()` method on the relationship object.
55
+
56
+```python
57
+from charms.reactive import when
58
+@when('certificates.server.cert.available')
59
+def get_server(tls):
60
+    server_cert, server_key = tls.get_server_cert()
61
+```
62
+
63
+## {relation_name}.client.cert.available
64
+Once the client certificate is set on the relation the interface layer will
65
+emit the "{relation_name}.client.cert.available" state, indicated that the
66
+server certificates is available from the relationship object.  The charm layer
67
+can retrieve the certificate and use it in the code by calling the
68
+`get_client_cert()` method on the relationship object.
69
+
70
+```python
71
+from charms.reactive import when
72
+@when('certificates.client.cert.available')
73
+def store_client(tls):
74
+    client_cert, client_key = tls.get_client_cert()
75
+```
76
+
77
+# Contact Information
78
+
79
+Interface author: Matt Bruzek &lt;Matthew.Bruzek@canonical.com&gt; 
80
+
81
+Contributor: Charles Butler &lt;Charles.Butler@canonical.com&gt; 
82
+
83
+Contributor: Cory Johns &lt;Cory.Johns@canonical.com&gt; 
Back to file index

hooks/relations/tls-certificates/interface.yaml

1
--- 
2
+++ hooks/relations/tls-certificates/interface.yaml
3
@@ -0,0 +1,6 @@
4
+name: tls-certificates
5
+summary: |
6
+  A Transport Layer Security (TLS) charm layer that uses requires and provides
7
+  to exchange certificates.
8
+version: 1
9
+repo: https://github.com/juju-solutions/interface-tls-certificates
Back to file index

hooks/relations/tls-certificates/provides.py

 1
--- 
 2
+++ hooks/relations/tls-certificates/provides.py
 3
@@ -0,0 +1,82 @@
 4
+import json
 5
+
 6
+from charms.reactive import hook
 7
+from charms.reactive import scopes
 8
+from charms.reactive import RelationBase
 9
+
10
+
11
+class TlsProvides(RelationBase):
12
+    '''The class that provides a TLS interface to other units.'''
13
+    scope = scopes.UNIT
14
+
15
+    @hook('{provides:tls-certificates}-relation-joined')
16
+    def joined(self):
17
+        '''When a unit joins, set the available state.'''
18
+        # Get the conversation scoped to the unit name.
19
+        conversation = self.conversation()
20
+        conversation.set_state('{relation_name}.available')
21
+
22
+    @hook('{provides:tls-certificates}-relation-changed')
23
+    def changed(self):
24
+        '''When a unit relation changes, check for a server certificate request
25
+        and set the server.cert.requested state.'''
26
+        conversation = self.conversation()
27
+        cn = conversation.get_remote('common_name')
28
+        sans = conversation.get_remote('sans')
29
+        name = conversation.get_remote('certificate_name')
30
+        # When the relation has all three values set the server.cert.requested.
31
+        if cn and sans and name:
32
+            conversation.set_state('{relation_name}.server.cert.requested')
33
+
34
+    @hook('{provides:tls-certificates}-relation-{broken,departed}')
35
+    def broken_or_departed(self):
36
+        '''Remove the available state from the unit as we are leaving.'''
37
+        conversation = self.conversation()
38
+        conversation.remove_state('{relation_name}.available')
39
+
40
+    def set_ca(self, certificate_authority):
41
+        '''Set the CA on all the conversations in the relation data.'''
42
+        # Iterate over all conversations of this type.
43
+        for conversation in self.conversations():
44
+            # All the clients get the same CA, so send it to them.
45
+            conversation.set_remote(data={'ca': certificate_authority})
46
+
47
+    def set_client_cert(self, cert, key):
48
+        '''Set the client cert and key on the relation data.'''
49
+        # Iterate over all conversations of this type.
50
+        for conversation in self.conversations():
51
+            client = {}
52
+            client['client.cert'] = cert
53
+            client['client.key'] = key
54
+            # Send the client cert and key to the unit using the conversation.
55
+            conversation.set_remote(data=client)
56
+
57
+    def set_server_cert(self, scope, cert, key):
58
+        '''Set the server cert and key on the relation data.'''
59
+        # Get the conversation scoped to the unit.
60
+        conversation = self.conversation(scope)
61
+        server = {}
62
+        # The scope is the unit name; replace the slash with an underscore.
63
+        name = scope.replace('/', '_')
64
+        # Prefix the key with name so each unit can get a unique cert and key.
65
+        server['{0}.server.cert'.format(name)] = cert
66
+        server['{0}.server.key'.format(name)] = key
67
+        # Send the server cert and key to the unit using the conversation.
68
+        conversation.set_remote(data=server)
69
+        # Remove the server.cert.requested state as it is no longer needed.
70
+        conversation.remove_state('{relation_name}.server.cert.requested')
71
+
72
+    def get_server_requests(self):
73
+        '''One provider can have many requests to generate server certificates.
74
+        Return a map of all server request objects indexed by the scope
75
+        which is essentially unit name.'''
76
+        request_map = {}
77
+        for conversation in self.conversations():
78
+            scope = conversation.scope
79
+            request = {}
80
+            request['common_name'] = conversation.get_remote('common_name')
81
+            request['sans'] = json.loads(conversation.get_remote('sans'))
82
+            request['certificate_name'] = conversation.get_remote('certificate_name')  # noqa
83
+            # Create a map indexed by scope.
84
+            request_map[scope] = request
85
+        return request_map
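A minimal sketch of how a providing charm layer might drive these methods, assuming the relation is named `certificates`; `issue_server_cert()` is a hypothetical helper standing in for whatever CA tooling the provider actually uses:

```python
from charms.reactive import when


@when('certificates.server.cert.requested')
def create_requested_certs(tls):
    # One provider may see requests from many units; each request is keyed
    # by the requesting unit's name (the conversation scope).
    for unit_name, request in tls.get_server_requests().items():
        # issue_server_cert() is hypothetical, not part of this interface.
        cert, key = issue_server_cert(request['common_name'],
                                      request['sans'],
                                      request['certificate_name'])
        # Publish the pair back on the conversation scoped to that unit.
        tls.set_server_cert(unit_name, cert, key)
```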
Back to file index

hooks/relations/tls-certificates/requires.py

 1
--- 
 2
+++ hooks/relations/tls-certificates/requires.py
 3
@@ -0,0 +1,76 @@
 4
+import json
 5
+
 6
+from charmhelpers.core import hookenv
 7
+
 8
+from charms.reactive import hook
 9
+from charms.reactive import scopes
10
+from charms.reactive import RelationBase
11
+
12
+
13
+class TlsRequires(RelationBase):
14
+    '''The class that requires a TLS relationship to another unit.'''
15
+    # Use the global scope for the requires relation.
16
+    scope = scopes.GLOBAL
17
+
18
+    @hook('{requires:tls-certificates}-relation-joined')
19
+    def joined(self):
20
+        '''When joining with a TLS provider, set the available state.'''
21
+        # Get the global scoped conversation.
22
+        conversation = self.conversation()
23
+        conversation.set_state('{relation_name}.available')
24
+
25
+    @hook('{requires:tls-certificates}-relation-changed')
26
+    def changed(self):
27
+        '''Set states as the CA, client cert, or this unit's server cert arrive.'''
28
+        # Get the global scoped conversation.
29
+        conversation = self.conversation()
30
+        # When the conversation has a CA set notify that the ca is available.
31
+        if conversation.get_remote('ca'):
32
+            conversation.set_state('{relation_name}.ca.available')
33
+        # When the client.cert has a value notify that the client is available.
34
+        if conversation.get_remote('client.cert'):
35
+            conversation.set_state('{relation_name}.client.cert.available')
36
+        # Get the name of the unit this code is running on.
37
+        name = hookenv.local_unit().replace('/', '_')
38
+        # Prefix the key with the name so each unit is notified when its cert is available.
39
+        if conversation.get_remote('{0}.server.cert'.format(name)):
40
+            conversation.set_state('{relation_name}.server.cert.available')
41
+
42
+    @hook('{requires:tls-certificates}-relation-{broken,departed}')
43
+    def broken_or_departed(self):
44
+        '''Remove the states that were set.'''
45
+        conversation = self.conversation()
46
+        conversation.remove_state('{relation_name}.available')
47
+
48
+    def get_ca(self):
49
+        '''Return the certificate authority from the relation object.'''
50
+        # Get the global scoped conversation.
51
+        conversation = self.conversation()
52
+        # Find the certificate authority by key, and return the value.
53
+        return conversation.get_remote('ca')
54
+
55
+    def get_client_cert(self):
56
+        '''Return the client certificate and key from the relation object.'''
57
+        conversation = self.conversation()
58
+        client_cert = conversation.get_remote('client.cert')
59
+        client_key = conversation.get_remote('client.key')
60
+        return client_cert, client_key
61
+
62
+    def get_server_cert(self):
63
+        '''Return the server certificate and key from the relation objects.'''
64
+        conversation = self.conversation()
65
+        # Get the name of the unit this code is running on.
66
+        name = hookenv.local_unit().replace('/', '_')
67
+        # Prefix the keys with name so each unit can get unique certs and keys.
68
+        server_cert = conversation.get_remote('{0}.server.cert'.format(name))
69
+        server_key = conversation.get_remote('{0}.server.key'.format(name))
70
+        return server_cert, server_key
71
+
72
+    def request_server_cert(self, cn, sans, cert_name):
73
+        '''Set the CN, list of sans, and certificate name on the relation to
74
+        request a server certificate.'''
75
+        conversation = self.conversation()
76
+        # A server certificate requires a CN, sans, and a certificate name.
77
+        conversation.set_remote('common_name', cn)
78
+        conversation.set_remote('sans', json.dumps(sans))
79
+        conversation.set_remote('certificate_name', cert_name)
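And a short sketch of the requesting side, again assuming a relation named `certificates`; the common name and SANs below are illustrative only:

```python
import socket

from charmhelpers.core import hookenv
from charms.reactive import when


@when('certificates.available')
def request_cert(tls):
    # A real charm would use the addresses its services actually listen on.
    common_name = hookenv.unit_private_ip()
    sans = [common_name, socket.gethostname()]
    certificate_name = hookenv.local_unit().replace('/', '_')
    tls.request_server_cert(common_name, sans, certificate_name)
```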
Back to file index

hooks/sdn-plugin-relation-broken

 1
--- 
 2
+++ hooks/sdn-plugin-relation-broken
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/sdn-plugin-relation-changed

 1
--- 
 2
+++ hooks/sdn-plugin-relation-changed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/sdn-plugin-relation-departed

 1
--- 
 2
+++ hooks/sdn-plugin-relation-departed
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/sdn-plugin-relation-joined

 1
--- 
 2
+++ hooks/sdn-plugin-relation-joined
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/start

 1
--- 
 2
+++ hooks/start
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/stop

 1
--- 
 2
+++ hooks/stop
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/update-status

 1
--- 
 2
+++ hooks/update-status
 3
@@ -0,0 +1,19 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import sys
 8
+sys.path.append('lib')
 9
+
10
+from charms.layer import basic
11
+basic.bootstrap_charm_deps()
12
+basic.init_config_states()
13
+
14
+
15
+# This will load and run the appropriate @hook and other decorated
16
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
17
+# and $JUJU_CHARM_DIR/hooks/relations.
18
+#
19
+# See https://jujucharms.com/docs/stable/authors-charm-building
20
+# for more information on this pattern.
21
+from charms.reactive import main
22
+main()
Back to file index

hooks/upgrade-charm

 1
--- 
 2
+++ hooks/upgrade-charm
 3
@@ -0,0 +1,28 @@
 4
+#!/usr/bin/env python3
 5
+
 6
+# Load modules from $JUJU_CHARM_DIR/lib
 7
+import os
 8
+import sys
 9
+sys.path.append('lib')
10
+
11
+# This is an upgrade-charm context, make sure we install latest deps
12
+if not os.path.exists('wheelhouse/.upgrade'):
13
+    open('wheelhouse/.upgrade', 'w').close()
14
+    if os.path.exists('wheelhouse/.bootstrapped'):
15
+        os.unlink('wheelhouse/.bootstrapped')
16
+else:
17
+    os.unlink('wheelhouse/.upgrade')
18
+
19
+from charms.layer import basic
20
+basic.bootstrap_charm_deps()
21
+basic.init_config_states()
22
+
23
+
24
+# This will load and run the appropriate @hook and other decorated
25
+# handlers from $JUJU_CHARM_DIR/reactive, $JUJU_CHARM_DIR/hooks/reactive,
26
+# and $JUJU_CHARM_DIR/hooks/relations.
27
+#
28
+# See https://jujucharms.com/docs/stable/authors-charm-building
29
+# for more information on this pattern.
30
+from charms.reactive import main
31
+main()
Back to file index

icon.svg

  1
--- 
  2
+++ icon.svg
  3
@@ -0,0 +1,362 @@
  4
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
  5
+<!-- Created with Inkscape (http://www.inkscape.org/) -->
  6
+
  7
+<svg
  8
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
  9
+   xmlns:cc="http://creativecommons.org/ns#"
 10
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
 11
+   xmlns:svg="http://www.w3.org/2000/svg"
 12
+   xmlns="http://www.w3.org/2000/svg"
 13
+   xmlns:xlink="http://www.w3.org/1999/xlink"
 14
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
 15
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
 16
+   width="96"
 17
+   height="96"
 18
+   id="svg6517"
 19
+   version="1.1"
 20
+   inkscape:version="0.91 r13725"
 21
+   sodipodi:docname="kubernetes-worker_circle.svg"
 22
+   viewBox="0 0 96 96">
 23
+  <defs
 24
+     id="defs6519">
 25
+    <linearGradient
 26
+       id="Background">
 27
+      <stop
 28
+         id="stop4178"
 29
+         offset="0"
 30
+         style="stop-color:#22779e;stop-opacity:1" />
 31
+      <stop
 32
+         id="stop4180"
 33
+         offset="1"
 34
+         style="stop-color:#2991c0;stop-opacity:1" />
 35
+    </linearGradient>
 36
+    <filter
 37
+       style="color-interpolation-filters:sRGB"
 38
+       inkscape:label="Inner Shadow"
 39
+       id="filter1121">
 40
+      <feFlood
 41
+         flood-opacity="0.59999999999999998"
 42
+         flood-color="rgb(0,0,0)"
 43
+         result="flood"
 44
+         id="feFlood1123" />
 45
+      <feComposite
 46
+         in="flood"
 47
+         in2="SourceGraphic"
 48
+         operator="out"
 49
+         result="composite1"
 50
+         id="feComposite1125" />
 51
+      <feGaussianBlur
 52
+         in="composite1"
 53
+         stdDeviation="1"
 54
+         result="blur"
 55
+         id="feGaussianBlur1127" />
 56
+      <feOffset
 57
+         dx="0"
 58
+         dy="2"
 59
+         result="offset"
 60
+         id="feOffset1129" />
 61
+      <feComposite
 62
+         in="offset"
 63
+         in2="SourceGraphic"
 64
+         operator="atop"
 65
+         result="composite2"
 66
+         id="feComposite1131" />
 67
+    </filter>
 68
+    <filter
 69
+       style="color-interpolation-filters:sRGB"
 70
+       inkscape:label="Drop Shadow"
 71
+       id="filter950">
 72
+      <feFlood
 73
+         flood-opacity="0.25"
 74
+         flood-color="rgb(0,0,0)"
 75
+         result="flood"
 76
+         id="feFlood952" />
 77
+      <feComposite
 78
+         in="flood"
 79
+         in2="SourceGraphic"
 80
+         operator="in"
 81
+         result="composite1"
 82
+         id="feComposite954" />
 83
+      <feGaussianBlur
 84
+         in="composite1"
 85
+         stdDeviation="1"
 86
+         result="blur"
 87
+         id="feGaussianBlur956" />
 88
+      <feOffset
 89
+         dx="0"
 90
+         dy="1"
 91
+         result="offset"
 92
+         id="feOffset958" />
 93
+      <feComposite
 94
+         in="SourceGraphic"
 95
+         in2="offset"
 96
+         operator="over"
 97
+         result="composite2"
 98
+         id="feComposite960" />
 99
+    </filter>
100
+    <clipPath
101
+       clipPathUnits="userSpaceOnUse"
102
+       id="clipPath873">
103
+      <g
104
+         transform="matrix(0,-0.66666667,0.66604479,0,-258.25992,677.00001)"
105
+         id="g875"
106
+         inkscape:label="Layer 1"
107
+         style="display:inline;fill:#ff00ff;fill-opacity:1;stroke:none">
108
+        <path
109
+           style="display:inline;fill:#ff00ff;fill-opacity:1;stroke:none"
110
+           d="M 46.702703,898.22775 H 97.297297 C 138.16216,898.22775 144,904.06497 144,944.92583 v 50.73846 c 0,40.86071 -5.83784,46.69791 -46.702703,46.69791 H 46.702703 C 5.8378378,1042.3622 0,1036.525 0,995.66429 v -50.73846 c 0,-40.86086 5.8378378,-46.69808 46.702703,-46.69808 z"
111
+           id="path877"
112
+           inkscape:connector-curvature="0"
113
+           sodipodi:nodetypes="sssssssss" />
114
+      </g>
115
+    </clipPath>
116
+    <style
117
+       id="style867"
118
+       type="text/css"><![CDATA[
119
+    .fil0 {fill:#1F1A17}
120
+   ]]></style>
121
+    <clipPath
122
+       id="clipPath16">
123
+      <path
124
+         id="path18"
125
+         d="M -9,-9 H 605 V 222 H -9 Z"
126
+         inkscape:connector-curvature="0" />
127
+    </clipPath>
128
+    <clipPath
129
+       id="clipPath116">
130
+      <path
131
+         id="path118"
132
+         d="m 91.7368,146.3253 -9.7039,-1.577 -8.8548,-3.8814 -7.5206,-4.7308 -7.1566,-8.7335 -4.0431,-4.282 -3.9093,-1.4409 -1.034,2.5271 1.8079,2.6096 0.4062,3.6802 1.211,-0.0488 1.3232,-1.2069 -0.3569,3.7488 -1.4667,0.9839 0.0445,1.4286 -3.4744,-1.9655 -3.1462,-3.712 -0.6559,-3.3176 1.3453,-2.6567 1.2549,-4.5133 2.5521,-1.2084 2.6847,0.1318 2.5455,1.4791 -1.698,-8.6122 1.698,-9.5825 -1.8692,-4.4246 -6.1223,-6.5965 1.0885,-3.941 2.9002,-4.5669 5.4688,-3.8486 2.9007,-0.3969 3.225,-0.1094 -2.012,-8.2601 7.3993,-3.0326 9.2188,-1.2129 3.1535,2.0619 0.2427,5.5797 3.5178,5.8224 0.2426,4.6094 8.4909,-0.6066 7.8843,0.7279 -7.8843,-4.7307 1.3343,-5.701 4.9731,-7.763 4.8521,-2.0622 3.8814,1.5769 1.577,3.1538 8.1269,6.1861 1.5769,-1.3343 12.7363,-0.485 2.5473,2.0619 0.2426,3.6391 -0.849,1.5767 -0.6066,9.8251 -4.2454,8.4909 0.7276,3.7605 2.5475,-1.3343 7.1566,-6.6716 3.5175,-0.2424 3.8815,1.5769 3.8818,2.9109 1.9406,6.3077 11.4021,-0.7277 6.914,2.6686 5.5797,5.2157 4.0028,7.5206 0.9706,8.8546 -0.8493,10.3105 -2.1832,9.2185 -2.1836,2.9112 -3.0322,0.9706 -5.3373,-5.8224 -4.8518,-1.6982 -4.2455,7.0353 -4.2454,3.8815 -2.3049,1.4556 -9.2185,7.6419 -7.3993,4.0028 -7.3993,0.6066 -8.6119,-1.4556 -7.5206,-2.7899 -5.2158,-4.2454 -4.1241,-4.9734 -4.2454,-1.2129"
133
+         inkscape:connector-curvature="0" />
134
+    </clipPath>
135
+    <clipPath
136
+       id="clipPath128">
137
+      <path
138
+         id="path130"
139
+         d="m 91.7368,146.3253 -9.7039,-1.577 -8.8548,-3.8814 -7.5206,-4.7308 -7.1566,-8.7335 -4.0431,-4.282 -3.9093,-1.4409 -1.034,2.5271 1.8079,2.6096 0.4062,3.6802 1.211,-0.0488 1.3232,-1.2069 -0.3569,3.7488 -1.4667,0.9839 0.0445,1.4286 -3.4744,-1.9655 -3.1462,-3.712 -0.6559,-3.3176 1.3453,-2.6567 1.2549,-4.5133 2.5521,-1.2084 2.6847,0.1318 2.5455,1.4791 -1.698,-8.6122 1.698,-9.5825 -1.8692,-4.4246 -6.1223,-6.5965 1.0885,-3.941 2.9002,-4.5669 5.4688,-3.8486 2.9007,-0.3969 3.225,-0.1094 -2.012,-8.2601 7.3993,-3.0326 9.2188,-1.2129 3.1535,2.0619 0.2427,5.5797 3.5178,5.8224 0.2426,4.6094 8.4909,-0.6066 7.8843,0.7279 -7.8843,-4.7307 1.3343,-5.701 4.9731,-7.763 4.8521,-2.0622 3.8814,1.5769 1.577,3.1538 8.1269,6.1861 1.5769,-1.3343 12.7363,-0.485 2.5473,2.0619 0.2426,3.6391 -0.849,1.5767 -0.6066,9.8251 -4.2454,8.4909 0.7276,3.7605 2.5475,-1.3343 7.1566,-6.6716 3.5175,-0.2424 3.8815,1.5769 3.8818,2.9109 1.9406,6.3077 11.4021,-0.7277 6.914,2.6686 5.5797,5.2157 4.0028,7.5206 0.9706,8.8546 -0.8493,10.3105 -2.1832,9.2185 -2.1836,2.9112 -3.0322,0.9706 -5.3373,-5.8224 -4.8518,-1.6982 -4.2455,7.0353 -4.2454,3.8815 -2.3049,1.4556 -9.2185,7.6419 -7.3993,4.0028 -7.3993,0.6066 -8.6119,-1.4556 -7.5206,-2.7899 -5.2158,-4.2454 -4.1241,-4.9734 -4.2454,-1.2129"
140
+         inkscape:connector-curvature="0" />
141
+    </clipPath>
142
+    <linearGradient
143
+       id="linearGradient3850"
144
+       inkscape:collect="always">
145
+      <stop
146
+         id="stop3852"
147
+         offset="0"
148
+         style="stop-color:#000000;stop-opacity:1;" />
149
+      <stop
150
+         id="stop3854"
151
+         offset="1"
152
+         style="stop-color:#000000;stop-opacity:0;" />
153
+    </linearGradient>
154
+    <clipPath
155
+       clipPathUnits="userSpaceOnUse"
156
+       id="clipPath3095">
157
+      <path
158
+         d="M 976.648,389.551 H 134.246 V 1229.55 H 976.648 V 389.551"
159
+         id="path3097"
160
+         inkscape:connector-curvature="0" />
161
+    </clipPath>
162
+    <clipPath
163
+       clipPathUnits="userSpaceOnUse"
164
+       id="clipPath3195">
165
+      <path
166
+         d="m 611.836,756.738 -106.34,105.207 c -8.473,8.289 -13.617,20.102 -13.598,33.379 L 598.301,790.207 c -0.031,-13.418 5.094,-25.031 13.535,-33.469"
167
+         id="path3197"
168
+         inkscape:connector-curvature="0" />
169
+    </clipPath>
170
+    <clipPath
171
+       clipPathUnits="userSpaceOnUse"
172
+       id="clipPath3235">
173
+      <path
174
+         d="m 1095.64,1501.81 c 35.46,-35.07 70.89,-70.11 106.35,-105.17 4.4,-4.38 7.11,-10.53 7.11,-17.55 l -106.37,105.21 c 0,7 -2.71,13.11 -7.09,17.51"
175
+         id="path3237"
176
+         inkscape:connector-curvature="0" />
177
+    </clipPath>
178
+    <clipPath
179
+       id="clipPath4591"
180
+       clipPathUnits="userSpaceOnUse">
181
+      <path
182
+         inkscape:connector-curvature="0"
183
+         d="m 1106.6009,730.43734 -0.036,21.648 c -0.01,3.50825 -2.8675,6.61375 -6.4037,6.92525 l -83.6503,7.33162 c -3.5205,0.30763 -6.3812,-2.29987 -6.3671,-5.8145 l 0.036,-21.6475 20.1171,-1.76662 -0.011,4.63775 c 0,1.83937 1.4844,3.19925 3.3262,3.0395 l 49.5274,-4.33975 c 1.8425,-0.166 3.3425,-1.78125 3.3538,-3.626 l 0.01,-4.63025 20.1,-1.7575"
184
+         style="fill:#ff00ff;fill-opacity:1;fill-rule:nonzero;stroke:none"
185
+         id="path4593" />
186
+    </clipPath>
187
+    <radialGradient
188
+       gradientUnits="userSpaceOnUse"
189
+       gradientTransform="matrix(-1.4333926,-2.2742838,1.1731823,-0.73941125,-174.08025,98.374394)"
190
+       r="20.40658"
191
+       fy="93.399292"
192
+       fx="-26.508606"
193
+       cy="93.399292"
194
+       cx="-26.508606"
195
+       id="radialGradient3856"
196
+       xlink:href="#linearGradient3850"
197
+       inkscape:collect="always" />
198
+    <linearGradient
199
+       gradientTransform="translate(-318.48033,212.32022)"
200
+       gradientUnits="userSpaceOnUse"
201
+       y2="993.19702"
202
+       x2="-51.879555"
203
+       y1="593.11615"
204
+       x1="348.20132"
205
+       id="linearGradient3895"
206
+       xlink:href="#linearGradient3850"
207
+       inkscape:collect="always" />
208
+    <clipPath
209
+       id="clipPath3906"
210
+       clipPathUnits="userSpaceOnUse">
211
+      <rect
212
+         transform="scale(1,-1)"
213
+         style="color:#000000;display:inline;overflow:visible;visibility:visible;opacity:0.8;fill:#ff00ff;stroke:none;stroke-width:4;marker:none;enable-background:accumulate"
214
+         id="rect3908"
215
+         width="1019.1371"
216
+         height="1019.1371"
217
+         x="357.9816"
218
+         y="-1725.8152" />
219
+    </clipPath>
220
+  </defs>
221
+  <sodipodi:namedview
222
+     id="base"
223
+     pagecolor="#ffffff"
224
+     bordercolor="#666666"
225
+     borderopacity="1.0"
226
+     inkscape:pageopacity="0.0"
227
+     inkscape:pageshadow="2"
228
+     inkscape:zoom="3.2596288"
229
+     inkscape:cx="-385.69157"
230
+     inkscape:cy="34.733722"
231
+     inkscape:document-units="px"
232
+     inkscape:current-layer="layer1"
233
+     showgrid="false"
234
+     fit-margin-top="0"
235
+     fit-margin-left="0"
236
+     fit-margin-right="0"
237
+     fit-margin-bottom="0"
238
+     inkscape:window-width="1920"
239
+     inkscape:window-height="1029"
240
+     inkscape:window-x="0"
241
+     inkscape:window-y="24"
242
+     inkscape:window-maximized="1"
243
+     showborder="true"
244
+     showguides="false"
245
+     inkscape:guide-bbox="true"
246
+     inkscape:showpageshadow="false"
247
+     inkscape:snap-global="false"
248
+     inkscape:snap-bbox="true"
249
+     inkscape:bbox-paths="true"
250
+     inkscape:bbox-nodes="true"
251
+     inkscape:snap-bbox-edge-midpoints="true"
252
+     inkscape:snap-bbox-midpoints="true"
253
+     inkscape:object-paths="true"
254
+     inkscape:snap-intersection-paths="true"
255
+     inkscape:object-nodes="true"
256
+     inkscape:snap-smooth-nodes="true"
257
+     inkscape:snap-midpoints="true"
258
+     inkscape:snap-object-midpoints="true"
259
+     inkscape:snap-center="true"
260
+     inkscape:snap-nodes="true"
261
+     inkscape:snap-others="true"
262
+     inkscape:snap-page="true">
263
+    <inkscape:grid
264
+       type="xygrid"
265
+       id="grid821" />
266
+    <sodipodi:guide
267
+       orientation="1,0"
268
+       position="16,48"
269
+       id="guide823"
270
+       inkscape:locked="false" />
271
+    <sodipodi:guide
272
+       orientation="0,1"
273
+       position="64,80"
274
+       id="guide825"
275
+       inkscape:locked="false" />
276
+    <sodipodi:guide
277
+       orientation="1,0"
278
+       position="80,40"
279
+       id="guide827"
280
+       inkscape:locked="false" />
281
+    <sodipodi:guide
282
+       orientation="0,1"
283
+       position="64,16"
284
+       id="guide829"
285
+       inkscape:locked="false" />
286
+  </sodipodi:namedview>
287
+  <metadata
288
+     id="metadata6522">
289
+    <rdf:RDF>
290
+      <cc:Work
291
+         rdf:about="">
292
+        <dc:format>image/svg+xml</dc:format>
293
+        <dc:type
294
+           rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
295
+        <dc:title></dc:title>
296
+      </cc:Work>
297
+    </rdf:RDF>
298
+  </metadata>
299
+  <g
300
+     inkscape:label="BACKGROUND"
301
+     inkscape:groupmode="layer"
302
+     id="layer1"
303
+     transform="translate(268,-635.29076)"
304
+     style="display:inline">
305
+    <path
306
+       style="display:inline;fill:#ffffff;fill-opacity:1;stroke:none"
307
+       d="M 48 0 A 48 48 0 0 0 0 48 A 48 48 0 0 0 48 96 A 48 48 0 0 0 96 48 A 48 48 0 0 0 48 0 z "
308
+       id="path6455"
309
+       transform="translate(-268,635.29076)" />
310
+    <path
311
+       inkscape:connector-curvature="0"
312
+       style="display:inline;fill:#326de6;fill-opacity:1;stroke:none"
313
+       d="m -220,635.29076 a 48,48 0 0 0 -48,48 48,48 0 0 0 48,48 48,48 0 0 0 48,-48 48,48 0 0 0 -48,-48 z"
314
+       id="path6455-3" />
315
+    <path
316
+       inkscape:connector-curvature="0"
317
+       style="color:#000000;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:medium;line-height:normal;font-family:sans-serif;text-indent:0;text-align:start;text-decoration:none;text-decoration-line:none;text-decoration-style:solid;text-decoration-color:#000000;letter-spacing:normal;word-spacing:normal;text-transform:none;direction:ltr;block-progression:tb;writing-mode:lr-tb;baseline-shift:baseline;text-anchor:start;white-space:normal;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:#326de6;fill-opacity:1;fill-rule:nonzero;stroke:#ffffff;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
318
+       d="m -257.18545,693.54003 a 5.0524169,5.01107 0 0 0 0.28787,0.39638 l 18.28736,22.73877 a 5.0524169,5.01107 0 0 0 3.95007,1.88616 l 29.32654,-0.003 a 5.0524169,5.01107 0 0 0 3.94943,-1.88675 l 18.28255,-22.74294 a 5.0524169,5.01107 0 0 0 0.97485,-4.2391 l -6.52857,-28.3566 a 5.0524169,5.01107 0 0 0 -2.73381,-3.39906 l -26.4238,-12.61752 a 5.0524169,5.01107 0 0 0 -4.38381,4.3e-4 l -26.42114,12.62305 a 5.0524169,5.01107 0 0 0 -2.73296,3.39983 l -6.52262,28.35798 a 5.0524169,5.01107 0 0 0 0.68804,3.84268 z"
319
+       id="path4809" />
320
+    <path
321
+       inkscape:connector-curvature="0"
322
+       style="color:#000000;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:medium;line-height:normal;font-family:sans-serif;text-indent:0;text-align:start;text-decoration:none;text-decoration-line:none;text-decoration-style:solid;text-decoration-color:#000000;letter-spacing:normal;word-spacing:normal;text-transform:none;direction:ltr;block-progression:tb;writing-mode:lr-tb;baseline-shift:baseline;text-anchor:start;white-space:normal;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:#ffffff;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:162.01495361;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
323
+       d="m -220.00092,654.47612 a 1.7570372,1.5813881 89.992522 0 0 -1.58016,1.7337 l 0,0 c 0,0.002 0,0.004 0,0.006 a 1.7570372,1.5813881 89.992522 0 0 -0.001,0.0173 1.7570372,1.5813881 89.992522 0 0 0,0.0131 c -0.001,0.13631 -0.006,0.31246 -0.001,0.43612 0.0208,0.55969 0.14346,0.9889 0.21689,1.50455 0.13304,1.10367 0.24493,2.01854 0.17614,2.86887 -0.0625,0.42557 -0.30904,0.59289 -0.51489,0.78987 l 0.0397,0.0262 -0.0412,0 -0.038,0.67223 c -0.95591,0.0788 -1.91024,0.22371 -2.85599,0.43845 -3.98544,0.90489 -7.57612,2.97226 -10.34179,5.89656 l -0.54193,-0.38396 -0.0253,0.0319 0.004,-0.0469 c -0.28236,0.0381 -0.56709,0.1263 -0.93877,-0.0902 -0.70771,-0.47639 -1.35312,-1.13418 -2.13306,-1.92631 -0.35738,-0.37891 -0.61639,-0.74248 -1.04099,-1.10772 -0.1016,-0.0874 -0.26199,-0.20657 -0.37064,-0.29335 l 0,4.4e-4 a 1.7570372,1.5813881 38.563515 0 0 -2.3411,0.15501 1.7570372,1.5813881 38.563515 0 0 0.37023,2.31639 c 0,3.2e-4 0.001,8.6e-4 0.002,0.001 a 1.7570372,1.5813881 38.563515 0 0 0.0163,0.0141 1.7570372,1.5813881 38.563515 0 0 0.0131,0.01 c 0.10575,0.0854 0.23888,0.19822 0.33769,0.27118 0.45058,0.33267 0.86259,0.50424 1.31153,0.76833 0.94583,0.5841 1.73078,1.06695 2.35271,1.65091 0.29378,0.31418 0.27106,0.61128 0.29673,0.89504 l 0.0454,-0.0148 -0.0258,0.0323 0.49758,0.44499 c -0.10015,0.15062 -0.19867,0.30255 -0.29503,0.45618 -2.58821,4.1258 -3.61033,9.02674 -2.92083,13.81975 l -0.66231,0.19113 0.009,0.0397 -0.034,-0.0325 c -0.14624,0.24452 -0.25455,0.52211 -0.65555,0.67772 -0.8137,0.2563 -1.73054,0.35084 -2.83613,0.46674 -0.51908,0.0432 -0.96461,0.0189 -1.51491,0.12313 -0.125,0.0237 -0.30319,0.0703 -0.43716,0.10138 a 1.5813881,1.7570372 77.134524 0 0 -0.001,2.1e-4 1.5813881,1.7570372 77.134524 0 0 -0.002,6.5e-4 c -0.006,0.001 -0.0142,0.004 -0.0201,0.005 l 0,4.3e-4 a 1.5813881,1.7570372 77.134524 0 0 -1.33834,1.92694 1.5813881,1.7570372 77.134524 0 0 2.04183,1.15481 l 0,2.2e-4 c 0.002,-4.3e-4 0.005,-0.001 0.007,-0.001 a 1.5813881,1.7570372 77.134524 0 0 0.0167,-0.003 1.5813881,1.7570372 77.134524 0 0 0.0133,-0.003 c 0.13297,-0.0295 0.30563,-0.0637 0.42493,-0.0957 0.54102,-0.14485 0.932,-0.36002 1.41839,-0.54636 1.04638,-0.3753 1.91325,-0.68798 2.75757,-0.81014 0.4288,-0.0338 0.64706,0.16923 0.88491,0.32608 l 0.0167,-0.0444 0.009,0.0399 0.69102,-0.11722 c 1.48124,4.58169 4.5304,8.52505 8.63978,11.10991 0.15635,0.0983 0.31414,0.19399 0.47265,0.28786 l -0.28025,0.67794 0.0365,0.0175 -0.0465,0.006 c 0.10001,0.26679 0.24935,0.52463 0.12101,0.93517 -0.30694,0.79598 -0.80469,1.5717 -1.40339,2.50836 -0.28989,0.43274 -0.58657,0.76603 -0.84816,1.26126 -0.0626,0.1185 -0.14316,0.30159 -0.20359,0.42682 l 0,2.2e-4 a 1.5813881,1.7570372 25.705524 0 0 0.67223,2.24796 1.5813881,1.7570372 25.705524 0 0 2.17573,-0.87646 l 0,0 a 1.5813881,1.7570372 25.705524 0 0 0.011,-0.0207 1.5813881,1.7570372 25.705524 0 0 0.006,-0.0139 c 0.0598,-0.12223 0.14038,-0.27762 0.18965,-0.3905 0.22406,-0.5133 0.29968,-0.95318 0.45724,-1.44964 0.35898,-1.05209 0.65505,-1.92476 1.08597,-2.66105 0.24093,-0.35632 0.53563,-0.40034 0.80655,-0.4885 l -0.0245,-0.041 0.0372,0.0179 0.35313,-0.63823 c 3.7664,1.43844 7.901,1.74996 11.88393,0.84563 0.92881,-0.21089 1.83569,-0.48616 2.71618,-0.81944 l 0.32482,0.58691 0.0365,-0.0177 -0.0241,0.0403 c 0.27093,0.0882 0.56581,0.13198 0.80677,0.48828 0.43094,0.73627 0.72692,1.60898 1.08596,2.66106 0.15759,0.49644 0.23336,0.93635 0.45745,1.44964 0.0509,0.11657 0.13584,0.2797 0.19599,0.40338 a 1.7570372,1.5813881 64.276524 0 0 4.3e-4,0.001 1.7570372,1.5813881 64.276524 0 0 0.001,0.003 c 0.003,0.005 
0.006,0.0125 0.009,0.0177 l 4.3e-4,-2.1e-4 a 1.7570372,1.5813881 64.276524 0 0 2.17636,0.87625 1.7570372,1.5813881 64.276524 0 0 0.6716,-2.24775 c -3.2e-4,-7.6e-4 -7.5e-4,-0.002 -0.001,-0.003 a 1.7570372,1.5813881 64.276524 0 0 -0.008,-0.019 1.7570372,1.5813881 64.276524 0 0 -0.007,-0.0141 c -0.0582,-0.12287 -0.12948,-0.28243 -0.18691,-0.39113 -0.26162,-0.49522 -0.55825,-0.82854 -0.84816,-1.26126 -0.59875,-0.93663 -1.09661,-1.71219 -1.4036,-2.50815 -0.12837,-0.41053 0.021,-0.66838 0.12101,-0.93517 l -0.0471,-0.007 0.0372,-0.0177 -0.25407,-0.61416 c 2.7978,-1.64833 5.19025,-3.95266 6.94747,-6.7538 0.93056,-1.48338 1.65783,-3.06733 2.17699,-4.71007 l 0.6452,0.10919 0.009,-0.0393 0.0165,0.0437 c 0.23784,-0.15687 0.45611,-0.36007 0.88491,-0.3263 0.84433,0.12212 1.71117,0.43469 2.75757,0.80993 0.4864,0.18632 0.87757,0.40154 1.4186,0.54636 0.123,0.0329 0.30378,0.0683 0.43802,0.0984 a 1.7570372,1.5813881 12.847522 0 0 6.5e-4,2.2e-4 1.7570372,1.5813881 12.847522 0 0 0.002,4.3e-4 c 0.006,0.001 0.0148,0.003 0.0209,0.004 l 0,-4.3e-4 a 1.7570372,1.5813881 12.847522 0 0 2.04205,-1.15566 1.7570372,1.5813881 12.847522 0 0 -1.33856,-1.9261 l 0,-2.1e-4 c -0.001,-3.3e-4 -0.003,-7.6e-4 -0.005,-0.001 a 1.7570372,1.5813881 12.847522 0 0 -0.0182,-0.005 1.7570372,1.5813881 12.847522 0 0 -0.0142,-0.003 c -0.13255,-0.0311 -0.30248,-0.0751 -0.42366,-0.098 -0.55029,-0.10421 -0.99583,-0.0802 -1.5149,-0.12333 -1.1056,-0.11585 -2.02242,-0.21028 -2.83614,-0.46653 -0.401,-0.1556 -0.50951,-0.43321 -0.65576,-0.67773 l -0.0344,0.0327 0.009,-0.0399 -0.63971,-0.18458 c 0.33339,-2.4373 0.2306,-4.93876 -0.33116,-7.38443 -0.56693,-2.46819 -1.57919,-4.78593 -2.96306,-6.85201 l 0.54214,-0.48511 -0.0254,-0.0317 0.0448,0.0146 c 0.0256,-0.28376 0.003,-0.58085 0.29652,-0.89505 0.6219,-0.58398 1.40691,-1.06696 2.35271,-1.65112 0.44892,-0.26411 0.86096,-0.43584 1.31152,-0.76854 0.10781,-0.0796 0.25975,-0.20947 0.36853,-0.29609 l -4.3e-4,-4.3e-4 a 1.5813881,1.7570372 51.418518 0 0 0.3698,-2.31681 1.5813881,1.7570372 51.418518 0 0 -2.34067,-0.15438 l -2.2e-4,-2.2e-4 c -0.002,0.001 -0.004,0.003 -0.005,0.004 a 1.5813881,1.7570372 51.418518 0 0 -0.0133,0.01 1.5813881,1.7570372 51.418518 0 0 -0.0101,0.009 c -0.10704,0.0844 -0.24794,0.19008 -0.34171,0.27075 -0.42458,0.36526 -0.68363,0.729 -1.04098,1.10793 -0.7799,0.79217 -1.42517,1.44988 -2.13286,1.92631 -0.37167,0.2165 -0.65641,0.12829 -0.93876,0.0902 l 0.004,0.0473 -0.0256,-0.0321 -0.5981,0.42408 c -1.17349,-1.2354 -2.50418,-2.33114 -3.97046,-3.25345 -2.79201,-1.75621 -5.93598,-2.79809 -9.15911,-3.08176 l -0.0386,-0.68237 -0.041,0 0.0395,-0.026 c -0.20585,-0.19698 -0.45241,-0.3643 -0.5149,-0.78987 -0.0688,-0.85033 0.0431,-1.7652 0.17614,-2.86887 0.0734,-0.51565 0.19607,-0.94486 0.2169,-1.50455 0.005,-0.13393 -0.002,-0.3336 -0.002,-0.47265 l -4.3e-4,0 a 1.7570372,1.5813881 89.992522 0 0 -1.5808,-1.7337 z m -1.9808,12.26789 -0.46864,8.29215 -0.0351,0.0169 0,0.001 c -0.0315,0.74124 -0.64131,1.33328 -1.39029,1.33328 -0.3068,0 -0.59017,-0.0987 -0.82029,-0.26674 l -0.001,-6.5e-4 -0.0167,0.008 -6.79033,-4.81208 c 2.14503,-2.10939 4.84984,-3.60368 7.83449,-4.28134 0.55966,-0.12707 1.12309,-0.22308 1.68787,-0.2904 z m 3.96392,0.0104 c 2.36344,0.29447 4.65849,1.1027 6.71558,2.39664 0.99832,0.62796 1.91746,1.35671 2.74743,2.16897 l -6.74557,4.78293 -0.0239,-0.0114 0.001,-0.003 -0.004,0.003 c -0.59912,0.43759 -1.4422,0.32976 -1.9092,-0.25575 -0.19131,-0.23987 -0.29078,-0.52268 -0.30285,-0.8074 l 0,-0.001 -0.009,-0.004 -0.46907,-8.26807 z m -15.95367,7.66299 6.19921,5.54387 -0.008,0.0351 0.001,8.6e-4 c 
0.55991,0.48676 0.64227,1.33271 0.17529,1.91829 -0.1913,0.23988 -0.44485,0.39986 -0.71975,0.47497 l -0.001,2.2e-4 -0.007,0.0289 -7.95973,2.29801 c -0.38458,-3.58212 0.42104,-7.20565 2.31976,-10.30019 z m 27.89759,0.001 c 0.94083,1.51982 1.63722,3.19567 2.04478,4.97005 0.40284,1.75377 0.50993,3.54284 0.33475,5.29824 l -7.98908,-2.30435 -0.007,-0.0304 0.003,-0.001 -0.004,-0.001 c -0.71566,-0.19559 -1.1569,-0.922 -0.99029,-1.65218 0.0683,-0.29912 0.22737,-0.55328 0.44245,-0.74023 l 6.5e-4,-8.7e-4 -0.004,-0.0177 6.16921,-5.52 z m -11.47465,0.60781 0.006,0.003 c 0.0123,0.28499 0.11201,0.56815 0.30349,0.80824 0.46787,0.5866 1.31226,0.6948 1.91279,0.25702 l 0.0226,0.011 -1.03,0.73031 -0.42493,-0.53073 -0.71721,0 -0.0727,-1.27879 z m -4.90204,0.0139 -0.0716,1.26401 -0.728,-2.2e-4 -0.43991,0.54932 -1.02029,-0.72313 0.0133,-0.006 c 0.23053,0.16811 0.51397,0.26695 0.82112,0.26695 0.75044,0 1.36158,-0.59285 1.39368,-1.33539 l 0.0317,-0.0152 z m 10.20451,4.90078 0.003,0.0141 c -0.21516,0.18735 -0.37433,0.44187 -0.44266,0.7413 -0.16691,0.73147 0.27498,1.45891 0.99156,1.65555 l 0.007,0.0291 -1.18101,-0.34066 0.15629,-0.68131 -0.46716,-0.58332 0.93305,-0.83486 z m -15.52324,0.0228 0.92461,0.82704 -0.45365,0.56621 0.1641,0.7168 -1.19368,0.34467 0.006,-0.0253 c 0.27515,-0.0754 0.52909,-0.23548 0.72059,-0.47562 0.46781,-0.58664 0.38525,-1.43399 -0.17508,-1.92208 l 0.007,-0.0317 z m 6.49381,0.42471 2.54089,4.4e-4 1.58418,1.97847 -0.56579,2.46676 -2.28957,1.09737 -2.28935,-1.09822 -0.56494,-2.46697 1.58458,-1.97763 0,-2.1e-4 z m 7.2799,6.56732 1.21014,0.20486 -0.0131,0.0163 c -0.28066,-0.0514 -0.57901,-0.0173 -0.85576,0.11595 -0.67606,0.32562 -0.96952,1.12491 -0.67625,1.80783 l -0.01,0.0122 -0.47117,-1.13855 0.65407,-0.31341 0.16177,-0.70518 z m -12.01277,0.0334 0.15353,0.66991 0.67689,0.32482 -0.48638,1.17635 -0.022,-0.0279 c 0.11254,-0.26213 0.14564,-0.56039 0.0773,-0.85978 -0.16699,-0.73155 -0.88085,-1.1955 -1.61184,-1.06168 l -0.0114,-0.0141 1.22387,-0.2076 z m 12.99736,0.16874 c 0.0717,0.002 0.14281,0.01 0.21288,0.0224 l 8.7e-4,2.2e-4 0.0156,-0.0192 8.2195,1.39135 c -0.39575,1.11054 -0.91289,2.18329 -1.54912,3.19749 -1.29522,2.0647 -3.02131,3.79124 -5.03594,5.07882 l -3.19158,-7.71242 0.0106,-0.0133 0.003,0.002 -0.002,-0.005 c -0.29328,-0.68147 -6.4e-4,-1.47924 0.67414,-1.80424 0.20732,-0.0998 0.42681,-0.14402 0.64182,-0.13812 z m -13.93887,0.0334 c 0.62751,0.009 1.19144,0.44302 1.33729,1.08195 0.0683,0.29912 0.0354,0.59717 -0.0773,0.85893 l 0,0.001 0.0241,0.0304 -3.15631,7.633 c -3.04627,-1.95514 -5.34809,-4.84823 -6.57344,-8.2159 l 8.15909,-1.38333 0.0137,0.0171 0.001,-2.2e-4 c 0.0912,-0.0168 0.18217,-0.0243 0.27182,-0.023 z m 7.5874,2.96476 0.61731,1.11553 -0.0291,-2.2e-4 c -0.13477,-0.25151 -0.34754,-0.46335 -0.62429,-0.59662 -0.67604,-0.32555 -1.48371,-0.0566 -1.83487,0.59831 l -0.004,0 0.61163,-1.1056 0.61943,0.29715 0.64394,-0.30856 z m -0.69125,0.38479 c 0.21862,-0.008 0.44235,0.0355 0.65323,0.13707 0.27643,0.13312 0.48883,0.3447 0.62323,0.59599 l 6.5e-4,8.7e-4 0.0334,2.1e-4 4.022,7.2668 c -0.52538,0.17723 -1.06144,0.32888 -1.6072,0.4528 -2.98163,0.67698 -6.06756,0.49925 -8.91854,-0.47266 l 4.00763,-7.24525 0.004,0 0.003,0 0,-0.003 0.001,0.002 c 0.24058,-0.44975 0.69686,-0.71712 1.17783,-0.73474 z"
324
+       id="path3847" />
325
+  </g>
326
+  <g
327
+     inkscape:groupmode="layer"
328
+     id="layer3"
329
+     inkscape:label="PLACE YOUR PICTOGRAM HERE"
330
+     style="display:inline">
331
+    <g
332
+       id="g4185" />
333
+  </g>
334
+  <style
335
+     id="style4217"
336
+     type="text/css">
337
+	.st0{fill:#419EDA;}
338
+</style>
339
+  <style
340
+     id="style4285"
341
+     type="text/css">
342
+	.st0{clip-path:url(#SVGID_2_);fill:#EFBF1B;}
343
+	.st1{clip-path:url(#SVGID_2_);fill:#40BEB0;}
344
+	.st2{clip-path:url(#SVGID_2_);fill:#0AA5DE;}
345
+	.st3{clip-path:url(#SVGID_2_);fill:#231F20;}
346
+	.st4{fill:#D7A229;}
347
+	.st5{fill:#009B8F;}
348
+</style>
349
+  <style
350
+     id="style4240"
351
+     type="text/css">
352
+	.st0{fill:#E8478B;}
353
+	.st1{fill:#40BEB0;}
354
+	.st2{fill:#37A595;}
355
+	.st3{fill:#231F20;}
356
+</style>
357
+  <style
358
+     id="style4812"
359
+     type="text/css">
360
+	.st0{fill:#0AA5DE;}
361
+	.st1{fill:#40BEB0;}
362
+	.st2{opacity:0.26;fill:#353535;}
363
+	.st3{fill:#231F20;}
364
+</style>
365
+</svg>
Back to file index

layer.yaml

 1
--- 
 2
+++ layer.yaml
 3
@@ -0,0 +1,51 @@
 4
+"options":
 5
+  "basic":
 6
+    "packages":
 7
+    - "git"
 8
+    - "build-essential"
 9
+    - "software-properties-common"
10
+    - "cifs-utils"
11
+    - "ceph-common"
12
+    - "nfs-common"
13
+    - "socat"
14
+    "use_venv": !!bool "false"
15
+    "include_system_packages": !!bool "false"
16
+  "tls-client":
17
+    "ca_certificate_path": "/root/cdk/ca.crt"
18
+    "server_certificate_path": "/root/cdk/server.crt"
19
+    "server_key_path": "/root/cdk/server.key"
20
+    "client_certificate_path": "/root/cdk/client.crt"
21
+    "client_key_path": "/root/cdk/client.key"
22
+  "nagios": {}
23
+  "nvidia-cuda": {}
24
+  "snap": {}
25
+  "debug": {}
26
+  "docker":
27
+    "skip-install": !!bool "false"
28
+  "kubernetes-worker": {}
29
+"includes":
30
+- "layer:basic"
31
+- "interface:nrpe-external-master"
32
+- "layer:debug"
33
+- "layer:nagios"
34
+- "interface:dockerhost"
35
+- "interface:sdn-plugin"
36
+- "interface:tls-certificates"
37
+- "layer:snap"
38
+- "layer:docker"
39
+- "layer:metrics"
40
+- "layer:tls-client"
41
+- "layer:nvidia-cuda"
42
+- "interface:http"
43
+- "interface:kubernetes-cni"
44
+- "interface:kube-dns"
45
+- "interface:kube-control"
46
+"repo": "https://github.com/kubernetes/kubernetes.git"
47
+"exclude":
48
+- "LAYER_README.md"
49
+- "tests/10-deploy.py"
50
+- "tests/tests.yaml"
51
+"config":
52
+  "deletes":
53
+  - "install_from_upstream"
54
+"is": "kubernetes-worker"
Back to file index

lib/charms/kubernetes/common.py

 1
--- 
 2
+++ lib/charms/kubernetes/common.py
 3
@@ -0,0 +1,35 @@
 4
+#!/usr/bin/env python
 5
+
 6
+# Copyright 2015 The Kubernetes Authors.
 7
+#
 8
+# Licensed under the Apache License, Version 2.0 (the "License");
 9
+# you may not use this file except in compliance with the License.
10
+# You may obtain a copy of the License at
11
+#
12
+#     http://www.apache.org/licenses/LICENSE-2.0
13
+#
14
+# Unless required by applicable law or agreed to in writing, software
15
+# distributed under the License is distributed on an "AS IS" BASIS,
16
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
17
+# See the License for the specific language governing permissions and
18
+# limitations under the License.
19
+
20
+import re
21
+import subprocess
22
+
23
+
24
+def get_version(bin_name):
25
+    """Get the version of an installed Kubernetes binary.
26
+
27
+    :param str bin_name: Name of binary
28
+    :return: 3-tuple version (maj, min, patch)
29
+
30
+    Example::
31
+
32
+        >>> get_version('kubelet')
33
+        (1, 6, 0)
34
+
35
+    """
36
+    cmd = '{} --version'.format(bin_name).split()
37
+    version_string = subprocess.check_output(cmd).decode('utf-8')
38
+    return tuple(int(q) for q in re.findall("[0-9]+", version_string)[:3])
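The returned tuple compares element-wise, so callers can gate version-dependent behaviour directly; a rough illustration (the flag name is a placeholder, not a real kubelet option):

```python
from charms.kubernetes.common import get_version

kubelet_opts = []
if get_version('kubelet') >= (1, 6, 0):
    # Placeholder flag name for illustration only.
    kubelet_opts.append('--some-new-flag')
```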
Back to file index

lib/charms/kubernetes/flagmanager.py

  1
--- 
  2
+++ lib/charms/kubernetes/flagmanager.py
  3
@@ -0,0 +1,149 @@
  4
+#!/usr/bin/env python
  5
+
  6
+# Copyright 2015 The Kubernetes Authors.
  7
+#
  8
+# Licensed under the Apache License, Version 2.0 (the "License");
  9
+# you may not use this file except in compliance with the License.
 10
+# You may obtain a copy of the License at
 11
+#
 12
+#     http://www.apache.org/licenses/LICENSE-2.0
 13
+#
 14
+# Unless required by applicable law or agreed to in writing, software
 15
+# distributed under the License is distributed on an "AS IS" BASIS,
 16
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 17
+# See the License for the specific language governing permissions and
 18
+# limitations under the License.
 19
+
 20
+from charmhelpers.core import unitdata
 21
+
 22
+
 23
+class FlagManager:
 24
+    '''
 25
+    FlagManager - A Python class for managing the flags to pass to an
 26
+    application without remembering what's been set previously.
 27
+
 28
+    This is a blind class assuming the operator knows what they are doing.
 29
+    Each instance of this class should be initialized with the intended
 30
+    application to manage flags. Flags are then appended to a data-structure
 31
+    and cached in unitdata for later recall.
 32
+
 33
+    The underlying data provider is backed by an SQLite database on each unit,
 34
+    tracking the dictionary, provided by the 'charmhelpers' Python package.
 35
+    Summary:
 36
+    opts = FlagManager('docker')
 37
+    opts.add('bip', '192.168.22.2')
 38
+    opts.to_s()
 39
+    '''
 40
+
 41
+    def __init__(self, daemon, opts_path=None):
 42
+        self.db = unitdata.kv()
 43
+        self.daemon = daemon
 44
+        if not self.db.get(daemon):
 45
+            self.data = {}
 46
+        else:
 47
+            self.data = self.db.get(daemon)
 48
+
 49
+    def __save(self):
 50
+        self.db.set(self.daemon, self.data)
 51
+
 52
+    def add(self, key, value, strict=False):
 53
+        '''
 54
+        Adds data to the map of values for the DockerOpts file.
 55
+        Supports single values, or "multiopt variables". If you
 56
+        have a flag only option, like --tlsverify, set the value
 57
+        to None. To preserve the exact value, pass strict=True.
 58
+        eg:
 59
+        opts.add('label', 'foo')
 60
+        opts.add('label', 'foo, bar, baz')
 61
+        opts.add('flagonly', None)
 62
+        opts.add('cluster-store', 'consul://a:4001,b:4001,c:4001/swarm',
 63
+                 strict=True)
 64
+        '''
 65
+        if strict:
 66
+            self.data['{}-strict'.format(key)] = value
 67
+            self.__save()
 68
+            return
 69
+
 70
+        if value:
 71
+            values = [x.strip() for x in value.split(',')]
 72
+            # handle updates
 73
+            if key in self.data and self.data[key] is not None:
 74
+                item_data = self.data[key]
 75
+                for c in values:
 76
+                    c = c.strip()
 77
+                    if c not in item_data:
 78
+                        item_data.append(c)
 79
+                self.data[key] = item_data
 80
+            else:
 81
+                # handle new
 82
+                self.data[key] = values
 83
+        else:
 84
+            # handle flagonly
 85
+            self.data[key] = None
 86
+        self.__save()
 87
+
 88
+    def remove(self, key, value):
 89
+        '''
 90
+        Remove a flag value from the DockerOpts manager
 91
+        Assuming the data is currently {'foo': ['bar', 'baz']}
 92
+        d.remove('foo', 'bar')
 93
+        > {'foo': ['baz']}
 94
+        :params key:
 95
+        :params value:
 96
+        '''
 97
+        self.data[key].remove(value)
 98
+        self.__save()
 99
+
100
+    def destroy(self, key, strict=False):
101
+        '''
102
+        Destructively remove all values and key from the FlagManager
103
+        Assuming the data is currently {'foo': ['bar', 'baz']}
104
+        d.destroy('foo')
105
+        >{}
106
+        :params key:
107
+        :params strict:
108
+        '''
109
+        try:
110
+            if strict:
111
+                self.data.pop('{}-strict'.format(key))
112
+            else:
113
+                self.data.pop(key)
114
+            self.__save()
115
+        except KeyError:
116
+            pass
117
+
118
+    def get(self, key, default=None):
119
+        """Return the value for ``key``, or the default if ``key`` doesn't exist.
120
+
121
+        """
122
+        return self.data.get(key, default)
123
+
124
+    def destroy_all(self):
125
+        '''
126
+        Destructively removes all data from the FlagManager.
127
+        '''
128
+        self.data.clear()
129
+        self.__save()
130
+
131
+    def to_s(self):
132
+        '''
133
+        Render the flags to a single string, prepared for the Docker
134
+        Defaults file. Typically in /etc/default/docker
135
+        d.to_s()
136
+        > "--foo=bar --foo=baz"
137
+        '''
138
+        flags = []
139
+        for key in self.data:
140
+            if self.data[key] is None:
141
+                # handle flagonly
142
+                flags.append("{}".format(key))
143
+            elif '-strict' in key:
144
+                # handle strict values, and do it in 2 steps.
145
+                # If we rstrip -strict it strips a tailing s
146
+                proper_key = key.rstrip('strict').rstrip('-')
147
+                flags.append("{}={}".format(proper_key, self.data[key]))
148
+            else:
149
+                # handle multiopt and typical flags
150
+                for item in self.data[key]:
151
+                    flags.append("{}={}".format(key, item))
152
+        return ' '.join(flags)
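A brief usage sketch based on the docstrings above; the flag names are illustrative, and the rendered order may vary since the values live in a dict:

```python
from charms.kubernetes.flagmanager import FlagManager

opts = FlagManager('kubelet')
opts.add('--v', '3')                      # simple key/value flag
opts.add('--node-labels', 'role=worker')  # comma lists become repeated flags
opts.add('--fail-swap-on', None)          # value-less flag (illustrative name)
opts.to_s()
# e.g. "--v=3 --node-labels=role=worker --fail-swap-on"
```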
Back to file index

lib/charms/layer/__init__.py

 1
--- 
 2
+++ lib/charms/layer/__init__.py
 3
@@ -0,0 +1,21 @@
 4
+import os
 5
+
 6
+
 7
+class LayerOptions(dict):
 8
+    def __init__(self, layer_file, section=None):
 9
+        import yaml  # defer, might not be available until bootstrap
10
+        with open(layer_file) as f:
11
+            layer = yaml.safe_load(f.read())
12
+        opts = layer.get('options', {})
13
+        if section and section in opts:
14
+            super(LayerOptions, self).__init__(opts.get(section))
15
+        else:
16
+            super(LayerOptions, self).__init__(opts)
17
+
18
+
19
+def options(section=None, layer_file=None):
20
+    if not layer_file:
21
+        base_dir = os.environ.get('JUJU_CHARM_DIR', os.getcwd())
22
+        layer_file = os.path.join(base_dir, 'layer.yaml')
23
+
24
+    return LayerOptions(layer_file, section)
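For reference, a small sketch of how a handler in this charm might read the `tls-client` section defined in layer.yaml above:

```python
from charms import layer

# Reads the 'tls-client' options from layer.yaml (the /root/cdk paths).
tls_opts = layer.options('tls-client')
ca_path = tls_opts['ca_certificate_path']
server_cert_path = tls_opts['server_certificate_path']
server_key_path = tls_opts['server_key_path']
```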
Back to file index

lib/charms/layer/basic.py

  1
--- 
  2
+++ lib/charms/layer/basic.py
  3
@@ -0,0 +1,205 @@
  4
+import os
  5
+import sys
  6
+import shutil
  7
+from glob import glob
  8
+from subprocess import check_call, CalledProcessError
  9
+from time import sleep
 10
+
 11
+from charms.layer.execd import execd_preinstall
 12
+
 13
+
 14
+def lsb_release():
 15
+    """Return /etc/lsb-release in a dict"""
 16
+    d = {}
 17
+    with open('/etc/lsb-release', 'r') as lsb:
 18
+        for l in lsb:
 19
+            k, v = l.split('=')
 20
+            d[k.strip()] = v.strip()
 21
+    return d
 22
+
 23
+
 24
+def bootstrap_charm_deps():
 25
+    """
 26
+    Set up the base charm dependencies so that the reactive system can run.
 27
+    """
 28
+    # execd must happen first, before any attempt to install packages or
 29
+    # access the network, because sites use this hook to do bespoke
 30
+    # configuration and install secrets so the rest of this bootstrap
 31
+    # and the charm itself can actually succeed. This call does nothing
 32
+    # unless the operator has created and populated $JUJU_CHARM_DIR/exec.d.
 33
+    execd_preinstall()
 34
+    # ensure that $JUJU_CHARM_DIR/bin is on the path, for helper scripts
 35
+    charm_dir = os.environ['JUJU_CHARM_DIR']
 36
+    os.environ['PATH'] += ':%s' % os.path.join(charm_dir, 'bin')
 37
+    venv = os.path.abspath('../.venv')
 38
+    vbin = os.path.join(venv, 'bin')
 39
+    vpip = os.path.join(vbin, 'pip')
 40
+    vpy = os.path.join(vbin, 'python')
 41
+    if os.path.exists('wheelhouse/.bootstrapped'):
 42
+        activate_venv()
 43
+        return
 44
+    # bootstrap wheelhouse
 45
+    if os.path.exists('wheelhouse'):
 46
+        with open('/root/.pydistutils.cfg', 'w') as fp:
 47
+            # make sure that easy_install also only uses the wheelhouse
 48
+            # (see https://github.com/pypa/pip/issues/410)
 49
+            fp.writelines([
 50
+                "[easy_install]\n",
 51
+                "allow_hosts = ''\n",
 52
+                "find_links = file://{}/wheelhouse/\n".format(charm_dir),
 53
+            ])
 54
+        apt_install([
 55
+            'python3-pip',
 56
+            'python3-setuptools',
 57
+            'python3-yaml',
 58
+            'python3-dev',
 59
+        ])
 60
+        from charms import layer
 61
+        cfg = layer.options('basic')
 62
+        # include packages defined in layer.yaml
 63
+        apt_install(cfg.get('packages', []))
 64
+        # if we're using a venv, set it up
 65
+        if cfg.get('use_venv'):
 66
+            if not os.path.exists(venv):
 67
+                series = lsb_release()['DISTRIB_CODENAME']
 68
+                if series in ('precise', 'trusty'):
 69
+                    apt_install(['python-virtualenv'])
 70
+                else:
 71
+                    apt_install(['virtualenv'])
 72
+                cmd = ['virtualenv', '-ppython3', '--never-download', venv]
 73
+                if cfg.get('include_system_packages'):
 74
+                    cmd.append('--system-site-packages')
 75
+                check_call(cmd)
 76
+            os.environ['PATH'] = ':'.join([vbin, os.environ['PATH']])
 77
+            pip = vpip
 78
+        else:
 79
+            pip = 'pip3'
 80
+            # save a copy of system pip to prevent `pip3 install -U pip`
 81
+            # from changing it
 82
+            if os.path.exists('/usr/bin/pip'):
 83
+                shutil.copy2('/usr/bin/pip', '/usr/bin/pip.save')
 84
+        # need newer pip, to fix spurious Double Requirement error:
 85
+        # https://github.com/pypa/pip/issues/56
 86
+        check_call([pip, 'install', '-U', '--no-index', '-f', 'wheelhouse',
 87
+                    'pip'])
 88
+        # install the rest of the wheelhouse deps
 89
+        check_call([pip, 'install', '-U', '--no-index', '-f', 'wheelhouse'] +
 90
+                   glob('wheelhouse/*'))
 91
+        if not cfg.get('use_venv'):
 92
+            # restore system pip to prevent `pip3 install -U pip`
 93
+            # from changing it
 94
+            if os.path.exists('/usr/bin/pip.save'):
 95
+                shutil.copy2('/usr/bin/pip.save', '/usr/bin/pip')
 96
+                os.remove('/usr/bin/pip.save')
 97
+        os.remove('/root/.pydistutils.cfg')
 98
+        # flag us as having already bootstrapped so we don't do it again
 99
+        open('wheelhouse/.bootstrapped', 'w').close()
100
+        # Ensure that the newly bootstrapped libs are available.
101
+        # Note: this only seems to be an issue with namespace packages.
102
+        # Non-namespace-package libs (e.g., charmhelpers) are available
103
+        # without having to reload the interpreter. :/
104
+        reload_interpreter(vpy if cfg.get('use_venv') else sys.argv[0])
105
+
106
+
107
+def activate_venv():
108
+    """
109
+    Activate the venv if enabled in ``layer.yaml``.
110
+
111
+    This is handled automatically for normal hooks, but actions might
112
+    need to invoke this manually, using something like:
113
+
114
+        # Load modules from $JUJU_CHARM_DIR/lib
115
+        import sys
116
+        sys.path.append('lib')
117
+
118
+        from charms.layer.basic import activate_venv
119
+        activate_venv()
120
+
121
+    This will ensure that modules installed in the charm's
122
+    virtual environment are available to the action.
123
+    """
124
+    venv = os.path.abspath('../.venv')
125
+    vbin = os.path.join(venv, 'bin')
126
+    vpy = os.path.join(vbin, 'python')
127
+    from charms import layer
128
+    cfg = layer.options('basic')
129
+    if cfg.get('use_venv') and '.venv' not in sys.executable:
130
+        # activate the venv
131
+        os.environ['PATH'] = ':'.join([vbin, os.environ['PATH']])
132
+        reload_interpreter(vpy)
133
+
134
+
135
+def reload_interpreter(python):
136
+    """
137
+    Reload the python interpreter to ensure that all deps are available.
138
+
139
+    Newly installed modules in namespace packages sometimes seem to
140
+    not be picked up by Python 3.
141
+    """
142
+    os.execve(python, [python] + list(sys.argv), os.environ)
143
+
144
+
145
+def apt_install(packages):
146
+    """
147
+    Install apt packages.
148
+
149
+    This ensures a consistent set of options that are often missed but
150
+    should really be set.
151
+    """
152
+    if isinstance(packages, (str, bytes)):
153
+        packages = [packages]
154
+
155
+    env = os.environ.copy()
156
+
157
+    if 'DEBIAN_FRONTEND' not in env:
158
+        env['DEBIAN_FRONTEND'] = 'noninteractive'
159
+
160
+    cmd = ['apt-get',
161
+           '--option=Dpkg::Options::=--force-confold',
162
+           '--assume-yes',
163
+           'install']
164
+    for attempt in range(3):
165
+        try:
166
+            check_call(cmd + packages, env=env)
167
+        except CalledProcessError:
168
+            if attempt == 2:  # third attempt
169
+                raise
170
+            sleep(5)
171
+        else:
172
+            break
173
+
174
+
175
+def init_config_states():
176
+    import yaml
177
+    from charmhelpers.core import hookenv
178
+    from charms.reactive import set_state
179
+    from charms.reactive import toggle_state
180
+    config = hookenv.config()
181
+    config_defaults = {}
182
+    config_defs = {}
183
+    config_yaml = os.path.join(hookenv.charm_dir(), 'config.yaml')
184
+    if os.path.exists(config_yaml):
185
+        with open(config_yaml) as fp:
186
+            config_defs = yaml.safe_load(fp).get('options', {})
187
+            config_defaults = {key: value.get('default')
188
+                               for key, value in config_defs.items()}
189
+    for opt in config_defs.keys():
190
+        if config.changed(opt):
191
+            set_state('config.changed')
192
+            set_state('config.changed.{}'.format(opt))
193
+        toggle_state('config.set.{}'.format(opt), config.get(opt))
194
+        toggle_state('config.default.{}'.format(opt),
195
+                     config.get(opt) == config_defaults[opt])
196
+    hookenv.atexit(clear_config_states)
197
+
198
+
199
+def clear_config_states():
200
+    from charmhelpers.core import hookenv, unitdata
201
+    from charms.reactive import remove_state
202
+    config = hookenv.config()
203
+    remove_state('config.changed')
204
+    for opt in config.keys():
205
+        remove_state('config.changed.{}'.format(opt))
206
+        remove_state('config.set.{}'.format(opt))
207
+        remove_state('config.default.{}'.format(opt))
208
+    unitdata.kv().flush()
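
The config-state helpers above (init_config_states / clear_config_states) expose charm config as reactive states. A minimal sketch, assuming a hypothetical 'port' option and illustrative handler names, of how a layer built on this base consumes those states:

    # reactive/example.py -- hypothetical consumer of the states set above
    from charmhelpers.core import hookenv
    from charms.reactive import when

    @when('config.changed.port')      # set by init_config_states() when 'port' changes
    def handle_port_change():
        hookenv.log('port changed to {}'.format(hookenv.config('port')))

    @when('config.set.http_proxy')    # set while 'http_proxy' holds a truthy value
    def handle_proxy_set():
        hookenv.log('proxy is {}'.format(hookenv.config('http_proxy')))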

lib/charms/layer/execd.py

  1
--- 
  2
+++ lib/charms/layer/execd.py
  3
@@ -0,0 +1,138 @@
  4
+# Copyright 2014-2016 Canonical Limited.
  5
+#
  6
+# This file is part of layer-basic, the reactive base layer for Juju.
  7
+#
  8
+# charm-helpers is free software: you can redistribute it and/or modify
  9
+# it under the terms of the GNU Lesser General Public License version 3 as
 10
+# published by the Free Software Foundation.
 11
+#
 12
+# charm-helpers is distributed in the hope that it will be useful,
 13
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
 14
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 15
+# GNU Lesser General Public License for more details.
 16
+#
 17
+# You should have received a copy of the GNU Lesser General Public License
 18
+# along with charm-helpers.  If not, see <http://www.gnu.org/licenses/>.
 19
+
 20
+# This module may only import from the Python standard library.
 21
+import os
 22
+import sys
 23
+import subprocess
 24
+import time
 25
+
 26
+'''
 27
+execd/preinstall
 28
+
 29
+It is often necessary to configure and reconfigure machines
 30
+after provisioning, but before attempting to run the charm.
 31
+Common examples are specialized network configuration, enabling
 32
+of custom hardware, non-standard disk partitioning and filesystems,
 33
+adding secrets and keys required for using a secured network.
 34
+
 35
+The reactive framework's base layer invokes this mechanism as
 36
+early as possible, before any network access is made or dependencies
 37
+unpacked or non-standard modules imported (including the charms.reactive
 38
+framework itself).
 39
+
 40
+Operators needing to use this functionality may branch a charm and
 41
+create an exec.d directory in it. The exec.d directory in turn contains
 42
+one or more subdirectories, each of which contains an executable called
 43
+charm-pre-install and any other required resources. The charm-pre-install
 44
+executables are run, and if successful, state saved so they will not be
 45
+run again.
 46
+
 47
+    $JUJU_CHARM_DIR/exec.d/mynamespace/charm-pre-install
 48
+
 49
+An alternative to branching a charm is to compose a new charm that contains
 50
+the exec.d directory, using the original charm as a layer.
 51
+
 52
+A charm author could also abuse this mechanism to modify the charm
 53
+environment in unusual ways, but for most purposes it is saner to use
 54
+charmhelpers.core.hookenv.atstart().
 55
+'''
 56
+
 57
+
 58
+def default_execd_dir():
 59
+    return os.path.join(os.environ['JUJU_CHARM_DIR'], 'exec.d')
 60
+
 61
+
 62
+def execd_module_paths(execd_dir=None):
 63
+    """Generate a list of full paths to modules within execd_dir."""
 64
+    if not execd_dir:
 65
+        execd_dir = default_execd_dir()
 66
+
 67
+    if not os.path.exists(execd_dir):
 68
+        return
 69
+
 70
+    for subpath in os.listdir(execd_dir):
 71
+        module = os.path.join(execd_dir, subpath)
 72
+        if os.path.isdir(module):
 73
+            yield module
 74
+
 75
+
 76
+def execd_submodule_paths(command, execd_dir=None):
 77
+    """Generate a list of full paths to the specified command within exec_dir.
 78
+    """
 79
+    for module_path in execd_module_paths(execd_dir):
 80
+        path = os.path.join(module_path, command)
 81
+        if os.access(path, os.X_OK) and os.path.isfile(path):
 82
+            yield path
 83
+
 84
+
 85
+def execd_sentinel_path(submodule_path):
 86
+    module_path = os.path.dirname(submodule_path)
 87
+    execd_path = os.path.dirname(module_path)
 88
+    module_name = os.path.basename(module_path)
 89
+    submodule_name = os.path.basename(submodule_path)
 90
+    return os.path.join(execd_path,
 91
+                        '.{}_{}.done'.format(module_name, submodule_name))
 92
+
 93
+
 94
+def execd_run(command, execd_dir=None, stop_on_error=True, stderr=None):
 95
+    """Run command for each module within execd_dir which defines it."""
 96
+    if stderr is None:
 97
+        stderr = sys.stdout
 98
+    for submodule_path in execd_submodule_paths(command, execd_dir):
 99
+        # Only run each execd once. We cannot simply run them in the
100
+        # install hook, as potentially storage hooks are run before that.
101
+        # We cannot rely on them being idempotent.
102
+        sentinel = execd_sentinel_path(submodule_path)
103
+        if os.path.exists(sentinel):
104
+            continue
105
+
106
+        try:
107
+            subprocess.check_call([submodule_path], stderr=stderr,
108
+                                  universal_newlines=True)
109
+            with open(sentinel, 'w') as f:
110
+                f.write('{} ran successfully {}\n'.format(submodule_path,
111
+                                                          time.ctime()))
112
+                f.write('Removing this file will cause it to be run again\n')
113
+        except subprocess.CalledProcessError as e:
114
+            # Logs get the details. We can't use juju-log, as the
115
+            # output may be substantial and exceed command line
116
+            # length limits.
117
+            print("ERROR ({}) running {}".format(e.returncode, e.cmd),
118
+                  file=stderr)
119
+            print("STDOUT<<EOM", file=stderr)
120
+            print(e.output, file=stderr)
121
+            print("EOM", file=stderr)
122
+
123
+            # Unit workload status gets a shorter fail message.
124
+            short_path = os.path.relpath(submodule_path)
125
+            block_msg = "Error ({}) running {}".format(e.returncode,
126
+                                                       short_path)
127
+            try:
128
+                subprocess.check_call(['status-set', 'blocked', block_msg],
129
+                                      universal_newlines=True)
130
+                if stop_on_error:
131
+                    sys.exit(0)  # Leave unit in blocked state.
132
+            except Exception:
133
+                pass  # We care about the exec.d/* failure, not status-set.
134
+
135
+            if stop_on_error:
136
+                sys.exit(e.returncode or 1)  # Error state for pre-1.24 Juju
137
+
138
+
139
+def execd_preinstall(execd_dir=None):
140
+    """Run charm-pre-install for each module within execd_dir."""
141
+    execd_run('charm-pre-install', execd_dir=execd_dir)
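
The exec.d mechanism documented in the module docstring above amounts to a one-call bootstrap. A minimal sketch, assuming a hypothetical operator-provided namespace directory; the names are illustrative only:

    # Hypothetical layout added to a branched or composed charm:
    #   $JUJU_CHARM_DIR/exec.d/site-network/charm-pre-install   (executable script)
    #
    # The base layer invokes this once, very early in hook execution:
    from charms.layer.execd import execd_preinstall

    execd_preinstall()  # runs each exec.d/*/charm-pre-install and records a .done sentinel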

lib/charms/layer/snap.py

  1
--- 
  2
+++ lib/charms/layer/snap.py
  3
@@ -0,0 +1,194 @@
  4
+# Copyright 2016-2017 Canonical Ltd.
  5
+#
  6
+# This file is part of the Snap layer for Juju.
  7
+#
  8
+# Licensed under the Apache License, Version 2.0 (the "License");
  9
+# you may not use this file except in compliance with the License.
 10
+# You may obtain a copy of the License at
 11
+#
 12
+#  http://www.apache.org/licenses/LICENSE-2.0
 13
+#
 14
+# Unless required by applicable law or agreed to in writing, software
 15
+# distributed under the License is distributed on an "AS IS" BASIS,
 16
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 17
+# See the License for the specific language governing permissions and
 18
+# limitations under the License.
 19
+
 20
+import os
 21
+import subprocess
 22
+
 23
+from charmhelpers.core import hookenv
 24
+from charms import layer
 25
+from charms import reactive
 26
+from charms.reactive.helpers import any_file_changed, data_changed
 27
+from time import sleep
 28
+
 29
+
 30
+def install(snapname, **kw):
 31
+    '''Install a snap.
 32
+
 33
+    Snap will be installed from the corresponding resource if available,
 34
+    otherwise from the Snap Store.
 35
+
 36
+    Sets the snap.installed.{snapname} state.
 37
+
 38
+    If the snap.installed.{snapname} state is already set then the refresh()
 39
+    function is called.
 40
+    '''
 41
+    installed_state = 'snap.installed.{}'.format(snapname)
 42
+    if reactive.is_state(installed_state):
 43
+        refresh(snapname, **kw)
 44
+    else:
 45
+        if hookenv.has_juju_version('2.0'):
 46
+            res_path = _resource_get(snapname)
 47
+            if res_path is False:
 48
+                _install_store(snapname, **kw)
 49
+            else:
 50
+                _install_local(res_path, **kw)
 51
+        else:
 52
+            _install_store(snapname, **kw)
 53
+        reactive.set_state(installed_state)
 54
+
 55
+
 56
+def refresh(snapname, **kw):
 57
+    '''Update a snap.
 58
+
 59
+    Snap will be pulled from the corresponding resource if available
 60
+    and reinstalled if it has changed. Otherwise a 'snap refresh' is
 61
+    run updating the snap from the Snap Store, potentially switching
 62
+    channel and changing confinement options.
 63
+    '''
 64
+    # Note that once you upload a resource, you can't remove it.
 65
+    # This means we don't need to cope with an operator switching
 66
+    # from a resource provided to a store provided snap, because there
 67
+    # is no way for them to do that. Well, actually the operator could
 68
+    # upload a zero byte resource, but then we would need to uninstall
 69
+    # the snap before reinstalling from the store and that has the
 70
+    # potential for data loss.
 71
+    if hookenv.has_juju_version('2.0'):
 72
+        res_path = _resource_get(snapname)
 73
+        if res_path is False:
 74
+            _refresh_store(snapname, **kw)
 75
+        else:
 76
+            _install_local(res_path, **kw)
 77
+    else:
 78
+        _refresh_store(snapname, **kw)
 79
+
 80
+
 81
+def remove(snapname):
 82
+    hookenv.log('Removing snap {}'.format(snapname))
 83
+    subprocess.check_call(['snap', 'remove', snapname],
 84
+                          universal_newlines=True)
 85
+    reactive.remove_state('snap.installed.{}'.format(snapname))
 86
+
 87
+
 88
+def connect(plug, slot):
 89
+    '''Connect or reconnect a snap plug with a slot.
 90
+
 91
+    Each argument is a '<snap>:<interface>' string, corresponding to
 92
+    the plug and slot arguments of the 'snap connect' command.
 93
+    '''
 94
+    hookenv.log('Connecting {} to {}'.format(plug, slot), hookenv.DEBUG)
 95
+    subprocess.check_call(['snap', 'connect', plug, slot],
 96
+                          universal_newlines=True)
 97
+
 98
+
 99
+def connect_all():
100
+    '''Connect or reconnect all interface connections defined in layer.yaml.
101
+
102
+    This method will fail if called before all referenced snaps have been
103
+    installed.
104
+    '''
105
+    opts = layer.options('snap')
106
+    for snapname, snap_opts in opts.items():
107
+        for plug, slot in snap_opts.get('connect', []):
108
+            connect(plug, slot)
109
+
110
+
111
+def _snap_args(channel='stable', devmode=False, jailmode=False,
112
+               dangerous=False, force_dangerous=False, connect=None,
113
+               classic=False, revision=None):
114
+    if channel != 'stable':
115
+        yield '--channel={}'.format(channel)
116
+    if devmode is True:
117
+        yield '--devmode'
118
+    if jailmode is True:
119
+        yield '--jailmode'
120
+    if force_dangerous is True or dangerous is True:
121
+        yield '--dangerous'
122
+    if classic is True:
123
+        yield '--classic'
124
+    if revision is not None:
125
+        yield '--revision={}'.format(revision)
126
+
127
+
128
+def _install_local(path, **kw):
129
+    key = 'snap.local.{}'.format(path)
130
+    if (data_changed(key, kw) or any_file_changed([path])):
131
+        cmd = ['snap', 'install']
132
+        cmd.extend(_snap_args(**kw))
133
+        cmd.append('--dangerous')
134
+        cmd.append(path)
135
+        hookenv.log('Installing {} from local resource'.format(path))
136
+        subprocess.check_call(cmd, universal_newlines=True)
137
+
138
+
139
+def _install_store(snapname, **kw):
140
+    cmd = ['snap', 'install']
141
+    cmd.extend(_snap_args(**kw))
142
+    cmd.append(snapname)
143
+    hookenv.log('Installing {} from store'.format(snapname))
144
+    # Attempting the snap install 3 times to resolve unexpected EOF.
145
+    # This is a workaround for lp:1677557. Stop doing this once it
146
+    # is resolved everywhere.
147
+    for attempt in range(3):
148
+        try:
149
+            out = subprocess.check_output(cmd, universal_newlines=True,
150
+                                          stderr=subprocess.STDOUT)
151
+            print(out)
152
+            break
153
+        except subprocess.CalledProcessError as x:
154
+            print(x.output)
155
+            # Per https://bugs.launchpad.net/bugs/1622782, we don't
156
+            # get a useful error code out of 'snap install', much like
157
+            # 'snap refresh' below. Remove this when we can rely on
158
+            # snap installs everywhere returning 0 for 'already installed'.
159
+            if "already installed" in x.output:
160
+                break
161
+            if attempt == 2:
162
+                raise
163
+            sleep(5)
164
+
165
+
166
+def _refresh_store(snapname, **kw):
167
+    if not data_changed('snap.opts.{}'.format(snapname), kw):
168
+        return
169
+
170
+    cmd = ['snap', 'refresh']
171
+    cmd.extend(_snap_args(**kw))
172
+    cmd.append(snapname)
173
+    hookenv.log('Refreshing {} from store'.format(snapname))
174
+    # Per https://bugs.launchpad.net/layer-snap/+bug/1588322 we don't get
175
+    # a useful error code out of 'snap refresh'. We are forced to parse
176
+    # the output to see if it is a non-fatal error.
177
+    # subprocess.check_call(cmd, universal_newlines=True)
178
+    try:
179
+        out = subprocess.check_output(cmd, universal_newlines=True,
180
+                                      stderr=subprocess.STDOUT)
181
+        print(out)
182
+    except subprocess.CalledProcessError as x:
183
+        print(x.output)
184
+        if "has no updates available" not in x.output:
185
+            raise
186
+
187
+
188
+def _resource_get(snapname):
189
+    '''Used to fetch the resource path of the given name.
190
+
191
+    This wrapper obtains a resource path and adds an additional
192
+    check to return False if the resource is zero length.
193
+    '''
194
+    res_path = hookenv.resource_get(snapname)
195
+    if res_path and os.stat(res_path).st_size != 0:
196
+        return res_path
197
+    return False
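
A short sketch of how a consuming layer might drive the helpers above; the snap name, channel, and interface names are illustrative only, not part of this charm:

    from charms.layer import snap

    # Installs from an attached resource when one is present (Juju 2.0+),
    # otherwise from the Snap Store; calling install() again refreshes.
    snap.install('example-snap', channel='stable', classic=True)
    snap.connect('example-snap:home', 'core:home')  # plug and slot as '<snap>:<interface>' strings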

lib/debug_script.py

 1
--- 
 2
+++ lib/debug_script.py
 3
@@ -0,0 +1,8 @@
 4
+import os
 5
+
 6
+dir = os.environ["DEBUG_SCRIPT_DIR"]
 7
+
 8
+
 9
+def open_file(path, *args, **kwargs):
10
+    """ Open a file within the debug script dir """
11
+    return open(os.path.join(dir, path), *args, **kwargs)
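
A small usage sketch for the helper above; the directory value is hypothetical and would normally be exported by the debug hook that loads this module:

    import os
    os.environ['DEBUG_SCRIPT_DIR'] = '/tmp/debug-output'  # hypothetical; must be set before import
    import debug_script

    with debug_script.open_file('notes.txt', 'w') as f:
        f.write('collected debug output\n')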

metadata.yaml

 1
--- 
 2
+++ metadata.yaml
 3
@@ -0,0 +1,57 @@
 4
+"name": "kubernetes-worker"
 5
+"summary": "The workload bearing units of a kubernetes cluster"
 6
+"maintainers":
 7
+- "Charles Butler <charles.butler@canonical.com>"
 8
+- "Matthew Bruzek <matthew.bruzek@canonical.com>"
 9
+"description": |
10
+  Kubernetes is an open-source platform for deploying, scaling, and operations
11
+  of application containers across a cluster of hosts. Kubernetes is portable
12
+  in that it works with public, private, and hybrid clouds. Extensible through
13
+  a pluggable infrastructure. Self healing in that it will automatically
14
+  restart and place containers on healthy nodes if a node ever goes away.
15
+"tags":
16
+- "misc"
17
+- "containers"
18
+- "layer"
19
+"series":
20
+- "xenial"
21
+"requires":
22
+  "certificates":
23
+    "interface": "tls-certificates"
24
+  "kube-api-endpoint":
25
+    "interface": "http"
26
+  "kube-dns":
27
+    "interface": "kube-dns"
28
+  "kube-control":
29
+    "interface": "kube-control"
30
+"provides":
31
+  "nrpe-external-master":
32
+    "interface": "nrpe-external-master"
33
+    "scope": "container"
34
+  "dockerhost":
35
+    "interface": "dockerhost"
36
+    "scope": "container"
37
+  "sdn-plugin":
38
+    "interface": "sdn-plugin"
39
+    "scope": "container"
40
+  "cni":
41
+    "interface": "kubernetes-cni"
42
+    "scope": "container"
43
+"resources":
44
+  "cni":
45
+    "type": "file"
46
+    "filename": "cni.tgz"
47
+    "description": "CNI plugins"
48
+  "kubectl":
49
+    "type": "file"
50
+    "filename": "kubectl.snap"
51
+    "description": "kubectl snap"
52
+  "kubelet":
53
+    "type": "file"
54
+    "filename": "kubelet.snap"
55
+    "description": "kubelet snap"
56
+  "kube-proxy":
57
+    "type": "file"
58
+    "filename": "kube-proxy.snap"
59
+    "description": "kube-proxy snap"
60
+"subordinate": !!bool "false"

metrics.yaml

1
--- 
2
+++ metrics.yaml
3
@@ -0,0 +1,2 @@
4
+metrics:
5
+  juju-units: {}

reactive/cuda.sh

  1
--- 
  2
+++ reactive/cuda.sh
  3
@@ -0,0 +1,258 @@
  4
+#!/bin/bash
  5
+set -ex
  6
+
  7
+source charms.reactive.sh
  8
+
  9
+CUDA_VERSION=$(config-get cuda-version | awk 'BEGIN{FS="-"}{print $1}')
 10
+CUDA_SUB_VERSION=$(config-get cuda-version | awk 'BEGIN{FS="-"}{print $2}')
 11
+SUPPORT_CUDA="$(lspci -nnk | grep -iA2 NVIDIA | wc -l)"
 12
+ROOT_URL="http://developer.download.nvidia.com/compute/cuda/repos"
 13
+
 14
+#####################################################################
 15
+#
 16
+# Basic Functions
 17
+#
 18
+#####################################################################
 19
+
 20
+function bash::lib::get_ubuntu_codename() {
 21
+    lsb_release -a 2>/dev/null | grep Codename | awk '{ print $2 }'
 22
+}
 23
+
 24
+UBUNTU_CODENAME="$(bash::lib::get_ubuntu_codename)"
 25
+
 26
+case "$(arch)" in
 27
+    "x86_64" | "amd64" )
 28
+        ARCH="x86_64"
 29
+    ;;
 30
+    "ppc64le" | "ppc64el" )
 31
+        ARCH="ppc64le"
 32
+    ;;
 33
+    * )
 34
+        juju-log "Your architecture is not supported. Exiting"
 35
+        exit 1
 36
+    ;;
 37
+esac
 38
+
 39
+case "${UBUNTU_CODENAME}" in
 40
+    "trusty" )
 41
+        LXC_CMD="$(running-in-container | grep lxc | wc -l)"
 42
+        UBUNTU_VERSION=ubuntu1404
 43
+    ;;
 44
+    "xenial" )
 45
+        LXC_CMD="$(systemd-detect-virt --container | grep lxc | wc -l)"
 46
+        UBUNTU_VERSION=ubuntu1604
 47
+    ;;
 48
+    * )
 49
+        juju-log "Your version of Ubuntu is not supported. Exiting"
 50
+        exit 1
 51
+    ;;
 52
+esac
 53
+
 54
+#####################################################################
 55
+#
 56
+# Install nvidia driver per architecture
 57
+#
 58
+#####################################################################
 59
+
 60
+function all:all:install_nvidia_driver() {
 61
+
 62
+    apt-get remove -yqq --purge nvidia-* libcuda1-*
 63
+    apt-get install -yqq --no-install-recommends \
 64
+        nvidia-375 \
 65
+        nvidia-375-dev \
 66
+        libcuda1-375
 67
+}
 68
+
 69
+function trusty::x86_64::install_nvidia_driver() {
 70
+    all:all:install_nvidia_driver
 71
+}
 72
+
 73
+function xenial::x86_64::install_nvidia_driver() {
 74
+    all:all:install_nvidia_driver
 75
+}
 76
+
 77
+function trusty::ppc64le::install_nvidia_driver() {
 78
+    bash::lib::log warn "This task is handled by the cuda installer"
 79
+}
 80
+
 81
+function xenial::ppc64le::install_nvidia_driver() {
 82
+    bash::lib::log info "This task is handled by the cuda installer"
 83
+}
 84
+
 85
+#####################################################################
 86
+#
 87
+# Install OpenBlas per architecture
 88
+#
 89
+#####################################################################
 90
+
 91
+function trusty::x86_64::install_openblas() {
 92
+    apt-get update -qq
 93
+    apt-get install -yqq --no-install-recommends \
 94
+        libopenblas-base \
 95
+        libopenblas-dev
 96
+}
 97
+
 98
+function xenial::x86_64::install_openblas() {
 99
+    juju-log "Not planned yet"
100
+    apt-get install -yqq --no-install-recommends \
101
+        libopenblas-base \
102
+        libopenblas-dev
103
+}
104
+
105
+function trusty::ppc64le::install_openblas() {
106
+    [ -d "/mnt/openblas" ] \
107
+        || git clone https://github.com/xianyi/OpenBLAS.git /mnt/openblas \
108
+        && { cd "/mnt/openblas" ; git pull ; cd - ; }
109
+        cd /mnt/openblas
110
+        make && make PREFIX=/usr install
111
+}
112
+
113
+function xenial::ppc64le::install_openblas() {
114
+    apt-get install -yqq --no-install-recommends \
115
+        libopenblas-base \
116
+        libopenblas-dev
117
+}
118
+
119
+#####################################################################
120
+#
121
+# Install CUDA per architecture
122
+#
123
+#####################################################################
124
+
125
+function all::x86_64::install_cuda() {
126
+    INSTALL_PKG="cuda-repo-${UBUNTU_VERSION}_${CUDA_VERSION}-${CUDA_SUB_VERSION}_amd64.deb"
127
+    cd /tmp
128
+    [ -f ${INSTALL_PKG} ] && rm -f ${INSTALL_PKG}
129
+    wget ${ROOT_URL}/${UBUNTU_VERSION}/x86_64/${INSTALL_PKG}
130
+    dpkg -i /tmp/${INSTALL_PKG}
131
+    apt-get update -qq && \
132
+    apt-get install -yqq --allow-downgrades --allow-remove-essential --allow-change-held-packages --no-install-recommends \
133
+        cuda
134
+    rm -f ${INSTALL_PKG}
135
+}
136
+
137
+function trusty::x86_64::install_cuda() {
138
+    all::x86_64::install_cuda
139
+}
140
+
141
+function xenial::x86_64::install_cuda() {
142
+    all::x86_64::install_cuda
143
+}
144
+
145
+function trusty::ppc64le::install_cuda() {
146
+    bash::lib::die This OS is not supported by nVidia for CUDA 8.0. Please upgrade to 16.04
147
+}
148
+
149
+function xenial::ppc64le::install_cuda() {
150
+    wget -c -p /tmp "${ROOT_URL}/${UBUNTU_VERSION}/ppc64el/cuda-repo-${UBUNTU_VERSION}_${CUDA_VERSION}-${CUDA_SUB_VERSION}_ppc64el.deb"
151
+    dpkg -i /tmp/cuda-repo-${UBUNTU_VERSION}_${CUDA_VERSION}-${CUDA_SUB_VERSION}_ppc64el.deb
152
+    apt-get update -qq && \
153
+    apt-get install -yqq --allow-downgrades --allow-remove-essential --allow-change-held-packages --no-install-recommends \
154
+            cuda
155
+}
156
+
157
+#####################################################################
158
+#
159
+# Add CUDA libraries & paths
160
+#
161
+#####################################################################
162
+
163
+function all::all::add_cuda_path() {
164
+    ln -sf "/usr/local/cuda-$CUDA_VERSION" "/usr/local/cuda"
165
+
166
+    # Configuring libraries
167
+    cat > /etc/ld.so.conf.d/cuda.conf << EOF
168
+/usr/local/cuda/lib
169
+/usr/local/cuda/lib64
170
+EOF
171
+
172
+    cat > /etc/ld.so.conf.d/nvidia.conf << EOF
173
+/usr/local/nvidia/lib
174
+/usr/local/nvidia/lib64
175
+EOF
176
+
177
+    ldconfig
178
+
179
+    cat > /etc/profile.d/cuda.sh << EOF
180
+export PATH=/usr/local/cuda/bin:${PATH}
181
+export LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/local/cuda/lib:${LD_LIBRARY_PATH}"
182
+EOF
183
+
184
+    cat > /etc/profile.d/nvidia.sh << EOF
185
+export PATH=/usr/local/nvidia/bin:${PATH}
186
+export LD_LIBRARY_PATH="/usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}"
187
+EOF
188
+
189
+    echo "export PATH=\"/usr/local/cuda/bin:/usr/local/nvidia/bin:${PATH}\"" | tee -a ${HOME}/.bashrc
190
+    echo "export LD_LIBRARY_PATH=\"/usr/local/cuda/lib64:/usr/local/cuda/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64:${LD_LIBRARY_PATH}\"" | tee -a ${HOME}/.bashrc
191
+
192
+    export PATH="/usr/local/cuda/bin:/usr/local/nvidia/bin:${PATH}"
193
+
194
+    # fix "cannot find -lnvcuvid" when linking cuda programs
195
+    # see: https://devtalk.nvidia.com/default/topic/769578/cuda-setup-and-installation/cuda-6-5-cannot-find-lnvcuvid/2
196
+    if [ ! -f /usr/lib/libnvcuvid.so.1 ]; then
197
+        ln -s /usr/lib/nvidia-375/libnvcuvid.so.1 /usr/lib/libnvcuvid.so.1
198
+    fi
199
+    if [ ! -f /usr/lib/libnvcuvid.so ]; then
200
+        ln -s /usr/lib/nvidia-375/libnvcuvid.so /usr/lib/libnvcuvid.so
201
+    fi
202
+}
203
+
204
+@when_not 'cuda.supported'
205
+function check_cuda_support() {
206
+    case "${SUPPORT_CUDA}" in
207
+        "0" )
208
+            juju-log "This instance does not run an nVidia GPU."
209
+        ;;
210
+        * )
211
+            charms.reactive set_state 'cuda.supported'
212
+        ;;
213
+    esac
214
+}
215
+
216
+@when 'cuda.supported'
217
+@when_not 'cuda.installed'
218
+function install_cuda() {
219
+
220
+    INSTALL=$(config-get install-cuda)
221
+    if [ $INSTALL = False ]; then
222
+      juju-log "Skip cuda installation"
223
+      return
224
+    fi
225
+
226
+    apt-get update -qq
227
+    # apt-get upgrade -yqq
228
+    # In any case remove nouveau driver
229
+    apt-get remove -yqq --purge libdrm-nouveau*
230
+    # Here we also need to blacklist nouveau
231
+    apt-get install -yqq --no-install-recommends \
232
+        git \
233
+        curl \
234
+        wget \
235
+        build-essential
236
+
237
+    status-set maintenance "Installing CUDA"
238
+
239
+    # This is a hack as for some reason this package fails
240
+    dpkg --remove --force-remove-reinstreq grub-ieee1275 || juju-log "not installed yet, forcing not to install"
241
+    apt-get -yqq autoremove
242
+
243
+    juju-log "Installing common dependencies"
244
+    # the latest kernel may not have an image-extra package, so this is best-effort
245
+    apt-get install -yqq  linux-image-extra-`uname -r` \
246
+        || juju-log "linux-image-extra-`uname -r` not available. Skipping"
247
+
248
+    # Install driver only on bare metal
249
+    [ "${LXC_CMD}" = "0" ] && \
250
+        ${UBUNTU_CODENAME}::${ARCH}::install_nvidia_driver || \
251
+        juju-log "Running in a container. No need for the nVidia Driver"
252
+
253
+
254
+    ${UBUNTU_CODENAME}::${ARCH}::install_openblas
255
+    ${UBUNTU_CODENAME}::${ARCH}::install_cuda
256
+    all::all::add_cuda_path
257
+
258
+    charms.reactive set_state 'cuda.installed'
259
+}
260
+
261
+reactive_handler_main

reactive/docker.py

  1
--- 
  2
+++ reactive/docker.py
  3
@@ -0,0 +1,379 @@
  4
+import os
  5
+from shlex import split
  6
+from subprocess import check_call
  7
+from subprocess import check_output
  8
+from subprocess import CalledProcessError
  9
+
 10
+from charmhelpers.core import host
 11
+from charmhelpers.core import hookenv
 12
+from charmhelpers.core.hookenv import status_set
 13
+from charmhelpers.core.hookenv import config
 14
+from charmhelpers.core.templating import render
 15
+from charmhelpers.fetch import apt_install
 16
+from charmhelpers.fetch import apt_purge
 17
+from charmhelpers.fetch import apt_update
 18
+from charmhelpers.fetch import filter_installed_packages
 19
+from charmhelpers.contrib.charmsupport import nrpe
 20
+
 21
+from charms.reactive import remove_state
 22
+from charms.reactive import set_state
 23
+from charms.reactive import when
 24
+from charms.reactive import when_any
 25
+from charms.reactive import when_not
 26
+from charms.reactive.helpers import data_changed
 27
+
 28
+from charms.docker import Docker
 29
+from charms.docker import DockerOpts
 30
+
 31
+from charms import layer
 32
+
 33
+# 2 Major events are emitted from this layer.
 34
+#
 35
+# `docker.ready` is an event intended to signal other layers that need to
 36
+# plug into the plumbing to extend the docker daemon, such as firing up a
 37
+# bootstrap docker daemon or doing a pre-dependency fetch + dockeropt rendering.
 38
+#
 39
+# `docker.available` means the docker daemon setup has settled and is prepared
 40
+# to run workloads. This is a broad state that has large implications should
 41
+# you decide to remove it. Production workloads can be lost if no restart flag
 42
+# is provided.
 43
+
 44
+# Be sure you bind to it appropriately in your workload layer and
 45
+# react to the proper event.
 46
+
 47
+
 48
+@when_not('docker.ready')
 49
+def install():
 50
+    ''' Install the docker daemon, and supporting tooling '''
 51
+    # Often when building layer-docker based subordinates, you don't need to
 52
+    # incur the overhead of installing docker. This tuneable layer option
 53
+    # allows you to disable the exec of that install routine, and instead short
 54
+    # circuit immediately to docker.available, so you can charm away!
 55
+    layer_opts = layer.options('docker')
 56
+    if layer_opts['skip-install']:
 57
+        set_state('docker.available')
 58
+        set_state('docker.ready')
 59
+        return
 60
+
 61
+    status_set('maintenance', 'Installing AUFS and other tools.')
 62
+    kernel_release = check_output(['uname', '-r']).rstrip()
 63
+    packages = [
 64
+        'aufs-tools',
 65
+        'git',
 66
+        'linux-image-extra-{0}'.format(kernel_release.decode('utf-8')),
 67
+    ]
 68
+    apt_update()
 69
+    apt_install(packages)
 70
+    # Install docker-engine from apt.
 71
+    if config('install_from_upstream'):
 72
+        install_from_upstream_apt()
 73
+    else:
 74
+        install_from_archive_apt()
 75
+
 76
+    opts = DockerOpts()
 77
+    render('docker.defaults', '/etc/default/docker', {'opts': opts.to_s()})
 78
+    render('docker.systemd', '/lib/systemd/system/docker.service', config())
 79
+    reload_system_daemons()
 80
+
 81
+    hookenv.log('Docker installed, setting "docker.ready" state.')
 82
+    set_state('docker.ready')
 83
+
 84
+    # Make with the adding of the users to the groups
 85
+    check_call(['usermod', '-aG', 'docker', 'ubuntu'])
 86
+
 87
+
 88
+@when('config.changed.install_from_upstream', 'docker.ready')
 89
+def toggle_docker_daemon_source():
 90
+    ''' A disruptive toggleable action which will swap out the installed docker
 91
+    daemon for the configured source. If true, installs the latest available
 92
+    docker from the upstream PPA. Else installs docker from universe. '''
 93
+
 94
+    # this returns a list of packages not currently installed on the system
 95
+    # based on the parameters input. Use this to check if we have taken
 96
+    # prior action against a docker deb package.
 97
+    packages = filter_installed_packages(['docker.io', 'docker-engine'])
 98
+
 99
+    if 'docker.io' in packages and 'docker-engine' in packages:
100
+        # we have not reached installation phase, return until
101
+        # we can reasonably re-test this scenario
102
+        hookenv.log('Neither docker.io nor docker-engine are installed. Noop.')
103
+        return
104
+
105
+    install_ppa = config('install_from_upstream')
106
+
107
+    # Remove the inverse package from what is declared. Only take action if
108
+    # we meet having a package installed.
109
+    if install_ppa and 'docker.io' not in packages:
110
+        host.service_stop('docker')
111
+        hookenv.log('Removing docker.io package.')
112
+        apt_purge('docker.io')
113
+        remove_state('docker.ready')
114
+        remove_state('docker.available')
115
+    elif not install_ppa and 'docker-engine' not in packages:
116
+        host.service_stop('docker')
117
+        hookenv.log('Removing docker-engine package.')
118
+        apt_purge('docker-engine')
119
+        remove_state('docker.ready')
120
+        remove_state('docker.available')
121
+    else:
122
+        hookenv.log('Not touching packages.')
123
+
124
+
125
+@when_any('config.changed.http_proxy', 'config.changed.https_proxy',
126
+          'config.changed.no_proxy')
127
+@when('docker.ready')
128
+def proxy_changed():
129
+    '''The proxy information has changed, render templates and restart the
130
+    docker daemon.'''
131
+    recycle_daemon()
132
+
133
+
134
+def install_from_archive_apt():
135
+    status_set('maintenance', 'Installing docker.io from universe.')
136
+    apt_install(['docker.io'], fatal=True)
137
+
138
+
139
+def install_from_upstream_apt():
140
+    ''' Install docker from the apt repository. This is a python adaptation of
141
+    the shell script found at https://get.docker.com/ '''
142
+    status_set('maintenance', 'Installing docker-engine from upstream PPA.')
143
+    keyserver = 'hkp://p80.pool.sks-keyservers.net:80'
144
+    key = '58118E89F3A912897C070ADBF76221572C52609D'
145
+    # Enter the server and key in the apt-key management tool.
146
+    cmd = 'apt-key adv --keyserver {0} --recv-keys {1}'.format(keyserver, key)
147
+    # "apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80
148
+    # --recv-keys 58118E89F3A912897C070ADBF76221572C52609D"
149
+    check_call(split(cmd))
150
+    # The url to the server that contains the docker apt packages.
151
+    apt_url = 'https://apt.dockerproject.org'
152
+    # Get the package architecture (amd64), not the machine hardware (x86_64)
153
+    arch = check_output(split('dpkg --print-architecture'))
154
+    arch = arch.decode('utf-8').rstrip()
155
+    # Get the lsb information as a dictionary.
156
+    lsb = host.lsb_release()
157
+    # Ubuntu must be lowercased.
158
+    dist = lsb['DISTRIB_ID'].lower()
159
+    # The codename for the release.
160
+    code = lsb['DISTRIB_CODENAME']
161
+    # repo can be: main, testing or experimental
162
+    repo = 'main'
163
+    # deb [arch=amd64] https://apt.dockerproject.org/repo ubuntu-xenial main
164
+    deb = 'deb [arch={0}] {1}/repo {2}-{3} {4}'.format(
165
+            arch, apt_url, dist, code, repo)
166
+    # mkdir -p /etc/apt/sources.list.d
167
+    if not os.path.isdir('/etc/apt/sources.list.d'):
168
+        os.makedirs('/etc/apt/sources.list.d')
169
+    # Write the docker source file to the apt sources.list.d directory.
170
+    with(open('/etc/apt/sources.list.d/docker.list', 'w+')) as stream:
171
+        stream.write(deb)
172
+    apt_update(fatal=True)
173
+    # apt-get install -y -q docker-engine
174
+    apt_install(['docker-engine'], fatal=True)
175
+
176
+
177
+@when('docker.ready')
178
+@when_not('cgroups.modified')
179
+def enable_grub_cgroups():
180
+    cfg = config()
181
+    if cfg.get('enable-cgroups'):
182
+        hookenv.log('Calling enable_grub_cgroups.sh and rebooting machine.')
183
+        check_call(['scripts/enable_grub_cgroups.sh'])
184
+        set_state('cgroups.modified')
185
+
186
+
187
+@when('docker.ready')
188
+@when_not('docker.available')
189
+def signal_workloads_start():
190
+    ''' Signal to higher layers the container runtime is ready to run
191
+        workloads. At this time the only reasonable thing we can do
192
+        is determine if the container runtime is active. '''
193
+
194
+    # before we switch to active, probe the runtime to determine if
195
+    # it is available for workloads, assuming a response from the daemon
196
+    # to be sufficient
197
+
198
+    if not _probe_runtime_availability():
199
+        status_set('waiting', 'Container runtime not available.')
200
+        return
201
+
202
+    status_set('active', 'Container runtime available.')
203
+    set_state('docker.available')
204
+
205
+
206
+@when('sdn-plugin.available', 'docker.available')
207
+def container_sdn_setup(sdn):
208
+    ''' Receive the information from the SDN plugin, and render the docker
209
+    engine options. '''
210
+    sdn_config = sdn.get_sdn_config()
211
+    bind_ip = sdn_config['subnet']
212
+    mtu = sdn_config['mtu']
213
+    if data_changed('bip', bind_ip) or data_changed('mtu', mtu):
214
+        status_set('maintenance', 'Configuring container runtime with SDN.')
215
+        opts = DockerOpts()
216
+        # This is a great way to misconfigure a docker daemon. Remove the
217
+        # existing bind ip and mtu values of the SDN
218
+        if opts.exists('bip'):
219
+            opts.pop('bip')
220
+        if opts.exists('mtu'):
221
+            opts.pop('mtu')
222
+        opts.add('bip', bind_ip)
223
+        opts.add('mtu', mtu)
224
+        _remove_docker_network_bridge()
225
+        set_state('docker.sdn.configured')
226
+
227
+
228
+@when_not('sdn-plugin.available')
229
+@when('docker.sdn.configured')
230
+def scrub_sdn_config():
231
+    ''' If this scenario of states is true, we have likely broken a
232
+    relationship to our once configured SDN provider. This necessitates a
233
+    cleanup of the Docker Options for BIP and MTU of the presumed dead SDN
234
+    interface. '''
235
+
236
+    opts = DockerOpts()
237
+    try:
238
+        opts.pop('bip')
239
+    except KeyError:
240
+        hookenv.log('Unable to locate bip in Docker config.')
241
+        hookenv.log('Assuming no action required.')
242
+
243
+    try:
244
+        opts.pop('mtu')
245
+    except KeyError:
246
+        hookenv.log('Unable to locate mtu in Docker config.')
247
+        hookenv.log('Assuming no action required.')
248
+
249
+    # This method does everything we need to ensure the bridge configuration
250
+    # has been removed. Restarting the daemon restores docker with its default
251
+    # networking mode.
252
+    _remove_docker_network_bridge()
253
+    recycle_daemon()
254
+    remove_state('docker.sdn.configured')
255
+
256
+
257
+@when('docker.restart')
258
+def docker_restart():
259
+    '''Other layers should be able to trigger a daemon restart. Invoke the
260
+    method that recycles the docker daemon.'''
261
+    recycle_daemon()
262
+    remove_state('docker.restart')
263
+
264
+
265
+@when('config.changed.docker-opts', 'docker.ready')
266
+def docker_template_update():
267
+    ''' The user has passed configuration that directly affects our running
268
+    docker engine instance. Re-render the systemd files and recycle the
269
+    service. '''
270
+    recycle_daemon()
271
+
272
+
273
+@when('docker.ready', 'dockerhost.connected')
274
+@when_not('dockerhost.configured')
275
+def dockerhost_connected(dockerhost):
276
+    '''Transmits the docker url to any subordinates requiring it'''
277
+    dockerhost.configure(Docker().socket)
278
+
279
+
280
+@when('nrpe-external-master.available')
281
+@when_not('nrpe-external-master.docker.initial-config')
282
+def initial_nrpe_config(nagios=None):
283
+    set_state('nrpe-external-master.docker.initial-config')
284
+    update_nrpe_config(nagios)
285
+
286
+
287
+@when('docker.ready')
288
+@when('nrpe-external-master.available')
289
+@when_any('config.changed.nagios_context',
290
+          'config.changed.nagios_servicegroups')
291
+def update_nrpe_config(unused=None):
292
+    # List of systemd services that will be checked
293
+    services = ('docker',)
294
+
295
+    # The current nrpe-external-master interface doesn't handle a lot of logic,
296
+    # use the charm-helpers code for now.
297
+    hostname = nrpe.get_nagios_hostname()
298
+    current_unit = nrpe.get_nagios_unit_name()
299
+    nrpe_setup = nrpe.NRPE(hostname=hostname)
300
+    nrpe.add_init_service_checks(nrpe_setup, services, current_unit)
301
+    nrpe_setup.write()
302
+
303
+
304
+@when_not('nrpe-external-master.available')
305
+@when('nrpe-external-master.docker.initial-config')
306
+def remove_nrpe_config(nagios=None):
307
+    remove_state('nrpe-external-master.docker.initial-config')
308
+
309
+    # List of systemd services for which the checks will be removed
310
+    services = ('docker',)
311
+
312
+    # The current nrpe-external-master interface doesn't handle a lot of logic,
313
+    # use the charm-helpers code for now.
314
+    hostname = nrpe.get_nagios_hostname()
315
+    nrpe_setup = nrpe.NRPE(hostname=hostname, primary=False)
316
+
317
+    for service in services:
318
+        nrpe_setup.remove_check(shortname=service)
319
+
320
+
321
+def recycle_daemon():
322
+    '''Render the docker template files and restart the docker daemon on this
323
+    system.'''
324
+    hookenv.log('Restarting docker service.')
325
+
326
+    # Re-render our docker daemon template at this time... because we're
327
+    # restarting. And it's nice to play nice with others. Isn't that nice?
328
+    opts = DockerOpts()
329
+    render('docker.defaults', '/etc/default/docker',
330
+           {'opts': opts.to_s(), 'manual': config('docker-opts')})
331
+    render('docker.systemd', '/lib/systemd/system/docker.service', config())
332
+    reload_system_daemons()
333
+    host.service_restart('docker')
334
+
335
+    if not _probe_runtime_availability():
336
+        status_set('waiting', 'Container runtime not available.')
337
+        return
338
+
339
+
340
+def reload_system_daemons():
341
+    ''' Reload the system daemons from on-disk configuration changes '''
342
+    hookenv.log('Reloading system daemons.')
343
+    lsb = host.lsb_release()
344
+    code = lsb['DISTRIB_CODENAME']
345
+    if code != 'trusty':
346
+        command = ['systemctl', 'daemon-reload']
347
+        check_call(command)
348
+    else:
349
+        host.service_reload('docker')
350
+
351
+
352
+def _probe_runtime_availability():
353
+    ''' Determine if the workload daemon is active and responding '''
354
+    try:
355
+        cmd = ['docker', 'info']
356
+        check_call(cmd)
357
+        return True
358
+    except CalledProcessError:
359
+        # Remove the availability state if we fail reachability
360
+        remove_state('docker.available')
361
+        return False
362
+
363
+
364
+def _remove_docker_network_bridge():
365
+    ''' By default docker uses the docker0 bridge for container networking.
366
+    This method removes the default docker bridge, and reconfigures the
367
+    DOCKER_OPTS to use the SDN networking bridge. '''
368
+    status_set('maintenance',
369
+               'Reconfiguring container runtime network bridge.')
370
+    host.service_stop('docker')
371
+    apt_install(['bridge-utils'], fatal=True)
372
+    # cmd = "ifconfig docker0 down"
373
+    # ifconfig doesn't always work. use native linux networking commands to
374
+    # mark the bridge as inactive.
375
+    cmd = ['ip', 'link', 'set', 'docker0', 'down']
376
+    check_call(cmd)
377
+
378
+    cmd = ['brctl', 'delbr', 'docker0']
379
+    check_call(cmd)
380
+
381
+    # Render the config and restart docker.
382
+    recycle_daemon()
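
A hedged sketch of how a higher layer (this worker charm, for instance) typically reacts to the two events described at the top of reactive/docker.py; the handler names are illustrative:

    from charmhelpers.core import hookenv
    from charms.reactive import when

    @when('docker.ready')
    def extend_daemon_config():
        # Safe point to add DockerOpts entries or render extra daemon configuration.
        hookenv.log('docker installed; layering additional daemon configuration')

    @when('docker.available')
    def start_workloads():
        # The daemon has settled and answered `docker info`; workloads can start.
        hookenv.log('docker available; scheduling workloads')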

reactive/kubernetes_worker.py

  1
--- 
  2
+++ reactive/kubernetes_worker.py
  3
@@ -0,0 +1,816 @@
  4
+#!/usr/bin/env python
  5
+
  6
+# Copyright 2015 The Kubernetes Authors.
  7
+#
  8
+# Licensed under the Apache License, Version 2.0 (the "License");
  9
+# you may not use this file except in compliance with the License.
 10
+# You may obtain a copy of the License at
 11
+#
 12
+#     http://www.apache.org/licenses/LICENSE-2.0
 13
+#
 14
+# Unless required by applicable law or agreed to in writing, software
 15
+# distributed under the License is distributed on an "AS IS" BASIS,
 16
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 17
+# See the License for the specific language governing permissions and
 18
+# limitations under the License.
 19
+
 20
+import os
 21
+import random
 22
+import shutil
 23
+
 24
+from shlex import split
 25
+from subprocess import check_call, check_output
 26
+from subprocess import CalledProcessError
 27
+from socket import gethostname
 28
+
 29
+from charms import layer
 30
+from charms.layer import snap
 31
+from charms.reactive import hook
 32
+from charms.reactive import set_state, remove_state, is_state
 33
+from charms.reactive import when, when_any, when_not
 34
+
 35
+from charms.kubernetes.common import get_version
 36
+from charms.kubernetes.flagmanager import FlagManager
 37
+
 38
+from charms.reactive.helpers import data_changed, any_file_changed
 39
+from charms.templating.jinja2 import render
 40
+
 41
+from charmhelpers.core import hookenv, unitdata
 42
+from charmhelpers.core.host import service_stop, service_restart
 43
+from charmhelpers.contrib.charmsupport import nrpe
 44
+
 45
+# Override the default nagios shortname regex to allow periods, which we
 46
+# need because our bin names contain them (e.g. 'snap.foo.daemon'). The
 47
+# default regex in charmhelpers doesn't allow periods, but nagios itself does.
 48
+nrpe.Check.shortname_re = '[\.A-Za-z0-9-_]+$'
 49
+
 50
+kubeconfig_path = '/root/cdk/kubeconfig'
 51
+
 52
+os.environ['PATH'] += os.pathsep + os.path.join(os.sep, 'snap', 'bin')
 53
+
 54
+db = unitdata.kv()
 55
+
 56
+
 57
+@hook('upgrade-charm')
 58
+def upgrade_charm():
 59
+    # Trigger removal of PPA docker installation if it was previously set.
 60
+    set_state('config.changed.install_from_upstream')
 61
+    hookenv.atexit(remove_state, 'config.changed.install_from_upstream')
 62
+
 63
+    cleanup_pre_snap_services()
 64
+    check_resources_for_upgrade_needed()
 65
+
 66
+    # Remove gpu.enabled state so we can reconfigure gpu-related kubelet flags,
 67
+    # since they can differ between k8s versions
 68
+    remove_state('kubernetes-worker.gpu.enabled')
 69
+    kubelet_opts = FlagManager('kubelet')
 70
+    kubelet_opts.destroy('feature-gates')
 71
+    kubelet_opts.destroy('experimental-nvidia-gpus')
 72
+
 73
+    remove_state('kubernetes-worker.cni-plugins.installed')
 74
+    remove_state('kubernetes-worker.config.created')
 75
+    remove_state('kubernetes-worker.ingress.available')
 76
+    set_state('kubernetes-worker.restart-needed')
 77
+
 78
+
 79
+def check_resources_for_upgrade_needed():
 80
+    hookenv.status_set('maintenance', 'Checking resources')
 81
+    resources = ['kubectl', 'kubelet', 'kube-proxy']
 82
+    paths = [hookenv.resource_get(resource) for resource in resources]
 83
+    if any_file_changed(paths):
 84
+        set_upgrade_needed()
 85
+
 86
+
 87
+def set_upgrade_needed():
 88
+    set_state('kubernetes-worker.snaps.upgrade-needed')
 89
+    config = hookenv.config()
 90
+    previous_channel = config.previous('channel')
 91
+    require_manual = config.get('require-manual-upgrade')
 92
+    if previous_channel is None or not require_manual:
 93
+        set_state('kubernetes-worker.snaps.upgrade-specified')
 94
+
 95
+
 96
+def cleanup_pre_snap_services():
 97
+    # remove old states
 98
+    remove_state('kubernetes-worker.components.installed')
 99
+
100
+    # disable old services
101
+    services = ['kubelet', 'kube-proxy']
102
+    for service in services:
103
+        hookenv.log('Stopping {0} service.'.format(service))
104
+        service_stop(service)
105
+
106
+    # cleanup old files
107
+    files = [
108
+        "/lib/systemd/system/kubelet.service",
109
+        "/lib/systemd/system/kube-proxy.service",
110
+        "/etc/default/kube-default",
111
+        "/etc/default/kubelet",
112
+        "/etc/default/kube-proxy",
113
+        "/srv/kubernetes",
114
+        "/usr/local/bin/kubectl",
115
+        "/usr/local/bin/kubelet",
116
+        "/usr/local/bin/kube-proxy",
117
+        "/etc/kubernetes"
118
+    ]
119
+    for file in files:
120
+        if os.path.isdir(file):
121
+            hookenv.log("Removing directory: " + file)
122
+            shutil.rmtree(file)
123
+        elif os.path.isfile(file):
124
+            hookenv.log("Removing file: " + file)
125
+            os.remove(file)
126
+
127
+    # cleanup old flagmanagers
128
+    FlagManager('kubelet').destroy_all()
129
+    FlagManager('kube-proxy').destroy_all()
130
+
131
+
132
+@when('config.changed.channel')
133
+def channel_changed():
134
+    set_upgrade_needed()
135
+
136
+
137
+@when('kubernetes-worker.snaps.upgrade-needed')
138
+@when_not('kubernetes-worker.snaps.upgrade-specified')
139
+def upgrade_needed_status():
140
+    msg = 'Needs manual upgrade, run the upgrade action'
141
+    hookenv.status_set('blocked', msg)
142
+
143
+
144
+@when('kubernetes-worker.snaps.upgrade-specified')
145
+def install_snaps():
146
+    check_resources_for_upgrade_needed()
147
+    channel = hookenv.config('channel')
148
+    hookenv.status_set('maintenance', 'Installing kubectl snap')
149
+    snap.install('kubectl', channel=channel, classic=True)
150
+    hookenv.status_set('maintenance', 'Installing kubelet snap')
151
+    snap.install('kubelet', channel=channel, classic=True)
152
+    hookenv.status_set('maintenance', 'Installing kube-proxy snap')
153
+    snap.install('kube-proxy', channel=channel, classic=True)
154
+    set_state('kubernetes-worker.snaps.installed')
155
+    remove_state('kubernetes-worker.snaps.upgrade-needed')
156
+    remove_state('kubernetes-worker.snaps.upgrade-specified')
157
+
158
+
159
+@hook('stop')
160
+def shutdown():
161
+    ''' When this unit is destroyed:
162
+        - delete the current node
163
+        - stop the kubelet service
164
+        - stop the kube-proxy service
165
+        - remove the 'kubernetes-worker.cni-plugins.installed' state
166
+    '''
167
+    if os.path.isfile(kubeconfig_path):
168
+        kubectl('delete', 'node', gethostname())
169
+    service_stop('kubelet')
170
+    service_stop('kube-proxy')
171
+    remove_state('kubernetes-worker.cni-plugins.installed')
172
+
173
+
174
+@when('docker.available')
175
+@when_not('kubernetes-worker.cni-plugins.installed')
176
+def install_cni_plugins():
177
+    ''' Unpack the cni-plugins resource '''
178
+    charm_dir = os.getenv('CHARM_DIR')
179
+
180
+    # Get the resource via resource_get
181
+    try:
182
+        archive = hookenv.resource_get('cni')
183
+    except Exception:
184
+        message = 'Error fetching the cni resource.'
185
+        hookenv.log(message)
186
+        hookenv.status_set('blocked', message)
187
+        return
188
+
189
+    if not archive:
190
+        hookenv.log('Missing cni resource.')
191
+        hookenv.status_set('blocked', 'Missing cni resource.')
192
+        return
193
+
194
+    # Handle null resource publication; we check if filesize < 1MB
195
+    filesize = os.stat(archive).st_size
196
+    if filesize < 1000000:
197
+        hookenv.status_set('blocked', 'Incomplete cni resource.')
198
+        return
199
+
200
+    hookenv.status_set('maintenance', 'Unpacking cni resource.')
201
+
202
+    unpack_path = '{}/files/cni'.format(charm_dir)
203
+    os.makedirs(unpack_path, exist_ok=True)
204
+    cmd = ['tar', 'xfvz', archive, '-C', unpack_path]
205
+    hookenv.log(cmd)
206
+    check_call(cmd)
207
+
208
+    apps = [
209
+        {'name': 'loopback', 'path': '/opt/cni/bin'}
210
+    ]
211
+
212
+    for app in apps:
213
+        unpacked = '{}/{}'.format(unpack_path, app['name'])
214
+        app_path = os.path.join(app['path'], app['name'])
215
+        install = ['install', '-v', '-D', unpacked, app_path]
216
+        hookenv.log(install)
217
+        check_call(install)
218
+
219
+    # Used by the "registry" action. The action is run on a single worker, but
220
+    # the registry pod can end up on any worker, so we need this directory on
221
+    # all the workers.
222
+    os.makedirs('/srv/registry', exist_ok=True)
223
+
224
+    set_state('kubernetes-worker.cni-plugins.installed')
225
+
226
+
227
+@when('kubernetes-worker.snaps.installed')
228
+def set_app_version():
229
+    ''' Declare the application version to juju '''
230
+    cmd = ['kubelet', '--version']
231
+    version = check_output(cmd)
232
+    hookenv.application_version_set(version.split(b' v')[-1].rstrip())
233
+
234
+
235
+@when('kubernetes-worker.snaps.installed')
236
+@when_not('kube-control.dns.available')
237
+def notify_user_transient_status():
238
+    ''' Notify to the user we are in a transient state and the application
239
+    is still converging. The delay may be on a remote unit, or we may be in a
240
+    detached wait-loop state. '''
241
+
242
+    # During deployment the worker has to start kubelet without cluster dns
243
+    # configured. If this is the first unit online in a service pool waiting
244
+    # to self host the dns pod, and configure itself to query the dns service
245
+    # declared in the kube-system namespace
246
+
247
+    hookenv.status_set('waiting', 'Waiting for cluster DNS.')
248
+
249
+
250
+@when('kubernetes-worker.snaps.installed',
251
+      'kube-control.dns.available')
252
+@when_not('kubernetes-worker.snaps.upgrade-needed')
253
+def charm_status(kube_control):
254
+    '''Update the status message with the current status of kubelet.'''
255
+    update_kubelet_status()
256
+
257
+
258
+def update_kubelet_status():
259
+    ''' There are different states that the kubelet can be in, where we are
260
+    waiting for dns, waiting for cluster turnup, or ready to serve
261
+    applications.'''
262
+    if (_systemctl_is_active('snap.kubelet.daemon')):
263
+        hookenv.status_set('active', 'Kubernetes worker running.')
264
+    # if kubelet is not running, we're waiting on something else to converge
265
+    elif (not _systemctl_is_active('snap.kubelet.daemon')):
266
+        hookenv.status_set('waiting', 'Waiting for kubelet to start.')
267
+
268
+
269
+@when('certificates.available')
270
+def send_data(tls):
271
+    '''Send the data that is required to create a server certificate for
272
+    this server.'''
273
+    # Use the public ip of this unit as the Common Name for the certificate.
274
+    common_name = hookenv.unit_public_ip()
275
+
276
+    # Create SANs that the tls layer will add to the server cert.
277
+    sans = [
278
+        hookenv.unit_public_ip(),
279
+        hookenv.unit_private_ip(),
280
+        gethostname()
281
+    ]
282
+
283
+    # Create a path safe name by removing path characters from the unit name.
284
+    certificate_name = hookenv.local_unit().replace('/', '_')
285
+
286
+    # Request a server cert with this information.
287
+    tls.request_server_cert(common_name, sans, certificate_name)
288
+
289
+
290
+@when('kube-api-endpoint.available', 'kube-control.dns.available',
291
+      'cni.available')
292
+def watch_for_changes(kube_api, kube_control, cni):
293
+    ''' Watch for configuration changes and signal if we need to restart the
294
+    worker services '''
295
+    servers = get_kube_api_servers(kube_api)
296
+    dns = kube_control.get_dns()
297
+    cluster_cidr = cni.get_config()['cidr']
298
+
299
+    if (data_changed('kube-api-servers', servers) or
300
+            data_changed('kube-dns', dns) or
301
+            data_changed('cluster-cidr', cluster_cidr)):
302
+
303
+        set_state('kubernetes-worker.restart-needed')
304
+
305
+
306
+@when('kubernetes-worker.snaps.installed', 'kube-api-endpoint.available',
307
+      'tls_client.ca.saved', 'tls_client.client.certificate.saved',
308
+      'tls_client.client.key.saved', 'tls_client.server.certificate.saved',
309
+      'tls_client.server.key.saved', 'kube-control.dns.available',
310
+      'cni.available', 'kubernetes-worker.restart-needed')
311
+def start_worker(kube_api, kube_control, cni):
312
+    ''' Start kubelet using the provided API and DNS info.'''
313
+    servers = get_kube_api_servers(kube_api)
314
+    # Note that the DNS server doesn't necessarily exist at this point. We know
315
+    # what its IP will eventually be, though, so we can go ahead and configure
316
+    # kubelet with that info. This ensures that early pods are configured with
317
+    # the correct DNS even though the server isn't ready yet.
318
+
319
+    dns = kube_control.get_dns()
320
+    cluster_cidr = cni.get_config()['cidr']
321
+
322
+    if cluster_cidr is None:
323
+        hookenv.log('Waiting for cluster cidr.')
324
+        return
325
+
326
+    # set --allow-privileged flag for kubelet
327
+    set_privileged()
328
+
329
+    create_config(random.choice(servers))
330
+    configure_worker_services(servers, dns, cluster_cidr)
331
+    set_state('kubernetes-worker.config.created')
332
+    restart_unit_services()
333
+    update_kubelet_status()
334
+    remove_state('kubernetes-worker.restart-needed')
335
+
336
+
337
+@when('cni.connected')
338
+@when_not('cni.configured')
339
+def configure_cni(cni):
340
+    ''' Set worker configuration on the CNI relation. This lets the CNI
341
+    subordinate know that we're the worker so it can respond accordingly. '''
342
+    cni.set_config(is_master=False, kubeconfig_path=kubeconfig_path)
343
+
344
+
345
+@when('config.changed.ingress')
346
+def toggle_ingress_state():
347
+    ''' Ingress is a toggled state. Remove kubernetes-worker.ingress.available
348
+    when the ingress config changes so the handler re-evaluates it. '''
349
+    remove_state('kubernetes-worker.ingress.available')
350
+
351
+
352
+@when('docker.sdn.configured')
353
+def sdn_changed():
354
+    '''The Software Defined Network changed on the container so restart the
355
+    kubernetes services.'''
356
+    restart_unit_services()
357
+    update_kubelet_status()
358
+    remove_state('docker.sdn.configured')
359
+
360
+
361
+@when('kubernetes-worker.config.created')
362
+@when_not('kubernetes-worker.ingress.available')
363
+def render_and_launch_ingress():
364
+    ''' If configuration has ingress RC enabled, launch the ingress load
365
+    balancer and default http backend. Otherwise attempt deletion. '''
366
+    config = hookenv.config()
367
+    # If ingress is enabled, launch the ingress controller
368
+    if config.get('ingress'):
369
+        launch_default_ingress_controller()
370
+    else:
371
+        hookenv.log('Deleting the http backend and ingress.')
372
+        kubectl_manifest('delete',
373
+                         '/root/cdk/addons/default-http-backend.yaml')
374
+        kubectl_manifest('delete',
375
+                         '/root/cdk/addons/ingress-replication-controller.yaml')  # noqa
376
+        hookenv.close_port(80)
377
+        hookenv.close_port(443)
378
+
379
+
380
+@when('kubernetes-worker.ingress.available')
381
+def scale_ingress_controller():
382
+    ''' Scale the number of ingress controller replicas to match the number of
383
+    nodes. '''
384
+    try:
385
+        output = kubectl('get', 'nodes', '-o', 'name')
386
+        count = len(output.splitlines())
387
+        kubectl('scale', '--replicas=%d' % count, 'rc/nginx-ingress-controller')  # noqa
388
+    except CalledProcessError:
389
+        hookenv.log('Failed to scale ingress controllers. Will attempt again next update.')  # noqa
390
+
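The replica count above is simply the number of lines `kubectl get nodes -o name` prints, one per node. For example (the node names are illustrative):

    # check_output() returns bytes; splitlines() works on bytes as well.
    output = b'node/worker-0\nnode/worker-1\nnode/worker-2\n'
    count = len(output.splitlines())
    print('--replicas=%d' % count)  # --replicas=3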
391
+
392
+@when('config.changed.labels', 'kubernetes-worker.config.created')
393
+def apply_node_labels():
394
+    ''' Parse the labels configuration option and apply the labels to the node.
395
+    '''
396
+    # scrub and try to format an array from the configuration option
397
+    config = hookenv.config()
398
+    user_labels = _parse_labels(config.get('labels'))
399
+
400
+    # For diffing's sake, iterate over the previous label set
401
+    if config.previous('labels'):
402
+        previous_labels = _parse_labels(config.previous('labels'))
403
+        hookenv.log('previous labels: {}'.format(previous_labels))
404
+    else:
405
+        # This handles the first run, when there is no previous labels config.
406
+        previous_labels = _parse_labels("")
407
+
408
+    # Calculate label removal
409
+    for label in previous_labels:
410
+        if label not in user_labels:
411
+            hookenv.log('Deleting node label {}'.format(label))
412
+            try:
413
+                _apply_node_label(label, delete=True)
414
+            except CalledProcessError:
415
+                hookenv.log('Error removing node label {}'.format(label))
416
+        # if the label is in user labels we do nothing here, it will get set
417
+        # during the atomic update below.
418
+
419
+    # Atomically set a label
420
+    for label in user_labels:
421
+        _apply_node_label(label)
422
+
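A worked example of the diff above: with a previous labels config of "gpu=true zone=a" and a new config of "zone=a", only the stale gpu=true label is removed, and zone=a is (re)applied:

    previous_labels = ['gpu=true', 'zone=a']
    user_labels = ['zone=a']

    stale = [label for label in previous_labels if label not in user_labels]
    print(stale)        # ['gpu=true'] -> deleted from the node
    print(user_labels)  # ['zone=a']   -> applied (and re-applied) to the node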
423
+
424
+def arch():
425
+    '''Return the package architecture as a string. Raise an exception if the
426
+    architecture is not supported by kubernetes.'''
427
+    # Get the package architecture for this system.
428
+    architecture = check_output(['dpkg', '--print-architecture']).rstrip()
429
+    # Convert the binary result into a string.
430
+    architecture = architecture.decode('utf-8')
431
+    return architecture
432
+
433
+
434
+def create_config(server):
435
+    '''Create a kubernetes configuration for the worker unit.'''
436
+    # Get the options from the tls-client layer.
437
+    layer_options = layer.options('tls-client')
438
+    # Get all the paths to the tls information required for kubeconfig.
439
+    ca = layer_options.get('ca_certificate_path')
440
+    key = layer_options.get('client_key_path')
441
+    cert = layer_options.get('client_certificate_path')
442
+
443
+    # Create kubernetes configuration in the default location for ubuntu.
444
+    create_kubeconfig('/home/ubuntu/.kube/config', server, ca, key, cert,
445
+                      user='ubuntu')
446
+    # Make the config dir readable by the ubuntu user so juju scp works.
447
+    cmd = ['chown', '-R', 'ubuntu:ubuntu', '/home/ubuntu/.kube']
448
+    check_call(cmd)
449
+    # Create kubernetes configuration in the default location for root.
450
+    create_kubeconfig('/root/.kube/config', server, ca, key, cert,
451
+                      user='root')
452
+    # Create kubernetes configuration for the kubelet and kube-proxy services.
453
+    create_kubeconfig(kubeconfig_path, server, ca, key, cert,
454
+                      user='kubelet')
455
+
456
+
457
+def configure_worker_services(api_servers, dns, cluster_cidr):
458
+    ''' Add remaining flags for the worker services and configure snaps to use
459
+    them '''
460
+    layer_options = layer.options('tls-client')
461
+    ca_cert_path = layer_options.get('ca_certificate_path')
462
+    server_cert_path = layer_options.get('server_certificate_path')
463
+    server_key_path = layer_options.get('server_key_path')
464
+
465
+    kubelet_opts = FlagManager('kubelet')
466
+    kubelet_opts.add('require-kubeconfig', 'true')
467
+    kubelet_opts.add('kubeconfig', kubeconfig_path)
468
+    kubelet_opts.add('network-plugin', 'cni')
469
+    kubelet_opts.add('logtostderr', 'true')
470
+    kubelet_opts.add('v', '0')
471
+    kubelet_opts.add('address', '0.0.0.0')
472
+    kubelet_opts.add('port', '10250')
473
+    kubelet_opts.add('cluster-dns', dns['sdn-ip'])
474
+    kubelet_opts.add('cluster-domain', dns['domain'])
475
+    kubelet_opts.add('anonymous-auth', 'false')
476
+    kubelet_opts.add('client-ca-file', ca_cert_path)
477
+    kubelet_opts.add('tls-cert-file', server_cert_path)
478
+    kubelet_opts.add('tls-private-key-file', server_key_path)
479
+
480
+    kube_proxy_opts = FlagManager('kube-proxy')
481
+    kube_proxy_opts.add('cluster-cidr', cluster_cidr)
482
+    kube_proxy_opts.add('kubeconfig', kubeconfig_path)
483
+    kube_proxy_opts.add('logtostderr', 'true')
484
+    kube_proxy_opts.add('v', '0')
485
+    kube_proxy_opts.add('master', random.choice(api_servers), strict=True)
486
+
487
+    cmd = ['snap', 'set', 'kubelet'] + kubelet_opts.to_s().split(' ')
488
+    check_call(cmd)
489
+    cmd = ['snap', 'set', 'kube-proxy'] + kube_proxy_opts.to_s().split(' ')
490
+    check_call(cmd)
491
+
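The two check_call() invocations above hand the accumulated flags to `snap set`. Assuming FlagManager.to_s() renders its flags as space-separated key=value pairs (an assumption; that helper is defined outside this diff), the kubelet command would look roughly like this sketch:

    # Illustrative only: approximate command built for `snap set kubelet ...`,
    # using a handful of example flag values.
    flags = [
        ('kubeconfig', '/root/cdk/kubeconfig'),   # placeholder kubeconfig_path
        ('network-plugin', 'cni'),
        ('cluster-dns', '10.152.183.10'),         # example sdn-ip from kube-control
        ('cluster-domain', 'cluster.local'),
    ]
    cmd = ['snap', 'set', 'kubelet'] + ['{}={}'.format(k, v) for k, v in flags]
    print(' '.join(cmd))
    # snap set kubelet kubeconfig=/root/cdk/kubeconfig network-plugin=cni ...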
492
+
493
+def create_kubeconfig(kubeconfig, server, ca, key, certificate, user='ubuntu',
494
+                      context='juju-context', cluster='juju-cluster'):
495
+    '''Create a Kubernetes configuration file at the given path using the
496
+    supplied values for the Kubernetes server, CA, key, certificate, user,
497
+    context and cluster.'''
498
+    # Create the config file with the address of the master server.
499
+    cmd = 'kubectl config --kubeconfig={0} set-cluster {1} ' \
500
+          '--server={2} --certificate-authority={3} --embed-certs=true'
501
+    check_call(split(cmd.format(kubeconfig, cluster, server, ca)))
502
+    # Create the credentials using the client flags.
503
+    cmd = 'kubectl config --kubeconfig={0} set-credentials {1} ' \
504
+          '--client-key={2} --client-certificate={3} --embed-certs=true'
505
+    check_call(split(cmd.format(kubeconfig, user, key, certificate)))
506
+    # Create a default context with the cluster.
507
+    cmd = 'kubectl config --kubeconfig={0} set-context {1} ' \
508
+          '--cluster={2} --user={3}'
509
+    check_call(split(cmd.format(kubeconfig, context, cluster, user)))
510
+    # Make the config use this new context.
511
+    cmd = 'kubectl config --kubeconfig={0} use-context {1}'
512
+    check_call(split(cmd.format(kubeconfig, context)))
513
+
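For reference, a call such as create_kubeconfig('/root/cdk/kubeconfig', 'https://10.0.0.1:6443', ca, key, cert, user='kubelet') expands into four kubectl invocations of the following shape (server address and file paths are placeholders):

    # Illustrative only: the commands create_kubeconfig() runs, with example values.
    kubeconfig = '/root/cdk/kubeconfig'
    commands = [
        'kubectl config --kubeconfig={} set-cluster juju-cluster '
        '--server=https://10.0.0.1:6443 '
        '--certificate-authority=/root/cdk/ca.crt '
        '--embed-certs=true'.format(kubeconfig),
        'kubectl config --kubeconfig={} set-credentials kubelet '
        '--client-key=/root/cdk/client.key '
        '--client-certificate=/root/cdk/client.crt '
        '--embed-certs=true'.format(kubeconfig),
        'kubectl config --kubeconfig={} set-context juju-context '
        '--cluster=juju-cluster --user=kubelet'.format(kubeconfig),
        'kubectl config --kubeconfig={} use-context juju-context'.format(kubeconfig),
    ]
    for command in commands:
        print(command)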
514
+
515
+def launch_default_ingress_controller():
516
+    ''' Launch the Kubernetes ingress controller & default backend (404) '''
517
+    context = {}
518
+    context['arch'] = arch()
519
+    addon_path = '/root/cdk/addons/{}'
520
+
521
+    # Render the default http backend (404) replication controller manifest
522
+    manifest = addon_path.format('default-http-backend.yaml')
523
+    render('default-http-backend.yaml', manifest, context)
524
+    hookenv.log('Creating the default http backend.')
525
+    try:
526
+        kubectl('apply', '-f', manifest)
527
+    except CalledProcessError as e:
528
+        hookenv.log(e)
529
+        hookenv.log('Failed to create default-http-backend. Will attempt again next update.')  # noqa
530
+        hookenv.close_port(80)
531
+        hookenv.close_port(443)
532
+        return
533
+
534
+    # Render the ingress replication controller manifest
535
+    manifest = addon_path.format('ingress-replication-controller.yaml')
536
+    render('ingress-replication-controller.yaml', manifest, context)
537
+    hookenv.log('Creating the ingress replication controller.')
538
+    try:
539
+        kubectl('apply', '-f', manifest)
540
+    except CalledProcessError as e:
541
+        hookenv.log(e)
542
+        hookenv.log('Failed to create ingress controller. Will attempt again next update.')  # noqa
543
+        hookenv.close_port(80)
544
+        hookenv.close_port(443)
545
+        return
546
+
547
+    set_state('kubernetes-worker.ingress.available')
548
+    hookenv.open_port(80)
549
+    hookenv.open_port(443)
550
+
551
+
552
+def restart_unit_services():
553
+    '''Restart worker services.'''
554
+    hookenv.log('Restarting kubelet and kube-proxy.')
555
+    services = ['kube-proxy', 'kubelet']
556
+    for service in services:
557
+        service_restart('snap.%s.daemon' % service)
558
+
559
+
560
+def get_kube_api_servers(kube_api):
561
+    '''Return a list of kubernetes api server addresses and ports for this
562
+    relation.'''
563
+    hosts = []
564
+    # Iterate over every service from the relation object.
565
+    for service in kube_api.services():
566
+        for unit in service['hosts']:
567
+            hosts.append('https://{0}:{1}'.format(unit['hostname'],
568
+                                                  unit['port']))
569
+    return hosts
570
+
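The nested loop above expects one entry per related service, each carrying a list of host/port pairs. A self-contained illustration of the transformation (addresses are examples):

    # Shape of the data the loop iterates over, and the URLs it produces.
    services = [
        {'hosts': [
            {'hostname': '10.0.0.10', 'port': 6443},
            {'hostname': '10.0.0.11', 'port': 6443},
        ]},
    ]
    hosts = ['https://{0}:{1}'.format(unit['hostname'], unit['port'])
             for service in services for unit in service['hosts']]
    print(hosts)  # ['https://10.0.0.10:6443', 'https://10.0.0.11:6443']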
571
+
572
+def kubectl(*args):
573
+    ''' Run a kubectl cli command with a config file. Returns stdout and raises
574
+    CalledProcessError if the command fails. '''
575
+    command = ['kubectl', '--kubeconfig=' + kubeconfig_path] + list(args)
576
+    hookenv.log('Executing {}'.format(command))
577
+    return check_output(command)
578
+
579
+
580
+def kubectl_success(*args):
581
+    ''' Runs kubectl with the given args. Returns True if successful, False if
582
+    not. '''
583
+    try:
584
+        kubectl(*args)
585
+        return True
586
+    except CalledProcessError:
587
+        return False
588
+
589
+
590
+def kubectl_manifest(operation, manifest):
591
+    ''' Wrap the kubectl creation command when using filepath resources
592
+    :param operation - one of get, create, delete, replace
593
+    :param manifest - filepath to the manifest
594
+     '''
595
+    # Deletions are a special case
596
+    if operation == 'delete':
597
+        # Ensure we immediately remove requested resources with --now
598
+        return kubectl_success(operation, '-f', manifest, '--now')
599
+    else:
600
+        # Guard against an error re-creating the same manifest multiple times
601
+        if operation == 'create':
602
+            # If we already have the definition, it's probably safe to assume
603
+            # creation was true.
604
+            if kubectl_success('get', '-f', manifest):
605
+                hookenv.log('Skipping definition for {}'.format(manifest))
606
+                return True
607
+        # Execute the requested command that did not match any of the special
608
+        # cases above
609
+        return kubectl_success(operation, '-f', manifest)
610
+
611
+
612
+@when('nrpe-external-master.available')
613
+@when_not('nrpe-external-master.initial-config')
614
+def initial_nrpe_config(nagios=None):
615
+    set_state('nrpe-external-master.initial-config')
616
+    update_nrpe_config(nagios)
617
+
618
+
619
+@when('kubernetes-worker.config.created')
620
+@when('nrpe-external-master.available')
621
+@when_any('config.changed.nagios_context',
622
+          'config.changed.nagios_servicegroups')
623
+def update_nrpe_config(unused=None):
624
+    services = ('snap.kubelet.daemon', 'snap.kube-proxy.daemon')
625
+    hostname = nrpe.get_nagios_hostname()
626
+    current_unit = nrpe.get_nagios_unit_name()
627
+    nrpe_setup = nrpe.NRPE(hostname=hostname)
628
+    nrpe.add_init_service_checks(nrpe_setup, services, current_unit)
629
+    nrpe_setup.write()
630
+
631
+
632
+@when_not('nrpe-external-master.available')
633
+@when('nrpe-external-master.initial-config')
634
+def remove_nrpe_config(nagios=None):
635
+    remove_state('nrpe-external-master.initial-config')
636
+
637
+    # List of systemd services for which the checks will be removed
638
+    services = ('snap.kubelet.daemon', 'snap.kube-proxy.daemon')
639
+
640
+    # The current nrpe-external-master interface doesn't handle a lot of logic,
641
+    # so use the charm-helpers code for now.
642
+    hostname = nrpe.get_nagios_hostname()
643
+    nrpe_setup = nrpe.NRPE(hostname=hostname)
644
+
645
+    for service in services:
646
+        nrpe_setup.remove_check(shortname=service)
647
+
648
+
649
+def set_privileged():
650
+    """Update the allow-privileged flag for kubelet.
651
+
652
+    """
653
+    privileged = hookenv.config('allow-privileged')
654
+    if privileged == 'auto':
655
+        gpu_enabled = is_state('kubernetes-worker.gpu.enabled')
656
+        privileged = 'true' if gpu_enabled else 'false'
657
+
658
+    flag = 'allow-privileged'
659
+    hookenv.log('Setting {}={}'.format(flag, privileged))
660
+
661
+    kubelet_opts = FlagManager('kubelet')
662
+    kubelet_opts.add(flag, privileged)
663
+
664
+    if privileged == 'true':
665
+        set_state('kubernetes-worker.privileged')
666
+    else:
667
+        remove_state('kubernetes-worker.privileged')
668
+
669
+
670
+@when('config.changed.allow-privileged')
671
+@when('kubernetes-worker.config.created')
672
+def on_config_allow_privileged_change():
673
+    """React to changed 'allow-privileged' config value.
674
+
675
+    """
676
+    set_state('kubernetes-worker.restart-needed')
677
+    remove_state('config.changed.allow-privileged')
678
+
679
+
680
+@when('cuda.installed')
681
+@when('kubernetes-worker.config.created')
682
+@when_not('kubernetes-worker.gpu.enabled')
683
+def enable_gpu():
684
+    """Enable GPU usage on this node.
685
+
686
+    """
687
+    config = hookenv.config()
688
+    if config['allow-privileged'] == "false":
689
+        hookenv.status_set(
690
+            'active',
691
+            'GPUs available. Set allow-privileged="auto" to enable.'
692
+        )
693
+        return
694
+
695
+    hookenv.log('Enabling gpu mode')
696
+    try:
697
+        # Not sure why this is necessary, but if you don't run this, k8s will
698
+        # think that the node has 0 gpus (as shown by the output of
699
+        # `kubectl get nodes -o yaml`).
700
+        check_call(['nvidia-smi'])
701
+    except CalledProcessError as cpe:
702
+        hookenv.log('Unable to communicate with the NVIDIA driver.')
703
+        hookenv.log(cpe)
704
+        return
705
+
706
+    kubelet_opts = FlagManager('kubelet')
707
+    if get_version('kubelet') < (1, 6):
708
+        hookenv.log('Adding --experimental-nvidia-gpus=1 to kubelet')
709
+        kubelet_opts.add('experimental-nvidia-gpus', '1')
710
+    else:
711
+        hookenv.log('Adding --feature-gates=Accelerators=true to kubelet')
712
+        kubelet_opts.add('feature-gates', 'Accelerators=true')
713
+
714
+    # Apply node labels
715
+    _apply_node_label('gpu=true', overwrite=True)
716
+    _apply_node_label('cuda=true', overwrite=True)
717
+
718
+    set_state('kubernetes-worker.gpu.enabled')
719
+    set_state('kubernetes-worker.restart-needed')
720
+
721
+
722
+@when('kubernetes-worker.gpu.enabled')
723
+@when_not('kubernetes-worker.privileged')
724
+@when_not('kubernetes-worker.restart-needed')
725
+def disable_gpu():
726
+    """Disable GPU usage on this node.
727
+
728
+    This handler fires when we're running in gpu mode, and then the operator
729
+    sets allow-privileged="false". Since we can no longer run privileged
730
+    containers, we need to disable gpu mode.
731
+
732
+    """
733
+    hookenv.log('Disabling gpu mode')
734
+
735
+    kubelet_opts = FlagManager('kubelet')
736
+    if get_version('kubelet') < (1, 6):
737
+        kubelet_opts.destroy('experimental-nvidia-gpus')
738
+    else:
739
+        kubelet_opts.remove('feature-gates', 'Accelerators=true')
740
+
741
+    # Remove node labels
742
+    _apply_node_label('gpu', delete=True)
743
+    _apply_node_label('cuda', delete=True)
744
+
745
+    remove_state('kubernetes-worker.gpu.enabled')
746
+    set_state('kubernetes-worker.restart-needed')
747
+
748
+
749
+@when('kubernetes-worker.gpu.enabled')
750
+@when('kube-control.connected')
751
+def notify_master_gpu_enabled(kube_control):
752
+    """Notify kubernetes-master that we're gpu-enabled.
753
+
754
+    """
755
+    kube_control.set_gpu(True)
756
+
757
+
758
+@when_not('kubernetes-worker.gpu.enabled')
759
+@when('kube-control.connected')
760
+def notify_master_gpu_not_enabled(kube_control):
761
+    """Notify kubernetes-master that we're not gpu-enabled.
762
+
763
+    """
764
+    kube_control.set_gpu(False)
765
+
766
+
767
+@when_not('kube-control.connected')
768
+def missing_kube_control():
769
+    """Inform the operator they need to add the kube-control relation.
770
+
771
+    If deploying via bundle this won't happen, but if the operator is upgrading
772
+    a charm in a deployment that pre-dates the kube-control relation, it'll be
773
+    missing.
774
+
775
+    """
776
+    hookenv.status_set(
777
+        'blocked',
778
+        'Relate {}:kube-control kubernetes-master:kube-control'.format(
779
+            hookenv.service_name()))
780
+
781
+
782
+def _systemctl_is_active(application):
783
+    ''' Poll systemctl to determine if the application is running '''
784
+    cmd = ['systemctl', 'is-active', application]
785
+    try:
786
+        raw = check_output(cmd)
787
+        return b'active' in raw
788
+    except Exception:
789
+        return False
790
+
791
+
792
+def _apply_node_label(label, delete=False, overwrite=False):
793
+    ''' Invoke kubectl to apply node label changes '''
794
+
795
+    hostname = gethostname()
796
+    # TODO: Make this part of the kubectl calls instead of a special string
797
+    cmd_base = 'kubectl --kubeconfig={0} label node {1} {2}'
798
+
799
+    if delete is True:
800
+        label_key = label.split('=')[0]
801
+        cmd = cmd_base.format(kubeconfig_path, hostname, label_key)
802
+        cmd = cmd + '-'
803
+    else:
804
+        cmd = cmd_base.format(kubeconfig_path, hostname, label)
805
+        if overwrite:
806
+            cmd = '{} --overwrite'.format(cmd)
807
+    check_call(split(cmd))
808
+
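The delete branch works by appending '-' to the label key, which is kubectl's syntax for removing a label. The strings the helper builds look like this (hostname and kubeconfig path are placeholders):

    cmd_base = 'kubectl --kubeconfig={0} label node {1} {2}'
    kubeconfig_path = '/root/cdk/kubeconfig'  # placeholder
    hostname = 'worker-0'                     # placeholder

    # Apply (with overwrite) and delete forms of the same label.
    print('{} --overwrite'.format(
        cmd_base.format(kubeconfig_path, hostname, 'gpu=true')))
    # kubectl --kubeconfig=/root/cdk/kubeconfig label node worker-0 gpu=true --overwrite
    print(cmd_base.format(kubeconfig_path, hostname, 'gpu') + '-')
    # kubectl --kubeconfig=/root/cdk/kubeconfig label node worker-0 gpu-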
809
+
810
+def _parse_labels(labels):
811
+    ''' Parse labels from a space-separated string of key=value pairs.'''
812
+    label_array = labels.split(' ')
813
+    sanitized_labels = []
814
+    for item in label_array:
815
+        if '=' in item:
816
+            sanitized_labels.append(item)
817
+        else:
818
+            hookenv.log('Skipping malformed option: {}'.format(item))
819
+    return sanitized_labels
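Because anything without an '=' is dropped (and logged), the labels config is forgiving of stray tokens. For example:

    labels = 'gpu=true zone=us-east-1a oops'
    # Same keep/skip rule as _parse_labels(); the charm additionally logs 'oops'.
    parsed = [item for item in labels.split(' ') if '=' in item]
    print(parsed)  # ['gpu=true', 'zone=us-east-1a']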

reactive/snap.py

  1
--- 
  2
+++ reactive/snap.py
  3
@@ -0,0 +1,171 @@
  4
+# Copyright 2016 Canonical Ltd.
  5
+#
  6
+# This file is part of the Snap layer for Juju.
  7
+#
  8
+# Licensed under the Apache License, Version 2.0 (the "License");
  9
+# you may not use this file except in compliance with the License.
 10
+# You may obtain a copy of the License at
 11
+#
 12
+#  http://www.apache.org/licenses/LICENSE-2.0
 13
+#
 14
+# Unless required by applicable law or agreed to in writing, software
 15
+# distributed under the License is distributed on an "AS IS" BASIS,
 16
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 17
+# See the License for the specific language governing permissions and
 18
+# limitations under the License.
 19
+'''
 20
+charms.reactive helpers for dealing with Snap packages.
 21
+'''
 22
+import os.path
 23
+import shutil
 24
+import subprocess
 25
+from textwrap import dedent
 26
+import time
 27
+
 28
+from charmhelpers.core import hookenv, host
 29
+from charms import layer
 30
+from charms import reactive
 31
+from charms.layer import snap
 32
+from charms.reactive import hook
 33
+from charms.reactive.helpers import data_changed
 34
+
 35
+
 36
+def install():
 37
+    opts = layer.options('snap')
 38
+    for snapname, snap_opts in opts.items():
 39
+        installed_state = 'snap.installed.{}'.format(snapname)
 40
+        if not reactive.is_state(installed_state):
 41
+            snap.install(snapname, **snap_opts)
 42
+    if data_changed('snap.install.opts', opts):
 43
+        snap.connect_all()
 44
+
 45
+
 46
+def refresh():
 47
+    opts = layer.options('snap')
 48
+    for snapname, snap_opts in opts.items():
 49
+        snap.refresh(snapname, **snap_opts)
 50
+    snap.connect_all()
 51
+
 52
+
 53
+@hook('upgrade-charm')
 54
+def upgrade_charm():
 55
+    refresh()
 56
+
 57
+
 58
+def get_series():
 59
+    return subprocess.check_output(['lsb_release', '-sc'],
 60
+                                   universal_newlines=True).strip()
 61
+
 62
+
 63
+def snapd_supported():
 64
+    # snaps are not supported in trusty lxc containers.
 65
+    if get_series() == 'trusty' and host.is_container():
 66
+        return False
 67
+    return True  # For all other cases, assume true.
 68
+
 69
+
 70
+def ensure_snapd():
 71
+    if not snapd_supported():
 72
+        hookenv.log('Snaps do not work in this environment', hookenv.ERROR)
 73
+        return
 74
+
 75
+    # I don't use the apt layer, because that would tie this layer
 76
+    # too closely to apt packaging. Perhaps this is a snap-only system.
 77
+    if not shutil.which('snap'):
 78
+        cmd = ['apt', 'install', '-y', 'snapd']
 79
+        subprocess.check_call(cmd, universal_newlines=True)
 80
+
 81
+    # Work around lp:1628289. Remove this stanza once snapd depends
 82
+    # on the necessary package and snaps work in lxd xenial containers
 83
+    # without the workaround.
 84
+    if host.is_container() and not shutil.which('squashfuse'):
 85
+        cmd = ['apt', 'install', '-y', 'squashfuse', 'fuse']
 86
+        subprocess.check_call(cmd, universal_newlines=True)
 87
+
 88
+
 89
+def proxy_settings():
 90
+    proxy_vars = ('http_proxy', 'https_proxy', 'no_proxy')
 91
+    proxy_env = {key: value for key, value in os.environ.items()
 92
+                 if key in proxy_vars}
 93
+
 94
+    snap_proxy = hookenv.config()['snap_proxy']
 95
+    if snap_proxy:
 96
+        proxy_env['http_proxy'] = snap_proxy
 97
+        proxy_env['https_proxy'] = snap_proxy
 98
+    return proxy_env
 99
+
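proxy_settings() starts from any proxy variables already present in the hook environment and, when the snap_proxy config option is set, overrides both the http and https proxies with it. A self-contained sketch (the proxy URL is an example):

    import os

    proxy_vars = ('http_proxy', 'https_proxy', 'no_proxy')
    snap_proxy = 'http://squid.internal:3128'  # example snap_proxy config value

    # Keep only the recognised proxy variables from the environment.
    proxy_env = {key: value for key, value in os.environ.items()
                 if key in proxy_vars}

    # The charm config, when set, wins for both http and https.
    if snap_proxy:
        proxy_env['http_proxy'] = snap_proxy
        proxy_env['https_proxy'] = snap_proxy

    print(proxy_env['http_proxy'])  # http://squid.internal:3128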
100
+
101
+def update_snap_proxy():
102
+    # This is a hack based on
103
+    # https://bugs.launchpad.net/layer-snap/+bug/1533899/comments/1
104
+    # Do it properly when Bug #1533899 is addressed.
105
+    # Note we can't do this in a standard reactive handler as we need
106
+    # to ensure proxies are configured before attempting installs or
107
+    # updates.
108
+    proxy = proxy_settings()
109
+
110
+    if get_series() == 'trusty':
111
+        # The hack to configure a snapd proxy only works under
112
+        # xenial or later.
113
+        if proxy:
114
+            hookenv.log('snap_proxy config is not supported under '
115
+                        'Ubuntu 14.04 (trusty)', hookenv.ERROR)
116
+        return
117
+
118
+    path = '/etc/systemd/system/snapd.service.d/snap_layer_proxy.conf'
119
+    if not proxy and not os.path.exists(path):
120
+        return  # No proxy asked for and proxy never configured.
121
+
122
+    if not data_changed('snap.proxy', proxy):
123
+        return  # Short circuit avoids unnecessary restarts.
124
+
125
+    if proxy:
126
+        create_snap_proxy_conf(path, proxy)
127
+    else:
128
+        remove_snap_proxy_conf(path)
129
+    subprocess.check_call(['systemctl', 'daemon-reload'],
130
+                          universal_newlines=True)
131
+    time.sleep(2)
132
+    subprocess.check_call(['systemctl', 'restart', 'snapd.service'],
133
+                          universal_newlines=True)
134
+