~xfactor973/gluster-20

Owner: xfactor973
Status: Needs Fixing
Vote: -1 (+2 needed for approval)

CPP?: No
OIL?: No

Actions have been added to make gluster deployments more flexible. Additional unit tests were also added to improve confidence in the code quality.
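For example, one of the new quota actions could be run with something like `juju run-action gluster/0 create-volume-quota volume=myvol usage-limit=1024` (or the equivalent `juju action do` form on older clients). The parameter names come from the new actions.yaml; the unit name and values here are only illustrative.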


Tests

Substrate Status Results Last Updated
lxc RETRY 19 days ago
aws RETRY 19 days ago
gce RETRY 19 days ago

Voted: +0
xfactor973 wrote 5 months ago
My testing is currently done through a combination of Travis CI and Mojo. Amulet doesn't support Juju storage yet, and that's my blocker.
Voted: +0
chris.macnaughton wrote 5 months ago
In the new actions.rs, would the `?` operator be cleaner than the repetitive `try!` macros?

Additional comment inline.
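For reference, a minimal sketch of the suggestion above: one of the parameter-fetch blocks from the new actions.rs rewritten with the `?` operator (stable since Rust 1.13). This is not part of the changeset; it assumes the same juju crate calls used in actions.rs and an enclosing function that returns Result<(), String>.

    // Direct swap: ? in place of try!, error handling unchanged.
    let volume = match juju::action_get("volume") {
        Ok(v) => v,
        Err(e) => {
            // Report the failure to the action framework, then propagate it.
            juju::action_fail(&e.to_string()).map_err(|e| e.to_string())?;
            return Err(e.to_string());
        }
    };

    // Alternatively, collapsing the match entirely. Note this ignores any
    // error returned by action_fail itself, a slight behaviour change from
    // the current code.
    let volume = juju::action_get("volume").map_err(|e| {
        let _ = juju::action_fail(&e.to_string());
        e.to_string()
    })?;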
Voted: -1
kwmonroe wrote 2 months ago
Moving to "needs fixing" pending follow-up to the above comment.



Policy Checklist

Description Unreviewed Pass Fail

General

Must verify that any software installed or utilized comes from the intended source. xfactor973
  • Any software installed from the Ubuntu or CentOS default archives satisfies this due to the apt and yum sources including cryptographic signing information.
  • Third party repositories must be listed as a configuration option that can be overridden by the user and not hard coded in the charm itself.
  • Launchpad PPAs are acceptable as the add-apt-repository command retrieves the keys securely.
  • Other third party repositories are acceptable if the signing key is embedded in the charm.
Must provide a means to protect users from known security vulnerabilities in a way consistent with best practices as defined by either operating system policies or upstream documentation.
Basically, this means there must be instructions on how to apply updates if you use software not from distribution channels.
Must have hooks that are idempotent. xfactor973
Should be built using charm layers.
Should use Juju Resources to deliver required payloads.

Testing and Quality

charm proof must pass without errors or warnings. xfactor973
Must include passing unit, functional, or integration tests. xfactor973
Tests must exercise all relations.
Tests must exercise config.
set-config, unset-config, and re-set must be tested as a minimum
Must not use anything infrastructure-provider specific (e.g. querying the EC2 metadata service). xfactor973
Must be self contained unless the charm is a proxy for an existing cloud service, e.g. ec2-elb charm.
Must not use symlinks.
Bundles must only use promulgated charms, they cannot reference charms in personal namespaces.
Must call Juju hook tools (relation-*, unit-*, config-*, etc) without a hard coded path.
Should include a tests.yaml for all integration tests.

Metadata

Must include a full description of what the software does.
Must include a maintainer email address for a team or individual who will be responsive to contact.
Must include a license. Call the file 'copyright' and make sure all files' licenses are specified clearly.
Must be under a Free license. xfactor973
Must have a well documented and valid README.md.
Must describe the service.
Must describe how it interacts with other services, if applicable.
Must document the interfaces.
Must show how to deploy the charm. xfactor973
Must define external dependencies, if applicable.
Should link to a recommended production usage bundle and recommended configuration if this differs from the default.
Should reference and link to upstream documentation and best practices.

Security

Must not run any network services using default passwords. xfactor973
Must verify and validate any external payload.
  • Known and understood packaging systems that verify packages like apt, pip, and yum are ok.
  • wget | sh style is not ok.
Should make use of whatever Mandatory Access Control system is provided by the distribution.
Should avoid running services as root.


Source Diff

Inline diff comments: 1


Cargo.lock

  1
--- Cargo.lock
  2
+++ Cargo.lock
  3
@@ -2,12 +2,13 @@
  4
 name = "gluster-charm"
  5
 version = "0.1.0"
  6
 dependencies = [
  7
- "gluster 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
  8
- "itertools 0.4.15 (registry+https://github.com/rust-lang/crates.io-index)",
  9
- "juju 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
 10
+ "gluster 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)",
 11
+ "itertools 0.4.16 (registry+https://github.com/rust-lang/crates.io-index)",
 12
+ "juju 0.5.4 (registry+https://github.com/rust-lang/crates.io-index)",
 13
  "libudev 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
 14
- "regex 0.1.69 (registry+https://github.com/rust-lang/crates.io-index)",
 15
- "uuid 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
 16
+ "log 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
 17
+ "regex 0.1.73 (registry+https://github.com/rust-lang/crates.io-index)",
 18
+ "uuid 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
 19
 ]
 20
 
 21
 [[package]]
 22
@@ -29,20 +30,29 @@
 23
 source = "registry+https://github.com/rust-lang/crates.io-index"
 24
 
 25
 [[package]]
 26
+name = "charmhelpers"
 27
+version = "0.1.3"
 28
+source = "registry+https://github.com/rust-lang/crates.io-index"
 29
+dependencies = [
 30
+ "juju 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
 31
+ "log 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
 32
+]
 33
+
 34
+[[package]]
 35
 name = "gluster"
 36
-version = "0.2.0"
 37
+version = "0.4.1"
 38
 source = "registry+https://github.com/rust-lang/crates.io-index"
 39
 dependencies = [
 40
  "byteorder 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)",
 41
  "log 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
 42
- "regex 0.1.69 (registry+https://github.com/rust-lang/crates.io-index)",
 43
+ "regex 0.1.73 (registry+https://github.com/rust-lang/crates.io-index)",
 44
  "unix_socket 0.5.0 (registry+https://github.com/rust-lang/crates.io-index)",
 45
- "uuid 0.1.18 (registry+https://github.com/rust-lang/crates.io-index)",
 46
+ "uuid 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)",
 47
 ]
 48
 
 49
 [[package]]
 50
 name = "itertools"
 51
-version = "0.4.15"
 52
+version = "0.4.16"
 53
 source = "registry+https://github.com/rust-lang/crates.io-index"
 54
 
 55
 [[package]]
 56
@@ -51,17 +61,26 @@
 57
 source = "registry+https://github.com/rust-lang/crates.io-index"
 58
 
 59
 [[package]]
 60
+name = "juju"
 61
+version = "0.5.4"
 62
+source = "registry+https://github.com/rust-lang/crates.io-index"
 63
+dependencies = [
 64
+ "charmhelpers 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)",
 65
+ "log 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)",
 66
+]
 67
+
 68
+[[package]]
 69
 name = "kernel32-sys"
 70
 version = "0.2.2"
 71
 source = "registry+https://github.com/rust-lang/crates.io-index"
 72
 dependencies = [
 73
- "winapi 0.2.7 (registry+https://github.com/rust-lang/crates.io-index)",
 74
+ "winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)",
 75
  "winapi-build 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)",
 76
 ]
 77
 
 78
 [[package]]
 79
 name = "libc"
 80
-version = "0.2.11"
 81
+version = "0.2.14"
 82
 source = "registry+https://github.com/rust-lang/crates.io-index"
 83
 
 84
 [[package]]
 85
@@ -69,7 +88,7 @@
 86
 version = "0.2.0"
 87
 source = "registry+https://github.com/rust-lang/crates.io-index"
 88
 dependencies = [
 89
- "libc 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
 90
+ "libc 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)",
 91
  "libudev-sys 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)",
 92
 ]
 93
 
 94
@@ -78,7 +97,7 @@
 95
 version = "0.1.3"
 96
 source = "registry+https://github.com/rust-lang/crates.io-index"
 97
 dependencies = [
 98
- "libc 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
 99
+ "libc 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)",
100
  "pkg-config 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)",
101
 ]
102
 
103
@@ -92,7 +111,7 @@
104
 version = "0.1.11"
105
 source = "registry+https://github.com/rust-lang/crates.io-index"
106
 dependencies = [
107
- "libc 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
108
+ "libc 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)",
109
 ]
110
 
111
 [[package]]
112
@@ -105,29 +124,24 @@
113
 version = "0.3.14"
114
 source = "registry+https://github.com/rust-lang/crates.io-index"
115
 dependencies = [
116
- "libc 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
117
+ "libc 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)",
118
 ]
119
 
120
 [[package]]
121
 name = "regex"
122
-version = "0.1.69"
123
+version = "0.1.73"
124
 source = "registry+https://github.com/rust-lang/crates.io-index"
125
 dependencies = [
126
  "aho-corasick 0.5.2 (registry+https://github.com/rust-lang/crates.io-index)",
127
  "memchr 0.1.11 (registry+https://github.com/rust-lang/crates.io-index)",
128
- "regex-syntax 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)",
129
- "thread_local 0.2.5 (registry+https://github.com/rust-lang/crates.io-index)",
130
+ "regex-syntax 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)",
131
+ "thread_local 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)",
132
  "utf8-ranges 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)",
133
 ]
134
 
135
 [[package]]
136
 name = "regex-syntax"
137
-version = "0.3.1"
138
-source = "registry+https://github.com/rust-lang/crates.io-index"
139
-
140
-[[package]]
141
-name = "rustc-serialize"
142
-version = "0.3.19"
143
+version = "0.3.4"
144
 source = "registry+https://github.com/rust-lang/crates.io-index"
145
 
146
 [[package]]
147
@@ -136,12 +150,12 @@
148
 source = "registry+https://github.com/rust-lang/crates.io-index"
149
 dependencies = [
150
  "kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)",
151
- "libc 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
152
+ "libc 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)",
153
 ]
154
 
155
 [[package]]
156
 name = "thread_local"
157
-version = "0.2.5"
158
+version = "0.2.6"
159
 source = "registry+https://github.com/rust-lang/crates.io-index"
160
 dependencies = [
161
  "thread-id 2.0.0 (registry+https://github.com/rust-lang/crates.io-index)",
162
@@ -153,7 +167,7 @@
163
 source = "registry+https://github.com/rust-lang/crates.io-index"
164
 dependencies = [
165
  "cfg-if 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
166
- "libc 0.2.11 (registry+https://github.com/rust-lang/crates.io-index)",
167
+ "libc 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)",
168
 ]
169
 
170
 [[package]]
171
@@ -163,21 +177,15 @@
172
 
173
 [[package]]
174
 name = "uuid"
175
-version = "0.1.18"
176
+version = "0.3.0"
177
 source = "registry+https://github.com/rust-lang/crates.io-index"
178
 dependencies = [
179
  "rand 0.3.14 (registry+https://github.com/rust-lang/crates.io-index)",
180
- "rustc-serialize 0.3.19 (registry+https://github.com/rust-lang/crates.io-index)",
181
-]
182
-
183
-[[package]]
184
-name = "uuid"
185
-version = "0.2.2"
186
-source = "registry+https://github.com/rust-lang/crates.io-index"
187
+]
188
 
189
 [[package]]
190
 name = "winapi"
191
-version = "0.2.7"
192
+version = "0.2.8"
193
 source = "registry+https://github.com/rust-lang/crates.io-index"
194
 
195
 [[package]]
196
@@ -185,3 +193,29 @@
197
 version = "0.1.1"
198
 source = "registry+https://github.com/rust-lang/crates.io-index"
199
 
200
+[metadata]
201
+"checksum aho-corasick 0.5.2 (registry+https://github.com/rust-lang/crates.io-index)" = "2b3fb52b09c1710b961acb35390d514be82e4ac96a9969a8e38565a29b878dc9"
202
+"checksum byteorder 0.4.2 (registry+https://github.com/rust-lang/crates.io-index)" = "96c8b41881888cc08af32d47ac4edd52bc7fa27fef774be47a92443756451304"
203
+"checksum cfg-if 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)" = "de1e760d7b6535af4241fca8bd8adf68e2e7edacc6b29f5d399050c5e48cf88c"
204
+"checksum charmhelpers 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)" = "d283acb47c175cb7754dc64402934b02a7d25eb2434a0a8a08f376cd79359e09"
205
+"checksum gluster 0.4.1 (registry+https://github.com/rust-lang/crates.io-index)" = "a160c24cb2d257387482d0514666c4decdb2159ff667d6b65158d9ac81b26803"
206
+"checksum itertools 0.4.16 (registry+https://github.com/rust-lang/crates.io-index)" = "ac6e56e7cfd710efcf4c4f614bd101794845d9fe5f406b87ac5108b9153d033f"
207
+"checksum juju 0.3.1 (registry+https://github.com/rust-lang/crates.io-index)" = "d0b4692d90fcfe6c60e9f7d49d830d485b7c3765cf996dd4ea90af071f891c72"
208
+"checksum juju 0.5.4 (registry+https://github.com/rust-lang/crates.io-index)" = "2e13325024dd5a6618434555e26e97cf329f9533d112b3165a7342300eec2396"
209
+"checksum kernel32-sys 0.2.2 (registry+https://github.com/rust-lang/crates.io-index)" = "7507624b29483431c0ba2d82aece8ca6cdba9382bff4ddd0f7490560c056098d"
210
+"checksum libc 0.2.14 (registry+https://github.com/rust-lang/crates.io-index)" = "39dfaaa0f4da0f1a06876c5d94329d739ad0150868069cc235f1ddf80a0480e7"
211
+"checksum libudev 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)" = "ea626d3bdf40a1c5aee3bcd4f40826970cae8d80a8fec934c82a63840094dcfe"
212
+"checksum libudev-sys 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)" = "249a1e347fa266dc3184ebc9b4dc57108a30feda16ec0b821e94b42be20b9355"
213
+"checksum log 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)" = "ab83497bf8bf4ed2a74259c1c802351fcd67a65baa86394b6ba73c36f4838054"
214
+"checksum memchr 0.1.11 (registry+https://github.com/rust-lang/crates.io-index)" = "d8b629fb514376c675b98c1421e80b151d3817ac42d7c667717d282761418d20"
215
+"checksum pkg-config 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)" = "8cee804ecc7eaf201a4a207241472cc870e825206f6c031e3ee2a72fa425f2fa"
216
+"checksum rand 0.3.14 (registry+https://github.com/rust-lang/crates.io-index)" = "2791d88c6defac799c3f20d74f094ca33b9332612d9aef9078519c82e4fe04a5"
217
+"checksum regex 0.1.73 (registry+https://github.com/rust-lang/crates.io-index)" = "56b7ee9f764ecf412c6e2fff779bca4b22980517ae335a21aeaf4e32625a5df2"
218
+"checksum regex-syntax 0.3.4 (registry+https://github.com/rust-lang/crates.io-index)" = "31040aad7470ad9d8c46302dcffba337bb4289ca5da2e3cd6e37b64109a85199"
219
+"checksum thread-id 2.0.0 (registry+https://github.com/rust-lang/crates.io-index)" = "a9539db560102d1cef46b8b78ce737ff0bb64e7e18d35b2a5688f7d097d0ff03"
220
+"checksum thread_local 0.2.6 (registry+https://github.com/rust-lang/crates.io-index)" = "55dd963dbaeadc08aa7266bf7f91c3154a7805e32bb94b820b769d2ef3b4744d"
221
+"checksum unix_socket 0.5.0 (registry+https://github.com/rust-lang/crates.io-index)" = "6aa2700417c405c38f5e6902d699345241c28c0b7ade4abaad71e35a87eb1564"
222
+"checksum utf8-ranges 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)" = "a1ca13c08c41c9c3e04224ed9ff80461d97e121589ff27c753a16cb10830ae0f"
223
+"checksum uuid 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)" = "37b6bcf0ca642aa5eacd180801d6cfd196bf5defadef564e6cf680a9b8235d56"
224
+"checksum winapi 0.2.8 (registry+https://github.com/rust-lang/crates.io-index)" = "167dc9d6949a9b857f3451275e911c3f44255842c1f7a76f33c55103a909087a"
225
+"checksum winapi-build 0.1.1 (registry+https://github.com/rust-lang/crates.io-index)" = "2d315eee3b34aca4797b2da6b13ed88266e6d612562a0c46390af8299fc699bc"

Cargo.toml

 1
--- Cargo.toml
 2
+++ Cargo.toml
 3
@@ -8,8 +8,9 @@
 4
 itertools = "*"
 5
 juju = "*"
 6
 libudev = "*"
 7
+log = "*"
 8
 regex = "*"
 9
-uuid = "*"
10
+uuid = { version = "*", features = ["v4"] }
11
 
12
 # The development profile, used for `cargo build`
13
 [profile.dev]

actions.yaml

  1
--- 
  2
+++ actions.yaml
  3
@@ -0,0 +1,283 @@
  4
+create-volume-quota:
  5
+  description: |
  6
+    Directory quotas in GlusterFS allows you to set limits on usage of the disk
  7
+    space by volumes.
  8
+  params:
  9
+    volume:
 10
+      type: string
 11
+      description: The volume to enable this quota on
 12
+    usage-limit:
 13
+      type: integer
 14
+      description: The MB limit of the quota for this volume.
 15
+    path:
 16
+      type: string
 17
+      description: The path to limit the usage on.  Defaults to /
 18
+      default: "/"
 19
+  required: [volume, usage-limit]
 20
+  additionalProperties: false
 21
+delete-volume-quota:
 22
+  description: |
 23
+    Directory quotas in GlusterFS allows you to set limits on usage of the disk
 24
+    space by volumes.
 25
+  params:
 26
+    volume:
 27
+      type: string
 28
+      description: The volume to disable this quota on
 29
+    path:
 30
+      type: string
 31
+      description: The path to remove the limit on.  Defaults to /
 32
+      default: "/"
 33
+  required: [volume]
 34
+  additionalProperties: false
 35
+list-volume-quotas:
 36
+  description: |
 37
+    Directory quotas in GlusterFS allows you to set limits on usage of the disk
 38
+    space by volumes.
 39
+  params:
 40
+    volume:
 41
+      type: string
 42
+      description: The volume to list quotas on
 43
+  required: [volume]
 44
+  additionalProperties: false
 45
+set-volume-options:
 46
+  description: |
 47
+    You can tune volume options, as needed, while the cluster is online
 48
+    and available.
 49
+  params:
 50
+    volume:
 51
+      type: string
 52
+      description: The volume to set the option on
 53
+    auth-allow:
 54
+      type: string
 55
+      description: |
 56
+        IP addresses of the clients which should be allowed to access the
 57
+        volume. Valid IP address which includes wild card patterns including *,
 58
+        such as 192.168.1.*
 59
+    auth-reject:
 60
+      type: string
 61
+      description: |
 62
+        IP addresses of the clients which should be denied to access the volume.
 63
+        Valid IP address which includes wild card patterns including *,
 64
+        such as 192.168.1.*
 65
+    cluster-self-heal-window-size:
 66
+      type: integer
 67
+      description: |
 68
+        Specifies the maximum number of blocks per file on which self-heal
 69
+        would happen simultaneously.
 70
+      minimum: 0
 71
+      maximum: 1025
 72
+    cluster-data-self-heal-algorithm:
 73
+      description: |
 74
+        Specifies the type of self-heal. If you set the option as "full",
 75
+        the entire file is copied from source to destinations. If the option
 76
+        is set to "diff" the file blocks that are not in sync are copied to
 77
+        destinations. Reset uses a heuristic model. If the file does not exist
 78
+        on one of the subvolumes, or a zero-byte file exists (created by
 79
+        entry self-heal) the entire content has to be copied anyway, so there
 80
+        is no benefit from using the "diff" algorithm. If the file size is
 81
+        about the same as page size, the entire file can be read and written
 82
+        with a few operations, which will be faster than "diff" which has to
 83
+        read checksums and then read and write.
 84
+      type: string
 85
+      enum: [full,diff,reset]
 86
+    cluster-min-free-disk:
 87
+      type: integer
 88
+      description: |
 89
+        Specifies the percentage of disk space that must be kept free.
 90
+        Might be useful for non-uniform bricks
 91
+      minimum: 0
 92
+      maximum: 100
 93
+    cluster-stripe-block-size:
 94
+      type: integer
 95
+      description: |
 96
+        Specifies the size of the stripe unit that will be read from or written
 97
+        to.
 98
+    cluster-self-heal-daemon:
 99
+      type: boolean
100
+      description: |
101
+        Allows you to turn-off proactive self-heal on replicated
102
+    cluster-ensure-durability:
103
+      type: boolean
104
+      description: |
105
+        This option makes sure the data/metadata is durable across abrupt
106
+        shutdown of the brick.
107
+    diagnostics-brick-log-level:
108
+      type: string
109
+      description: |
110
+        Changes the log-level of the bricks.
111
+      enum: [debug,warning,error,none,trace,critical]
112
+    diagnostics-client-log-level:
113
+      type: string
114
+      description: |
115
+        Changes the log-level of the clients.
116
+      enum: [debug,warning,error,none,trace,critical]
117
+    diagnostics-latency-measurement:
118
+      type: boolean
119
+      description: |
120
+        Statistics related to the latency of each operation would be tracked.
121
+    diagnostics-dump-fd-stats:
122
+      type: boolean
123
+      description: |
124
+        Statistics related to file-operations would be tracked.
125
+    features-read-only:
126
+      type: boolean
127
+      description: |
128
+        Enables you to mount the entire volume as read-only for all the
129
+        clients (including NFS clients) accessing it.
130
+    features-lock-heal:
131
+      type: boolean
132
+      description: |
133
+        Enables self-healing of locks when the network disconnects.
134
+    features-quota-timeout:
135
+      type: integer
136
+      description: |
137
+        For performance reasons, quota caches the directory sizes on client.
138
+        You can set timeout indicating the maximum duration of directory sizes
139
+        in cache, from the time they are populated, during which they are
140
+        considered valid
141
+      minimum: 0
142
+      maximum: 3600
143
+    geo-replication-indexing:
144
+      type: boolean
145
+      description: |
146
+        Use this option to automatically sync the changes in the filesystem
147
+        from Master to Slave.
148
+    nfs-enable-ino32:
149
+      type: boolean
150
+      description: |
151
+        For 32-bit nfs clients or applications that do not support 64-bit
152
+        inode numbers or large files, use this option from the CLI to make
153
+        Gluster NFS return 32-bit inode numbers instead of 64-bit inode numbers.
154
+    nfs-volume-access:
155
+      type: string
156
+      description: |
157
+        Set the access type for the specified sub-volume.
158
+      enum: [read-write,read-only]
159
+    nfs-trusted-write:
160
+      type: boolean
161
+      description: |
162
+        If there is an UNSTABLE write from the client, STABLE flag will be
163
+        returned to force the client to not send a COMMIT request. In some
164
+        environments, combined with a replicated GlusterFS setup, this option
165
+        can improve write performance. This flag allows users to trust Gluster
166
+        replication logic to sync data to the disks and recover when required.
167
+        COMMIT requests if received will be handled in a default manner by
168
+        fsyncing. STABLE writes are still handled in a sync manner.
169
+    nfs-trusted-sync:
170
+      type: boolean
171
+      description: |
172
+        All writes and COMMIT requests are treated as async. This implies that
173
+        no write requests are guaranteed to be on server disks when the write
174
+        reply is received at the NFS client. Trusted sync includes
175
+        trusted-write behavior.
176
+    nfs-export-dir:
177
+      type: string
178
+      description: |
179
+        This option can be used to export specified comma separated
180
+        subdirectories in the volume. The path must be an absolute path.
181
+        Along with path allowed list of IPs/hostname can be associated with
182
+        each subdirectory. If provided connection will allowed only from these
183
+        IPs. Format: \<dir>[(hostspec[hostspec...])][,...]. Where hostspec can
184
+        be an IP address, hostname or an IP range in CIDR notation. Note: Care
185
+        must be taken while configuring this option as invalid entries and/or
186
+        unreachable DNS servers can introduce unwanted delay in all the mount
187
+        calls.
188
+    nfs-export-volumes:
189
+      type: boolean
190
+      description: |
191
+        Enable/Disable exporting entire volumes, instead if used in
192
+        conjunction with nfs3.export-dir, can allow setting up only
193
+        subdirectories as exports.
194
+    nfs-rpc-auth-unix:
195
+      type: boolean
196
+      description: |
197
+        Enable/Disable the AUTH_UNIX authentication type. This option is
198
+        enabled by default for better interoperability. However, you can
199
+        disable it if required.
200
+    nfs-rpc-auth-null:
201
+      type: boolean
202
+      description: |
203
+        Enable/Disable the AUTH_NULL authentication type. It is not recommended
204
+        to change the default value for this option.
205
+    nfs-ports-insecure:
206
+      type: boolean
207
+      description: |
208
+        Allow client connections from unprivileged ports. By default only
209
+        privileged ports are allowed. This is a global setting in case insecure
210
+        ports are to be enabled for all exports using a single option.
211
+    nfs-addr-namelookup:
212
+      type: boolean
213
+      description: |
214
+        Turn-off name lookup for incoming client connections using this option.
215
+        In some setups, the name server can take too long to reply to DNS
216
+        queries resulting in timeouts of mount requests. Use this option to
217
+        turn off name lookups during address authentication. Note, turning this
218
+        off will prevent you from using hostnames in rpc-auth.addr.* filters.
219
+    nfs-register-with-portmap:
220
+      type: boolean
221
+      description: |
222
+        For systems that need to run multiple NFS servers, you need to prevent
223
+        more than one from registering with portmap service. Use this option to
224
+        turn off portmap registration for Gluster NFS.
225
+    nfs-disable:
226
+      type: boolean
227
+      description: |
228
+        Turn-off volume being exported by NFS
229
+    performance-write-behind-window-size:
230
+      type: integer
231
+      description: |
232
+        Size of the per-file write-behind buffer.
233
+    performance-io-thread-count:
234
+      type: integer
235
+      description: |
236
+        The number of threads in IO threads translator.
237
+      minimum: 0
238
+      maximum: 65
239
+    performance-flush-behind:
240
+      type: boolean
241
+      description: |
242
+        If this option is set ON, instructs write-behind translator to perform
243
+        flush in background, by returning success (or any errors, if any of
244
+        previous writes were failed) to application even before flush is sent
245
+        to backend filesystem.
246
+    performance-cache-max-file-size:
247
+      type: integer
248
+      description: |
249
+        Sets the maximum file size cached by the io-cache translator. Can use
250
+        the normal size descriptors of KB, MB, GB,TB or PB (for example, 6GB).
251
+        Maximum size uint64.
252
+    performance-cache-min-file-size:
253
+      type: integer
254
+      description: |
255
+        Sets the minimum file size cached by the io-cache translator. Values
256
+        same as "max" above
257
+    performance-cache-refresh-timeout:
258
+      type: integer
259
+      description: |
260
+        The cached data for a file will be retained till 'cache-refresh-timeout'
261
+        seconds, after which data re-validation is performed.
262
+      minimum: 0
263
+      maximum: 61
264
+    performance-cache-size:
265
+      type: integer
266
+      description: |
267
+        Size of the read cache in bytes
268
+    server-allow-insecure:
269
+      type: boolean
270
+      description: |
271
+        Allow client connections from unprivileged ports. By default only
272
+        privileged ports are allowed. This is a global setting in case insecure
273
+        ports are to be enabled for all exports using a single option.
274
+    server-grace-timeout:
275
+      type: integer
276
+      description: |
277
+        Specifies the duration for the lock state to be maintained on the server
278
+        after a network disconnection.
279
+      minimum: 10
280
+      maximum: 1800
281
+    server-statedump-path:
282
+      type: string
283
+      description: |
284
+        Location of the state dump file.
285
+  required: [volume]
286
+  additionalProperties: false

config.yaml

 1
--- config.yaml
 2
+++ config.yaml
 3
@@ -26,6 +26,7 @@
 4
       Generally 2 or 3 will be fine for almost all use cases.  Greater than 3
 5
       could be useful for read heavy uses cases.
 6
   filesystem_type:
 7
+    type: string
 8
     default: xfs
 9
     description: |
10
       The filesystem type to use for each one of the bricks.  Can be either

copyright

 1
--- 
 2
+++ copyright
 3
@@ -0,0 +1,16 @@
 4
+Format: http://dep.debian.net/deps/dep5/
 5
+
 6
+Files: *
 7
+Copyright: Copyright 2015, Canonical Ltd., All Rights Reserved.
 8
+License: Apache License 2.0
 9
+ Licensed under the Apache License, Version 2.0 (the "License");
10
+ you may not use this file except in compliance with the License.
11
+ You may obtain a copy of the License at
12
+ .
13
+     http://www.apache.org/licenses/LICENSE-2.0
14
+ .
15
+ Unless required by applicable law or agreed to in writing, software
16
+ distributed under the License is distributed on an "AS IS" BASIS,
17
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
18
+ See the License for the specific language governing permissions and
19
+ limitations under the License.

src/actions.rs

  1
--- 
  2
+++ src/actions.rs
  3
@@ -0,0 +1,140 @@
  4
+use gluster;
  5
+use juju;
  6
+use log::LogLevel;
  7
+
  8
+use std::path::PathBuf;
  9
+use std::str::FromStr;
 10
+
 11
+pub fn enable_volume_quota() -> Result<(), String> {
 12
+    // Gather our action parameters
 13
+    let volume = match juju::action_get("volume") {
 14
+        Ok(v) => v,
 15
+        Err(e) => {
 16
+            // Notify the user of the failure and then return the error up the stack
 17
+            try!(juju::action_fail(&e.to_string()).map_err(|e| e.to_string()));
 18
+            return Err(e.to_string());
 19
+        }
 20
+    };
 21
+    let usage_limit = match juju::action_get("usage-limit") {
 22
+        Ok(usage) => usage,
 23
+        Err(e) => {
 24
+            // Notify the user of the failure and then return the error up the stack
 25
+            try!(juju::action_fail(&e.to_string()).map_err(|e| e.to_string()));
 26
+            return Err(e.to_string());
 27
+        }
 28
+    };
 29
+    let parsed_usage_limit = try!(u64::from_str(&usage_limit).map_err(|e| e.to_string()));
 30
+    let path = match juju::action_get("path") {
 31
+        Ok(p) => p,
 32
+        Err(e) => {
 33
+            // Notify the user of the failure and then return the error up the stack
 34
+            try!(juju::action_fail(&e.to_string()).map_err(|e| e.to_string()));
 35
+            return Err(e.to_string());
 36
+        }
 37
+    };
 38
+
 39
+    // Turn quotas on if not already enabled
 40
+    let quotas_enabled = try!(gluster::volume_quotas_enabled(&volume).map_err(|e| e.to_string()));
 41
+    if !quotas_enabled {
 42
+        try!(gluster::volume_enable_quotas(&volume).map_err(|e| e.to_string()));
 43
+    }
 44
+
 45
+    try!(gluster::volume_add_quota(&volume, PathBuf::from(path), parsed_usage_limit)
 46
+        .map_err(|e| e.to_string()));
 47
+    Ok(())
 48
+}
 49
+
 50
+pub fn disable_volume_quota() -> Result<(), String> {
 51
+    // Gather our action parameters
 52
+    let volume = match juju::action_get("volume") {
 53
+        Ok(v) => v,
 54
+        Err(e) => {
 55
+            // Notify the user of the failure and then return the error up the stack
 56
+            try!(juju::action_fail(&e.to_string()).map_err(|e| e.to_string()));
 57
+            return Err(e.to_string());
 58
+        }
 59
+    };
 60
+    let path = match juju::action_get("path") {
 61
+        Ok(p) => p,
 62
+        Err(e) => {
 63
+            // Notify the user of the failure and then return the error up the stack
 64
+            try!(juju::action_fail(&e.to_string()).map_err(|e| e.to_string()));
 65
+            return Err(e.to_string());
 66
+        }
 67
+    };
 68
+
 69
+    let quotas_enabled = try!(gluster::volume_quotas_enabled(&volume).map_err(|e| e.to_string()));
 70
+    if quotas_enabled {
 71
+        match gluster::volume_remove_quota(&volume, PathBuf::from(path)) {
 72
+            Ok(_) => return Ok(()),
 73
+            Err(e) => {
 74
+                // Notify the user of the failure and then return the error up the stack
 75
+                try!(juju::action_fail(&e.to_string()).map_err(|e| e.to_string()));
 76
+                return Err(e.to_string());
 77
+            }
 78
+        }
 79
+    } else {
 80
+        return Ok(());
 81
+    }
 82
+}
 83
+
 84
+pub fn list_volume_quotas() -> Result<(), String> {
 85
+    // Gather our action parameters
 86
+    let volume = match juju::action_get("volume") {
 87
+        Ok(v) => v,
 88
+        Err(e) => {
 89
+            // Notify the user of the failure and then return the error up the stack
 90
+            juju::log(&format!("Failed to get volume param: {:?}", e),
 91
+                      Some(LogLevel::Debug));
 92
+            try!(juju::action_fail(&e.to_string()).map_err(|e| e.to_string()));
 93
+            return Err(e.to_string());
 94
+        }
 95
+    };
 96
+    let quotas_enabled = try!(gluster::volume_quotas_enabled(&volume).map_err(|e| e.to_string()));
 97
+    if quotas_enabled {
 98
+        match gluster::quota_list(&volume) {
 99
+            Ok(quotas) => {
100
+                let quota_string: Vec<String> = quotas.iter()
101
+                    .map(|quota| {
102
+                        format!("path: {:?} limit: {} used: {}",
103
+                                quota.path,
104
+                                quota.limit,
105
+                                quota.used)
106
+                    })
107
+                    .collect();
108
+                try!(juju::action_set("quotas", &quota_string.join("\n"))
109
+                    .map_err(|e| e.to_string()));
110
+                return Ok(());
111
+            }
112
+            Err(e) => {
113
+                juju::log(&format!("Quota list failed: {:?}", e),
114
+                          Some(LogLevel::Error));
115
+                return Err(e.to_string());
116
+            }
117
+        }
118
+    } else {
119
+        juju::log(&format!("Quotas are disabled on volume: {}", volume),
120
+                  Some(LogLevel::Debug));
121
+        return Ok(());
122
+    }
123
+}
124
+
125
+pub fn set_volume_options() -> Result<(), String> {
126
+    // volume is a required parameter so this should be safe
127
+    let mut volume: String = String::new();
128
+
129
+    // Gather all of the action parameters up at once.  We don't know what
130
+    // the user wants to change.
131
+    let options = try!(juju::action_get_all().map_err(|e| e.to_string()));
132
+    let mut settings: Vec<gluster::GlusterOption> = Vec::new();
133
+    for (key, value) in options {
134
+        if key != "volume" {
135
+            settings.push(try!(gluster::GlusterOption::from_str(&key, value)
136
+                .map_err(|e| e.to_string())));
137
+        } else {
138
+            volume = value;
139
+        }
140
+    }
141
+    try!(gluster::volume_set_options(&volume, settings).map_err(|e| e.to_string()));
142
+    return Ok(());
143
+}

src/block.rs

 1
--- src/block.rs
 2
+++ src/block.rs
 3
@@ -1,14 +1,15 @@
 4
 extern crate juju;
 5
 extern crate libudev;
 6
 extern crate regex;
 7
-extern crate uuid;
 8
 use self::regex::Regex;
 9
-use self::uuid::Uuid;
10
+use uuid::Uuid;
11
 
12
 use std::fs;
13
 use std::io::ErrorKind;
14
 use std::path::PathBuf;
15
 use std::process::{Command, Output};
16
+
17
+use log::LogLevel;
18
 
19
 // Formats a block device at Path p with XFS
20
 #[derive(Clone, Debug)]
21
@@ -85,7 +86,8 @@
22
         reserved_blocks_percentage: u8,
23
     },
24
     Xfs {
25
-        // This is optional.  Boost knobs are on by default: http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E
26
+        // This is optional.  Boost knobs are on by default:
27
+        // http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E
28
         inode_size: Option<u64>,
29
         force: bool,
30
     },
31
@@ -186,7 +188,8 @@
32
 }
33
 
34
 fn process_output(output: Output) -> Result<i32, String> {
35
-    juju::log(&format!("Command output: {:?}", output));
36
+    juju::log(&format!("Command output: {:?}", output),
37
+              Some(LogLevel::Debug));
38
 
39
     if output.status.success() {
40
         Ok(0)

src/main.rs

  1
--- src/main.rs
  2
+++ src/main.rs
  3
@@ -1,8 +1,14 @@
  4
+mod actions;
  5
 mod block;
  6
 
  7
 extern crate gluster;
  8
 extern crate itertools;
  9
+#[macro_use]
 10
 extern crate juju;
 11
+extern crate log;
 12
+extern crate uuid;
 13
+
 14
+use actions::{disable_volume_quota, enable_volume_quota, list_volume_quotas, set_volume_options};
 15
 
 16
 use itertools::Itertools;
 17
 use std::env;
 18
@@ -13,6 +19,7 @@
 19
 use std::thread;
 20
 use std::time::Duration;
 21
 
 22
+use log::LogLevel;
 23
 // A gluster server has either joined or left this cluster
 24
 //
 25
 
 26
@@ -28,108 +35,119 @@
 27
 
 28
 #[cfg(test)]
 29
 mod tests {
 30
-    extern crate uuid;
 31
+    // extern crate uuid;
 32
+    use std::collections::BTreeMap;
 33
     use std::fs::File;
 34
     use std::io::prelude::Read;
 35
-    use self::uuid::Uuid;
 36
-
 37
-    // #[test]
 38
-    // fn generate_test_peers(amount: usize)->Vec<gluster::Peer>{
 39
-    // let mut peers: Vec<gluster::Peer> = Vec::with_capacity(amount);
 40
-    // let mut count = 0;
 41
-    // loop{
 42
-    // let p = gluster::Peer {
 43
-    // uuid: Uuid::new_v4(),
 44
-    // hostname: format!("host-{}",Uuid::new_v4()),
 45
-    // status: gluster::State::Connected,
 46
-    // };
 47
-    // peers.push(p);
 48
-    // count+=1;
 49
-    // if count == amount{
 50
-    // break;
 51
-    // }
 52
-    // }
 53
-    // return peers;
 54
-    // }
 55
-    //
 56
-    // #[test]
 57
-    // fn generate_test_bricks(peers: &Vec<gluster::Peer>)->Vec<gluster::Brick>{
 58
-    // let mut bricks: Vec<gluster::Brick> = Vec::with_capacity(peers.len());
 59
-    // let mut count = 0;
 60
-    // for peer in peers{
 61
-    // let b = gluster::Brick{
 62
-    // peer: peer.clone(),
 63
-    // path: PathBuf::from(&format!("/mnt/{}",count)),
 64
-    // };
 65
-    // bricks.push(b);
 66
-    // count+=1;
 67
-    // }
 68
-    // return bricks;
 69
-    // }
 70
-    //
 71
+    use std::path::PathBuf;
 72
+    use super::gluster;
 73
+    use super::uuid;
 74
 
 75
     #[test]
 76
-    fn test_block_device_usage() {}
 77
-
 78
-    // #[test]
 79
-    // fn test_brick_generation(){
 80
-    // let mut test_peers = generate_test_peers(3);
 81
-    // let data: Value = json::from_str("[\"/mnt/sda\", \"/mnt/sdb\"]").unwrap();
 82
-    // let brick_path_array = data.as_array().unwrap();
 83
-    //
 84
-    // let c = Config{
 85
-    // volume_name: "test".to_string(),
 86
-    // brick_paths: brick_path_array.clone(),
 87
-    // cluster_type: gluster::VolumeType::Replicate,
 88
-    // replicas: 3,
 89
-    // };
 90
-    //
 91
-    // Case 1: New volume and perfectly matched peer number to replica number
 92
-    // let b1 = get_brick_list(&c, &test_peers, None).unwrap();
 93
-    // println!("get_brick_list 1: {:?}", b1);
 94
-    // assert!(b1.len() == 6);
 95
-    //
 96
-    // Case 2: New volume and we're short 1 Peer
 97
-    //
 98
-    // Drop a peer off the end
 99
-    // test_peers.pop();
100
-    // let b2 = get_brick_list(&c, &test_peers, None);
101
-    // println!("get_brick_list 2: {:?}", b2);
102
-    // assert!(b2.is_none());
103
-    //
104
-    // Now add a peer and try again
105
-    // test_peers.push(gluster::Peer{
106
-    // uuid: Uuid::new_v4(),
107
-    // hostname: "host-x".to_string(),
108
-    // status: gluster::State::Connected,
109
-    // });
110
-    // let b3 = get_brick_list(&c, &test_peers, None);
111
-    // println!("get_brick_list 3: {:?}", b3);
112
-    // assert!(b1.len() == 6);
113
-    //
114
-    //
115
-    // Case 3: Existing volume with 2 peers and we're adding 2 Peers
116
-    // let test_peers2 = generate_test_peers(2);
117
-    // let v = gluster::Volume {
118
-    // name: "test".to_string(),
119
-    // vol_type: gluster::VolumeType::Replicate,
120
-    // id: Uuid::new_v4(),
121
-    // status: "normal".to_string(),
122
-    // transport: gluster::Transport::Tcp,
123
-    // bricks: generate_test_bricks(&test_peers),
124
-    // };
125
-    // let b4 = get_brick_list(&c, &test_peers2, Some(v));
126
-    // println!("get_brick_list 4: {:?}", b4);
127
-    // assert!(b4.is_none());
128
-    //
129
-    //
130
-    // Case 4: Mismatch of new volume and too many peers
131
-    // let test_peers3 = generate_test_peers(4);
132
-    // let b5 = get_brick_list(&c, &test_peers3, None).unwrap();
133
-    // println!("get_brick_list 5: {:?}", b5);
134
-    // assert!(b5.len() == 6);
135
-    // }
136
-    //
137
+    fn test_all_peers_are_ready() {
138
+        let peers: Vec<gluster::Peer> = vec![gluster::Peer {
139
+                                                 uuid: uuid::Uuid::new_v4(),
140
+                                                 hostname: format!("host-{}", uuid::Uuid::new_v4()),
141
+                                                 status: gluster::State::PeerInCluster,
142
+                                             },
143
+                                             gluster::Peer {
144
+                                                 uuid: uuid::Uuid::new_v4(),
145
+                                                 hostname: format!("host-{}", uuid::Uuid::new_v4()),
146
+                                                 status: gluster::State::PeerInCluster,
147
+                                             }];
148
+        let ready = super::peers_are_ready(Ok(peers));
149
+        println!("Peers are ready: {}", ready);
150
+        assert!(ready);
151
+    }
152
+
153
+    #[test]
154
+    #[should_panic]
chris.macnaughton commented 5 months ago
Why should this test panic?
155
+    fn test_some_peers_are_ready() {
156
+        let peers: Vec<gluster::Peer> = vec![gluster::Peer {
157
+                                                 uuid: uuid::Uuid::new_v4(),
158
+                                                 hostname: format!("host-{}", uuid::Uuid::new_v4()),
159
+                                                 status: gluster::State::Connected,
160
+                                             },
161
+                                             gluster::Peer {
162
+                                                 uuid: uuid::Uuid::new_v4(),
163
+                                                 hostname: format!("host-{}", uuid::Uuid::new_v4()),
164
+                                                 status: gluster::State::PeerInCluster,
165
+                                             }];
166
+        let ready = super::peers_are_ready(Ok(peers));
167
+        println!("Some peers are ready: {}", ready);
168
+        assert!(ready);
169
+    }
170
+
171
+    #[test]
172
+    fn test_find_new_peers() {
173
+        let peer1 = gluster::Peer {
174
+            uuid: uuid::Uuid::new_v4(),
175
+            hostname: format!("host-{}", uuid::Uuid::new_v4()),
176
+            status: gluster::State::PeerInCluster,
177
+        };
178
+        let peer2 = gluster::Peer {
179
+            uuid: uuid::Uuid::new_v4(),
180
+            hostname: format!("host-{}", uuid::Uuid::new_v4()),
181
+            status: gluster::State::PeerInCluster,
182
+        };
183
+
184
+        // peer1 and peer2 are in the cluster but only peer1 is actually serving a brick.
185
+        // find_new_peers should return peer2 as a new peer
186
+        let peers: Vec<gluster::Peer> = vec![peer1.clone(), peer2.clone()];
187
+        let existing_brick = gluster::Brick {
188
+            peer: peer1,
189
+            path: PathBuf::from("/mnt/brick1"),
190
+        };
191
+
192
+        let volume_info = gluster::Volume {
193
+            name: "Test".to_string(),
194
+            vol_type: gluster::VolumeType::Replicate,
195
+            id: uuid::Uuid::new_v4(),
196
+            status: "online".to_string(),
197
+            transport: gluster::Transport::Tcp,
198
+            bricks: vec![existing_brick],
199
+            options: BTreeMap::new(),
200
+        };
201
+        let new_peers = super::find_new_peers(&peers, &volume_info);
202
+        assert_eq!(new_peers, vec![peer2]);
203
+    }
204
+
205
+    #[test]
206
+    fn test_cartesian_product() {
207
+        let peer1 = gluster::Peer {
208
+            uuid: uuid::Uuid::new_v4(),
209
+            hostname: format!("host-{}", uuid::Uuid::new_v4()),
210
+            status: gluster::State::PeerInCluster,
211
+        };
212
+        let peer2 = gluster::Peer {
213
+            uuid: uuid::Uuid::new_v4(),
214
+            hostname: format!("host-{}", uuid::Uuid::new_v4()),
215
+            status: gluster::State::PeerInCluster,
216
+        };
217
+        let peers = vec![peer1.clone(), peer2.clone()];
218
+        let paths = vec!["/mnt/brick1".to_string(), "/mnt/brick2".to_string()];
219
+        let result = super::brick_and_server_cartesian_product(&peers, &paths);
220
+        println!("brick_and_server_cartesian_product: {:?}", result);
221
+        assert_eq!(result,
222
+                   vec![
223
+                       gluster::Brick{
224
+                            peer: peer1.clone(),
225
+                            path: PathBuf::from("/mnt/brick1"),
226
+                        },
227
+                        gluster::Brick{
228
+                            peer: peer2.clone(),
229
+                            path: PathBuf::from("/mnt/brick1"),
230
+                        },
231
+                        gluster::Brick{
232
+                            peer: peer1.clone(),
233
+                            path: PathBuf::from("/mnt/brick2"),
234
+                        },
235
+                        gluster::Brick{
236
+                            peer: peer2.clone(),
237
+                            path: PathBuf::from("/mnt/brick2"),
238
+                        },
239
+                    ]);
240
+    }
241
 }
242
 
243
 // Need more expressive return values so we can wait on peers
244
@@ -164,26 +182,26 @@
245
         return false;
246
     }
247
 
248
-    juju::log(&format!("Got peer status: {:?}", peers));
249
+    juju::log(&format!("Got peer status: {:?}", peers),
250
+              Some(LogLevel::Debug));
251
     let result = match peers {
252
         Ok(result) => result,
253
         Err(err) => {
254
-            juju::log(&format!("peers_are_ready failed to get peer status: {:?}", err));
255
+            juju::log(&format!("peers_are_ready failed to get peer status: {:?}", err),
256
+                      Some(LogLevel::Error));
257
             return false;
258
         }
259
     };
260
-    for peer in result {
261
-        if peer.status != gluster::State::PeerInCluster {
262
-            return false;
263
-        }
264
-    }
265
-    return true;
266
+
267
+    // Ensure all peers are in a PeerInCluster state
268
+    result.iter().all(|peer| peer.status == gluster::State::PeerInCluster)
269
 }
270
 
271
 // HDD's are so slow that sometimes the peers take long to join the cluster.
272
 // This will loop and wait for them ie spinlock
273
 fn wait_for_peers() -> Result<(), String> {
274
-    juju::log(&"Waiting for all peers to enter the Peer in Cluster status".to_string());
275
+    juju::log(&"Waiting for all peers to enter the Peer in Cluster status".to_string(),
276
+              Some(LogLevel::Debug));
277
     try!(juju::status_set(juju::Status {
278
             status_type: juju::StatusType::Maintenance,
279
             message: "Waiting for all peers to enter the \"Peer in Cluster status\"".to_string(),
280
@@ -215,7 +233,8 @@
281
                   related_units: Vec<juju::Relation>)
282
                   -> Result<(), String> {
283
 
284
-    juju::log(&format!("Adding in related_units: {:?}", related_units));
285
+    juju::log(&format!("Adding in related_units: {:?}", related_units),
286
+              Some(LogLevel::Debug));
287
     for unit in related_units {
288
         let address = try!(juju::relation_get_by_unit(&"private-address".to_string(), &unit)
289
             .map_err(|e| e.to_string()));
290
@@ -232,11 +251,16 @@
291
 
292
         // Probe the peer in
293
         if !already_probed {
294
-            juju::log(&format!("Adding {} to cluster", &address_trimmed));
295
+            juju::log(&format!("Adding {} to cluster", &address_trimmed),
296
+                      Some(LogLevel::Debug));
297
             match gluster::peer_probe(&address_trimmed) {
298
-                Ok(_) => juju::log(&"Gluster peer probe was successful".to_string()),
299
+                Ok(_) => {
300
+                    juju::log(&"Gluster peer probe was successful".to_string(),
301
+                              Some(LogLevel::Debug))
302
+                }
303
                 Err(why) => {
304
-                    juju::log(&format!("Gluster peer probe failed: {:?}", why));
305
+                    juju::log(&format!("Gluster peer probe failed: {:?}", why),
306
+                              Some(LogLevel::Error));
307
                     return Err(why.to_string());
308
                 }
309
             };
310
@@ -297,7 +321,8 @@
311
     let mut brick_paths: Vec<String> = Vec::new();
312
 
313
     let bricks = juju::storage_list().unwrap();
314
-    juju::log(&format!("storage_list: {:?}", bricks));
315
+    juju::log(&format!("storage_list: {:?}", bricks),
316
+              Some(LogLevel::Debug));
317
 
318
     for brick in bricks.lines() {
319
         // This is the /dev/ location.
320
@@ -310,17 +335,19 @@
321
     }
322
 
323
     if volume.is_none() {
324
-        juju::log(&"Volume is none".to_string());
325
+        juju::log(&"Volume is none".to_string(), Some(LogLevel::Debug));
326
         // number of bricks % replicas == 0 then we're ok to proceed
327
         if peers.len() < replicas {
328
             // Not enough peers to replicate across
329
             juju::log(&"Not enough peers to satisfy the replication level for the Gluster \
330
                         volume.  Waiting for more peers to join."
331
-                .to_string());
332
+                          .to_string(),
333
+                      Some(LogLevel::Debug));
334
             return Err(Status::WaitForMorePeers);
335
         } else if peers.len() == replicas {
336
             // Case 1: A perfect marriage of peers and number of replicas
337
-            juju::log(&"Number of peers and number of replicas match".to_string());
338
+            juju::log(&"Number of peers and number of replicas match".to_string(),
339
+                      Some(LogLevel::Debug));
340
             return Ok(brick_and_server_cartesian_product(peers, &brick_paths));
341
         } else {
342
             // Case 2: We have a mismatch of replicas and hosts
343
@@ -330,24 +357,29 @@
344
 
345
             // Drop these peers off the end of the list
346
             new_peers.truncate(count);
347
-            juju::log(&format!("Too many new peers.  Dropping {} peers off the list", count));
348
+            juju::log(&format!("Too many new peers.  Dropping {} peers off the list", count),
349
+                      Some(LogLevel::Debug));
350
             return Ok(brick_and_server_cartesian_product(&new_peers, &brick_paths));
351
         }
352
     } else {
353
         // Existing volume.  Build a differential list.
354
-        juju::log(&"Existing volume.  Building differential brick list".to_string());
355
+        juju::log(&"Existing volume.  Building differential brick list".to_string(),
356
+                  Some(LogLevel::Debug));
357
         let mut new_peers = find_new_peers(peers, &volume.unwrap());
358
 
359
         if new_peers.len() < replicas {
360
-            juju::log(&"New peers found are less than needed by the replica count".to_string());
361
+            juju::log(&"New peers found are less than needed by the replica count".to_string(),
362
+                      Some(LogLevel::Debug));
363
             return Err(Status::WaitForMorePeers);
364
         } else if new_peers.len() == replicas {
365
-            juju::log(&"New peers and number of replicas match".to_string());
366
+            juju::log(&"New peers and number of replicas match".to_string(),
367
+                      Some(LogLevel::Debug));
368
             return Ok(brick_and_server_cartesian_product(&new_peers, &brick_paths));
369
         } else {
370
             let count = new_peers.len() - (new_peers.len() % replicas);
371
             // Drop these peers off the end of the list
372
-            juju::log(&format!("Too many new peers.  Dropping {} peers off the list", count));
373
+            juju::log(&format!("Too many new peers.  Dropping {} peers off the list", count),
374
+                      Some(LogLevel::Debug));
375
             new_peers.truncate(count);
376
             return Ok(brick_and_server_cartesian_product(&new_peers, &brick_paths));
377
         }
378
@@ -360,7 +392,7 @@
379
         Err(e) => {
380
             match e.kind() {
381
                 std::io::ErrorKind::NotFound => {
382
-                    juju::log(&format!("Creating dir {}", path));
383
+                    juju::log(&format!("Creating dir {}", path), Some(LogLevel::Debug));
384
                     try!(juju::status_set(juju::Status {
385
                             status_type: juju::StatusType::Maintenance,
386
                             message: format!("Creating dir {}", path),
387
@@ -389,7 +421,8 @@
388
         Err(e) => {
389
             juju::log(&format!("Invalid config value for replicas.  Defaulting to 3. Error was \
390
                                 {}",
391
-                               e));
392
+                               e),
393
+                      Some(LogLevel::Error));
394
             3
395
         }
396
     };
397
@@ -404,7 +437,7 @@
398
         Err(e) => {
399
             match e {
400
                 Status::WaitForMorePeers => {
401
-                    juju::log(&"Waiting for more peers".to_string());
402
+                    juju::log(&"Waiting for more peers".to_string(), Some(LogLevel::Info));
403
                     try!(juju::status_set(juju::Status {
404
                             status_type: juju::StatusType::Maintenance,
405
                             message: "Waiting for more peers".to_string(),
406
@@ -422,14 +455,16 @@
407
             }
408
         }
409
     };
410
-    juju::log(&format!("Got brick list: {:?}", brick_list));
411
+    juju::log(&format!("Got brick list: {:?}", brick_list),
412
+              Some(LogLevel::Debug));
413
 
414
     // Check to make sure the bricks are formatted and mounted
415
     // let clean_bricks = try!(check_brick_list(&brick_list).map_err(|e| e.to_string()));
416
 
417
     juju::log(&format!("Creating volume of type {:?} with brick list {:?}",
418
                        cluster_type,
419
-                       brick_list));
420
+                       brick_list),
421
+              Some(LogLevel::Info));
422
 
423
     match cluster_type {
424
         gluster::VolumeType::Distribute => {
425
@@ -519,7 +554,8 @@
 
     // Are there new peers?
     juju::log(&format!("Checking for new peers to expand the volume named {}",
-                       volume_name));
+                       volume_name),
+              Some(LogLevel::Debug));
 
     // Build the brick list
     let brick_list = match get_brick_list(&peers, volume_info) {
@@ -527,7 +563,7 @@
         Err(e) => {
             match e {
                 Status::WaitForMorePeers => {
-                    juju::log(&"Waiting for more peers".to_string());
+                    juju::log(&"Waiting for more peers".to_string(), Some(LogLevel::Info));
                     return Ok(0);
                 }
                 Status::InvalidConfig(config_err) => {
@@ -544,7 +580,8 @@
     // Check to make sure the bricks are formatted and mounted
     // let clean_bricks = try!(check_brick_list(&brick_list).map_err(|e| e.to_string()));
 
-    juju::log(&format!("Expanding volume with brick list: {:?}", brick_list));
+    juju::log(&format!("Expanding volume with brick list: {:?}", brick_list),
+              Some(LogLevel::Info));
     match gluster::volume_add_brick(&volume_name, brick_list, true) {
         Ok(o) => Ok(o),
         Err(e) => Err(e.to_string()),
@@ -554,7 +591,8 @@
 fn shrink_volume(peer: gluster::Peer, volume_info: Option<gluster::Volume>) -> Result<i32, String> {
     let volume_name = try!(get_config_value("volume_name"));
 
-    juju::log(&format!("Shrinking volume named  {}", volume_name));
+    juju::log(&format!("Shrinking volume named  {}", volume_name),
+              Some(LogLevel::Info));
 
     let peers: Vec<gluster::Peer> = vec![peer];
 
@@ -564,7 +602,7 @@
         Err(e) => {
             match e {
                 Status::WaitForMorePeers => {
-                    juju::log(&"Waiting for more peers".to_string());
+                    juju::log(&"Waiting for more peers".to_string(), Some(LogLevel::Info));
                     return Ok(0);
                 }
                 Status::InvalidConfig(config_err) => {
@@ -578,7 +616,8 @@
         }
     };
 
-    juju::log(&format!("Shrinking volume with brick list: {:?}", brick_list));
+    juju::log(&format!("Shrinking volume with brick list: {:?}", brick_list),
+              Some(LogLevel::Info));
     match gluster::volume_remove_brick(&volume_name, brick_list, true) {
         Ok(o) => Ok(o),
         Err(e) => Err(e.to_string()),
@@ -591,8 +630,9 @@
     let volume_name = try!(get_config_value("volume_name"));
 
     if leader {
-        juju::log(&format!("I am the leader: {}", context.relation_id));
-        juju::log(&"Loading config".to_string());
+        juju::log(&format!("I am the leader: {}", context.relation_id),
+                  Some(LogLevel::Debug));
+        juju::log(&"Loading config".to_string(), Some(LogLevel::Info));
 
         let mut f = try!(File::open("config.yaml").map_err(|e| e.to_string()));
         let mut s = String::new();
@@ -605,7 +645,7 @@
             .map_err(|e| e.to_string()));
 
         let mut peers = try!(gluster::peer_list().map_err(|e| e.to_string()));
-        juju::log(&format!("peer list: {:?}", peers));
+        juju::log(&format!("peer list: {:?}", peers), Some(LogLevel::Debug));
         let related_units = try!(juju::relation_list().map_err(|e| e.to_string()));
         try!(probe_in_units(&peers, related_units));
         // Update our peer list
@@ -616,7 +656,8 @@
         let existing_volume: bool;
         match volume_info {
             Ok(_) => {
-                juju::log(&format!("Expading volume {}", volume_name));
509
+                juju::log(&format!("Expading volume {}", volume_name),
510
+                          Some(LogLevel::Info));
                 try!(juju::status_set(juju::Status {
                         status_type: juju::StatusType::Maintenance,
                         message: format!("Expanding volume {}", volume_name),
@@ -625,7 +666,8 @@
 
                 match expand_volume(peers, volume_info.ok()) {
                     Ok(v) => {
-                        juju::log(&format!("Expand volume succeeded.  Return code: {}", v));
+                        juju::log(&format!("Expand volume succeeded.  Return code: {}", v),
+                                  Some(LogLevel::Info));
                         try!(juju::status_set(juju::Status {
                                 status_type: juju::StatusType::Active,
                                 message: "Expand volume succeeded.".to_string(),
@@ -634,7 +676,8 @@
                         return Ok(());
                     }
                     Err(e) => {
-                        juju::log(&format!("Expand volume failed with output: {}", e));
+                        juju::log(&format!("Expand volume failed with output: {}", e),
+                                  Some(LogLevel::Error));
                         try!(juju::status_set(juju::Status {
                                 status_type: juju::StatusType::Blocked,
                                 message: "Expand volume failed.  Please check juju debug-log."
@@ -653,7 +696,8 @@
             }
         }
         if !existing_volume {
-            juju::log(&format!("Creating volume {}", volume_name));
+            juju::log(&format!("Creating volume {}", volume_name),
+                      Some(LogLevel::Info));
             try!(juju::status_set(juju::Status {
                     status_type: juju::StatusType::Maintenance,
                     message: format!("Creating volume {}", volume_name),
@@ -661,7 +705,8 @@
                 .map_err(|e| e.to_string()));
             match create_volume(&peers, None) {
                 Ok(_) => {
-                    juju::log(&"Create volume succeeded.".to_string());
+                    juju::log(&"Create volume succeeded.".to_string(),
+                              Some(LogLevel::Info));
                     try!(juju::status_set(juju::Status {
                             status_type: juju::StatusType::Maintenance,
                             message: "Create volume succeeded".to_string(),
@@ -669,7 +714,8 @@
                         .map_err(|e| e.to_string()));
                 }
                 Err(e) => {
-                    juju::log(&format!("Create volume failed with output: {}", e));
+                    juju::log(&format!("Create volume failed with output: {}", e),
+                              Some(LogLevel::Error));
                     try!(juju::status_set(juju::Status {
                             status_type: juju::StatusType::Blocked,
                             message: "Create volume failed.  Please check juju debug-log."
@@ -681,7 +727,8 @@
             }
             match gluster::volume_start(&volume_name, false) {
                 Ok(_) => {
-                    juju::log(&"Starting volume succeeded.".to_string());
+                    juju::log(&"Starting volume succeeded.".to_string(),
+                              Some(LogLevel::Info));
                     try!(juju::status_set(juju::Status {
                             status_type: juju::StatusType::Active,
                             message: "Starting volume succeeded.".to_string(),
@@ -689,7 +736,8 @@
                         .map_err(|e| e.to_string()));
                 }
                 Err(e) => {
-                    juju::log(&format!("Start volume failed with output: {:?}", e));
+                    juju::log(&format!("Start volume failed with output: {:?}", e),
+                              Some(LogLevel::Error));
                     try!(juju::status_set(juju::Status {
                             status_type: juju::StatusType::Blocked,
                             message: "Start volume failed.  Please check juju debug-log."
@@ -718,7 +766,8 @@
 
 fn server_removed() -> Result<(), String> {
     let private_address = try!(juju::unit_get_private_addr().map_err(|e| e.to_string()));
-    juju::log(&format!("Removing server: {}", private_address));
+    juju::log(&format!("Removing server: {}", private_address),
+              Some(LogLevel::Info));
     return Ok(());
 }
 
@@ -733,7 +782,8 @@
     // Format with the default XFS unless told otherwise
     match filesystem_type {
         block::FilesystemType::Xfs => {
-            juju::log(&format!("Formatting block device with XFS: {:?}", &brick_path));
+            juju::log(&format!("Formatting block device with XFS: {:?}", &brick_path),
+                      Some(LogLevel::Info));
             try!(juju::status_set(juju::Status {
                     status_type: juju::StatusType::Maintenance,
                     message: format!("Formatting block device with XFS: {:?}", &brick_path),
@@ -747,7 +797,8 @@
             try!(block::format_block_device(&brick_path, &filesystem_type));
         }
         block::FilesystemType::Ext4 => {
-            juju::log(&format!("Formatting block device with Ext4: {:?}", &brick_path));
+            juju::log(&format!("Formatting block device with Ext4: {:?}", &brick_path),
+                      Some(LogLevel::Info));
             try!(juju::status_set(juju::Status {
                     status_type: juju::StatusType::Maintenance,
                     message: format!("Formatting block device with Ext4: {:?}", &brick_path),
@@ -762,7 +813,8 @@
                 .map_err(|e| e.to_string()));
         }
         block::FilesystemType::Btrfs => {
-            juju::log(&format!("Formatting block device with Btrfs: {:?}", &brick_path));
+            juju::log(&format!("Formatting block device with Btrfs: {:?}", &brick_path),
+                      Some(LogLevel::Info));
             try!(juju::status_set(juju::Status {
                     status_type: juju::StatusType::Maintenance,
                     message: format!("Formatting block device with Btrfs: {:?}", &brick_path),
@@ -778,7 +830,8 @@
                 .map_err(|e| e.to_string()));
         }
         _ => {
-            juju::log(&format!("Formatting block device with XFS: {:?}", &brick_path));
+            juju::log(&format!("Formatting block device with XFS: {:?}", &brick_path),
+                      Some(LogLevel::Info));
             try!(juju::status_set(juju::Status {
                     status_type: juju::StatusType::Maintenance,
                     message: format!("Formatting block device with XFS: {:?}", &brick_path),
@@ -795,9 +848,11 @@
     }
     // Update our block device info to reflect formatting
     let device_info = try!(block::get_device_info(&brick_path));
-    juju::log(&format!("device_info: {:?}", device_info));
-
-    juju::log(&format!("Mounting block device {:?} at {}", &brick_path, mount_path));
+    juju::log(&format!("device_info: {:?}", device_info),
+              Some(LogLevel::Info));
+
+    juju::log(&format!("Mounting block device {:?} at {}", &brick_path, mount_path),
+              Some(LogLevel::Info));
     try!(juju::status_set(juju::Status {
             status_type: juju::StatusType::Maintenance,
             message: format!("Mounting block device {:?} at {}", &brick_path, mount_path),
@@ -831,48 +886,26 @@
 fn main() {
     let args: Vec<String> = env::args().collect();
     if args.len() > 0 {
-        let mut hook_registry: Vec<juju::Hook> = Vec::new();
-
         // Register our hooks with the Juju library
-        hook_registry.push(juju::Hook {
-            name: "config-changed".to_string(),
-            callback: Box::new(config_changed),
-        });
-
-        hook_registry.push(juju::Hook {
-            name: "server-relation-changed".to_string(),
-            callback: Box::new(server_changed),
-        });
-
-        hook_registry.push(juju::Hook {
-            name: "server-relation-departed".to_string(),
-            callback: Box::new(server_removed),
-        });
-
-        hook_registry.push(juju::Hook {
-            name: "brick-storage-attached".to_string(),
-            callback: Box::new(brick_attached),
-        });
-
-        hook_registry.push(juju::Hook {
-            name: "brick-storage-detaching".to_string(),
-            callback: Box::new(brick_detached),
-        });
-
-        hook_registry.push(juju::Hook {
-            name: "fuse-relation-joined".to_string(),
-            callback: Box::new(fuse_relation_joined),
-        });
-
-        hook_registry.push(juju::Hook {
-            name: "nfs-relation-joined".to_string(),
-            callback: Box::new(nfs_relation_joined),
-        });
-
-        let result = juju::process_hooks(args, hook_registry);
+        let hook_registry: Vec<juju::Hook> = vec![
+            hook!("set-volume-options", set_volume_options),
+            hook!("create-volume-quota", enable_volume_quota),
+            hook!("delete-volume-quota", disable_volume_quota),
+            hook!("list-volume-quotas", list_volume_quotas),
+            hook!("config-changed", config_changed),
+            hook!("server-relation-changed", server_changed),
+            hook!("server-relation-departed", server_removed),
+            hook!("brick-storage-attached", brick_attached),
+            hook!("brick-storage-detaching", brick_detached),
+            hook!("fuse-relation-joined", fuse_relation_joined),
+            hook!("nfs-relation-joined", nfs_relation_joined),
+        ];
+
+        let result = juju::process_hooks(hook_registry);
 
         if result.is_err() {
-            juju::log(&format!("Hook failed with error: {:?}", result.err()));
-        }
-    }
-}
+            juju::log(&format!("Hook failed with error: {:?}", result.err()),
+                      Some(LogLevel::Error));
+        }
+    }
+}
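
Reviewer note: the error handling in this file leans heavily on try! combined with map_err(|e| e.to_string()). On Rust 1.13 or later the same control flow can be written with the ? operator. The snippet below is only an illustrative, self-contained sketch, not part of the patch: get_config_value here is a local stand-in with an assumed Result<String, String> signature, and the config.yaml read mirrors the one already present in this diff.

use std::fs::File;
use std::io::Read;

// Hypothetical stand-in for the charm's get_config_value helper
// (assumed Result<String, String> signature; not part of this patch).
fn get_config_value(key: &str) -> Result<String, String> {
    match key {
        "volume_name" => Ok("test".to_string()),
        _ => Err(format!("unknown config key: {}", key)),
    }
}

// Style used in the patch: try! plus map_err to a String error
// (2015-edition syntax, as in this charm).
fn load_config_with_try() -> Result<String, String> {
    let volume_name = try!(get_config_value("volume_name"));
    let mut f = try!(File::open("config.yaml").map_err(|e| e.to_string()));
    let mut s = String::new();
    try!(f.read_to_string(&mut s).map_err(|e| e.to_string()));
    Ok(format!("{} config is {} bytes", volume_name, s.len()))
}

// The same flow with the ? operator (Rust 1.13+); behavior is identical.
fn load_config_with_question_mark() -> Result<String, String> {
    let volume_name = get_config_value("volume_name")?;
    let mut f = File::open("config.yaml").map_err(|e| e.to_string())?;
    let mut s = String::new();
    f.read_to_string(&mut s).map_err(|e| e.to_string())?;
    Ok(format!("{} config is {} bytes", volume_name, s.len()))
}

fn main() {
    println!("{:?}", load_config_with_try());
    println!("{:?}", load_config_with_question_mark());
}

Because these functions already return Result<_, String>, ? is a direct drop-in for try! here; no extra From conversions beyond the existing map_err calls are needed.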
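Every call site above now repeats Some(LogLevel::...) as the second argument to juju::log. If that gets noisy, thin wrappers could centralize the level choice. The sketch below is self-contained for illustration only: the juju module is a mock that copies just the log(&String, Option<LogLevel>) shape visible in this diff, and the wrapper names are hypothetical.

mod juju {
    #[derive(Debug)]
    pub enum LogLevel {
        Debug,
        Info,
        Error,
    }

    // Mock of the logging signature used throughout the patch;
    // the real crate writes to juju-log rather than stdout.
    pub fn log(msg: &String, level: Option<LogLevel>) {
        println!("{:?}: {}", level, msg);
    }
}

use self::juju::LogLevel;

// Hypothetical wrappers that pick the level once.
fn log_debug(msg: &str) {
    juju::log(&msg.to_string(), Some(LogLevel::Debug));
}

fn log_info(msg: &str) {
    juju::log(&msg.to_string(), Some(LogLevel::Info));
}

fn log_error(msg: &str) {
    juju::log(&msg.to_string(), Some(LogLevel::Error));
}

fn main() {
    let volume_name = "test";
    log_info(&format!("Creating volume {}", volume_name));
    log_debug("peer list: []");
    log_error("Create volume failed.  Please check juju debug-log.");
}

Call sites would then shrink to, for example, log_info(&format!("Creating volume {}", volume_name)); whether that indirection is worth it is a style call for the charm author.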