PSARC 2015/110 OpenStack service updates for Juno
PSARC 2014/302 oslo.messaging - OpenStack RPC and notifications
PSARC 2014/303 concurrent.futures - high-level Python interface for asynchronous execution
PSARC 2014/304 networkx - Python module for complex networks
PSARC 2014/305 taskflow - Python module for task execution
PSARC 2014/329 pycadf - Python interface for CADF (cloud auditing)
PSARC 2014/330 posix_ipc - POSIX IPC primitives for Python
PSARC 2014/331 oauthlib - Python implementation of OAuth request-signing logic
PSARC 2015/058 oslo - OpenStack common libraries (context, db, i18n, middleware, serialization, utils, vmware)
PSARC 2015/059 glance_store - Glance storage library
PSARC 2015/060 ipaddr - an IPv4/IPv6 manipulation library in Python
PSARC 2015/061 simplegeneric - single-dispatch generic Python functions
PSARC 2015/062 wsme - Web Services Made Easy
PSARC 2015/063 retrying - General purpose Python retrying library
PSARC 2015/065 osprofiler - an OpenStack cross-project profiling library
PSARC 2015/066 OpenStack client for Sahara (Hadoop as a Service)
PSARC 2015/067 keystonemiddleware - Middleware for OpenStack Identity
PSARC 2015/068 pyScss - Compiler for the SCSS flavor of the Sass language
PSARC 2015/069 django-pyscss - pyScss support for Django
PSARC 2015/073 barbicanclient - OpenStack client for Barbican (Key Management)
PSARC 2015/074 pysendfile - Python interface to sendfile
PSARC 2015/097 ldappool - a connection pool for python-ldap
PSARC 2015/098 rfc3986 - URI reference validation module for Python
PSARC 2015/102 iniparse - python .ini file parsing module
20667775 OpenStack service updates for Juno (Umbrella)
18615101 Horizon should prevent network, subnet, and port names with hyphens in them
18772068 instance failed to launch with NoValidHost but no reason
18887457 openstack shouldn't deliver .po files
18905324 hostname.xml should set config/ignore_dhcp_hostname = true
18961031 Duplicate names for role-create and user-create are allowed
19015363 Users should not be allowed to attempt to create volumes when quota is exceeded
19050335 user appears logged in but unauthorised after horizon reboot
19144215 Instance manipulation buttons greyed out after all instances terminated
19249066 heat stack-preview doesn't appear to do anything
19313272 Need bottom slidebar in horizon for small browser windows
19462265 The Python module oslo.messaging should be added to Userland
19462397 The Python module futures should be added to Userland
19476604 The Python module networkx should be added to Userland
19476953 The Python module taskflow should be added to Userland
19519227 The Python module pycadf should be added to Userland
19582394 The Python module posix_ipc should be added to Userland
19598430 The Python module oauthlib should be added to Userland
19815780 nova package should have dependencies on brand-solaris and brand-solaris-kz
19883623 Image snapshots are missing 'instance_uuid' property
19887874 horizon should set up apache log rotation
19987962 Cinder lists additional volumes attached to instance with linuxy device names
20027791 horizon should be migrated to Apache 2.4
20164815 The Python module django-pyscss should be added to Userland
20173049 The Python module retrying should be added to Userland
20174489 The Python module WSME should be added to Userland
20176001 The Python module keystonemiddleware should be added to Userland
20182039 The Python module pysendfile should be added to Userland
20200162 The Python module pyScss should be added to Userland
20222184 horizon doesn't send start request on shutdown instance
20312312 The Python module python-saharaclient should be added to Userland
20514287 wrong vnic label name used for dhcp vnic in evs
20596802 The Python module oslo.middleware should be added to Userland
20596803 The Python module barbicanclient should be added to Userland
20596804 The Python module oslo.context should be added to Userland
20596805 The Python module iniparse should be added to Userland
20596806 The Python module oslo.vmware should be added to Userland
20596807 The Python module osprofiler should be added to Userland
20596808 The Python module oslo.i18n should be added to Userland
20596809 The Python module oslo.utils should be added to Userland
20596811 The Python module ipaddr should be added to Userland
20596812 The Python module glance_store should be added to Userland
20596813 The Python module oslo.serialization should be added to Userland
20596814 The Python module oslo.db should be added to Userland
20596815 The Python module simplegeneric should be added to Userland
20602690 The Python module ldappool should be added to Userland
20602722 The Python module rfc3986 should be added to Userland
20638369 compilemessages.py requires GNU msgfmt without calling gmsgfmt
20715741 cinder 2014.2.2
20715742 glance 2014.2.2
20715743 heat 2014.2.2
20715744 horizon 2014.2.2
20715745 keystone 2014.2.2
20715746 neutron 2014.2.2
20715747 nova 2014.2.2
20715748 swift 2.2.2
20715749 alembic 0.7.4
20715750 amqp 1.4.6
20715751 boto 2.34.0
20715752 ceilometerclient 1.0.12
20715753 cinderclient 1.1.1
20715754 cliff 1.9.0
20715756 django 1.4.19
20715757 django_compressor 1.4
20715758 django_openstack_auth 1.1.9
20715759 eventlet 0.15.2
20715761 glanceclient 0.15.0
20715762 greenlet 0.4.5
20715763 heatclient 0.2.12
20715764 keystoneclient 1.0.0
20715765 kombu 3.0.7
20715766 mysql 1.2.5
20715767 netaddr 0.7.13
20715769 netifaces 0.10.4
20715770 neutronclient 2.3.10
20715771 novaclient 2.20.0
20715772 oslo.config 1.6.0
20715773 py 1.4.26
20715774 pyflakes 0.8.1
20715775 pytest 2.6.4
20715776 pytz 2014.10
20715777 requests 2.6.0
20715778 simplejson 3.6.5
20715779 six 1.9.0
20715780 sqlalchemy-migrate 0.9.1
20715781 sqlalchemy 0.9.8
20715782 stevedore 1.2.0
20715783 swiftclient 2.3.1
20715784 tox 1.8.1
20715785 troveclient 1.0.8
20715786 virtualenv 12.0.7
20715787 websockify 0.6.0
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/components/dnsmasq/patches/03_client_mac.patch Thu Mar 19 14:41:20 2015 -0700
@@ -0,0 +1,43 @@
+Solaris doesn't have an easy way to retrieve the MAC address of the client
+that is soliciting the DHCPv6 IP address. This fix derives the client MAC
+address from the client's EUI-64 based link-local address.
+
+
+*** dnsmasq-2.68/src/dhcp6.c 2013-12-08 07:58:29.000000000 -0800
+--- NEW/src/dhcp6.c 2015-02-23 18:33:30.937299563 -0800
+***************
+*** 231,236 ****
+--- 231,253 ----
+
+ void get_client_mac(struct in6_addr *client, int iface, unsigned char *mac, unsigned int *maclenp, unsigned int *mactypep)
+ {
++ #ifdef HAVE_SOLARIS_NETWORK
++ /* Solaris does not have an easy way to retrieve the MAC address for a given IPv6 address from the kernel.
++ For now the following workaround should work for OpenStack's needs. */
++ uint8_t *addr6;
++
++ *maclenp = ETHER_ADDR_LEN;
++ *mactypep = ARPHRD_ETHER;
++ /* Take the client's EUI-64 based link-local address and convert it to the client's MAC address.
++ For example: from the link-local address fe80::f816:3eff:fe5c:df43 we arrive at fa:16:3e:5c:df:43 */
++ addr6 = client->s6_addr;
++ mac[0] = addr6[8] ^ 0x2;
++ mac[1] = addr6[9];
++ mac[2] = addr6[10];
++ mac[3] = addr6[13];
++ mac[4] = addr6[14];
++ mac[5] = addr6[15];
++ #else
+ /* Recieving a packet from a host does not populate the neighbour
+ cache, so we send a neighbour discovery request if we can't
+ find the sender. Repeat a few times in case of packet loss. */
+***************
+*** 276,281 ****
+--- 293,299 ----
+
+ *maclenp = mac_param.maclen;
+ *mactypep = ARPHRD_ETHER;
++ #endif /* HAVE_SOLARIS_NETWORK */
+ }
+
+ static int find_mac(int family, char *addrp, char *mac, size_t maclen, void *parmv)
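
For reference, a minimal standalone Python sketch of the EUI-64 conversion that the dnsmasq patch above performs in C under HAVE_SOLARIS_NETWORK. The function name and sample address are illustrative only and are not part of the patch or of any delivered package.

import socket

def mac_from_eui64_link_local(addr):
    """Derive a client MAC address from an EUI-64 based IPv6 link-local address.

    e.g. fe80::f816:3eff:fe5c:df43 -> fa:16:3e:5c:df:43
    """
    b = bytearray(socket.inet_pton(socket.AF_INET6, addr))
    # Bytes 8-15 of the address hold the interface identifier; an EUI-64
    # identifier derived from a MAC inserts ff:fe in the middle and flips
    # the universal/local bit, so undo both steps here.
    mac = [b[8] ^ 0x02, b[9], b[10], b[13], b[14], b[15]]
    return ':'.join('%02x' % octet for octet in mac)

if __name__ == '__main__':
    print(mac_from_eui64_link_local('fe80::f816:3eff:fe5c:df43'))  # fa:16:3e:5c:df:43
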
--- a/components/openstack/cinder/Makefile Fri Mar 20 03:13:26 2015 -0700
+++ b/components/openstack/cinder/Makefile Thu Mar 19 14:41:20 2015 -0700
@@ -22,22 +22,24 @@
#
# Copyright (c) 2013, 2015, Oracle and/or its affiliates. All rights reserved.
#
+
include ../../../make-rules/shared-macros.mk
COMPONENT_NAME= cinder
-COMPONENT_CODENAME= havana
-COMPONENT_VERSION= 2013.2.3
+COMPONENT_CODENAME= juno
+COMPONENT_VERSION= 2014.2.2
+COMPONENT_BE_VERSION= 2014.2
COMPONENT_SRC= $(COMPONENT_NAME)-$(COMPONENT_VERSION)
COMPONENT_ARCHIVE= $(COMPONENT_SRC).tar.gz
COMPONENT_ARCHIVE_HASH= \
- sha256:a2740f0a0481139ae21cdb0868bebcce01b9f19832439b7f3056435e75791194
+ sha256:2c779bf9d208163af6c425da9043bbdcb345cebc5c118198482b94062862a117
COMPONENT_ARCHIVE_URL= http://launchpad.net/$(COMPONENT_NAME)/$(COMPONENT_CODENAME)/$(COMPONENT_VERSION)/+download/$(COMPONENT_ARCHIVE)
COMPONENT_SIG_URL= $(COMPONENT_ARCHIVE_URL).asc
COMPONENT_PROJECT_URL= http://www.openstack.org/
COMPONENT_BUGDB= service/cinder
-IPS_COMPONENT_VERSION= 0.$(COMPONENT_VERSION)
+IPS_COMPONENT_VERSION= 0.$(COMPONENT_VERSION)
-TPNO= 17714
+TPNO= 21819
include $(WS_MAKE_RULES)/prep.mk
include $(WS_MAKE_RULES)/setup.py.mk
@@ -49,27 +51,34 @@
# only need to deliver one version. The manifest is parameterized, though.
PYTHON_VERSIONS= 2.6
+PKG_MACROS += COMPONENT_BE_VERSION=$(COMPONENT_BE_VERSION)
PKG_MACROS += PYVER=$(PYTHON_VERSIONS)
+PKG_MACROS += PYV=$(shell echo $(PYTHON_VERSIONS) | tr -d .)
-# cinder-api, cinder-backup, cinder-scheduler, and cinder-volume
-# depend on the cinder-db svc so copy the manifest into the proto
-# directory for pkgdepend to find
+#
+# cinder-api, cinder-backup, cinder-scheduler, and cinder-volume depend
+# on cinder-db and cinder-upgrade, so copy all of the service
+# manifests into the proto directory for pkgdepend(1) to find.
+#
COMPONENT_POST_INSTALL_ACTION += \
- ($(MKDIR) $(PROTO_DIR)/lib/svc/manifest/application/openstack; \
- $(CP) files/cinder-api.xml $(PROTO_DIR)/lib/svc/manifest/application/openstack/; \
- $(CP) files/cinder-backup.xml $(PROTO_DIR)/lib/svc/manifest/application/openstack/; \
- $(CP) files/cinder-db.xml $(PROTO_DIR)/lib/svc/manifest/application/openstack/; \
- $(CP) files/cinder-scheduler.xml $(PROTO_DIR)/lib/svc/manifest/application/openstack/; \
- $(CP) files/cinder-volume.xml $(PROTO_DIR)/lib/svc/manifest/application/openstack/; \
- $(MKDIR) $(PROTO_DIR)/usr/lib/python2.6/vendor-packages/cinder/volume/drivers/solaris; \
+ ($(MKDIR) $(PROTO_DIR)/lib/svc/manifest/application/openstack; \
+ $(CP) \
+ files/cinder-api.xml \
+ files/cinder-backup.xml \
+ files/cinder-db.xml \
+ files/cinder-scheduler.xml \
+ files/cinder-upgrade.xml \
+ files/cinder-volume.xml \
+ $(PROTO_DIR)/lib/svc/manifest/application/openstack; \
+ $(CP) \
+ files/solaris/solarisfc.py \
+ files/solaris/solarisiscsi.py \
+ $(PROTO_DIR)/usr/lib/python2.6/vendor-packages/cinder/brick/initiator; \
+ $(MKDIR) $(PROTO_DIR)/usr/lib/python2.6/vendor-packages/cinder/volume/drivers/solaris; \
$(TOUCH) $(PROTO_DIR)/usr/lib/python2.6/vendor-packages/cinder/volume/drivers/solaris/__init__.py; \
$(CP) files/solaris/zfs.py $(PROTO_DIR)/usr/lib/python2.6/vendor-packages/cinder/volume/drivers/solaris; \
- $(MKDIR) $(PROTO_DIR)/usr/lib/python2.6/vendor-packages/cinder/volume/drivers/zfssa; \
- $(CP) files/zfssa/__init__.py $(PROTO_DIR)/usr/lib/python2.6/vendor-packages/cinder/volume/drivers/zfssa; \
- $(CP) files/zfssa/cinder.akwf $(PROTO_DIR)/usr/lib/python2.6/vendor-packages/cinder/volume/drivers/zfssa; \
- $(CP) files/zfssa/restclient.py $(PROTO_DIR)/usr/lib/python2.6/vendor-packages/cinder/volume/drivers/zfssa; \
- $(CP) files/zfssa/zfssaiscsi.py $(PROTO_DIR)/usr/lib/python2.6/vendor-packages/cinder/volume/drivers/zfssa; \
- $(CP) files/zfssa/zfssarest.py $(PROTO_DIR)/usr/lib/python2.6/vendor-packages/cinder/volume/drivers/zfssa); \
+ $(MKDIR) $(PROTO_DIR)/usr/lib/python2.6/vendor-packages/cinder/volume/drivers/zfssa; \
+ $(CP) files/zfssa/cinder.akwf $(PROTO_DIR)/usr/lib/python2.6/vendor-packages/cinder/volume/drivers/zfssa); \
$(PYTHON) -m compileall $(PROTO_DIR)/$(PYTHON_VENDOR_PACKAGES)
# common targets
@@ -81,10 +90,12 @@
REQUIRED_PACKAGES += library/python/eventlet-26
+REQUIRED_PACKAGES += library/python/iniparse-26
REQUIRED_PACKAGES += library/python/ipython-26
REQUIRED_PACKAGES += library/python/oslo.config-26
+REQUIRED_PACKAGES += library/python/python-mysql-26
+REQUIRED_PACKAGES += library/python/sqlalchemy-26
REQUIRED_PACKAGES += library/python/sqlalchemy-migrate-26
-REQUIRED_PACKAGES += runtime/python-26
REQUIRED_PACKAGES += system/core-os
REQUIRED_PACKAGES += system/file-system/zfs
REQUIRED_PACKAGES += system/storage/fc-utilities
--- a/components/openstack/cinder/cinder.p5m Fri Mar 20 03:13:26 2015 -0700
+++ b/components/openstack/cinder/cinder.p5m Thu Mar 19 14:41:20 2015 -0700
@@ -20,7 +20,7 @@
#
#
-# Copyright (c) 2013, 2014, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2013, 2015, Oracle and/or its affiliates. All rights reserved.
#
set name=pkg.fmri \
@@ -28,7 +28,7 @@
set name=pkg.summary value="OpenStack Cinder (Block Storage Service)"
set name=pkg.description \
value="OpenStack Cinder provides an infrastructure for managing block storage volumes in OpenStack. It allows block devices to be exposed and connected to compute instances for expanded storage, better performance and integration with enterprise storage platforms."
-set name=pkg.human-version value="Havana $(COMPONENT_VERSION)"
+set name=pkg.human-version value="Juno $(COMPONENT_VERSION)"
set name=com.oracle.info.description \
value="Cinder, the OpenStack block storage service"
set name=com.oracle.info.tpno value=$(TPNO)
@@ -40,12 +40,14 @@
set name=info.source-url value=$(COMPONENT_ARCHIVE_URL)
set name=info.upstream value="OpenStack <[email protected]>"
set name=info.upstream-url value=$(COMPONENT_PROJECT_URL)
+set name=openstack.upgrade-id reboot-needed=true value=$(COMPONENT_BE_VERSION)
set name=org.opensolaris.arc-caseid value=PSARC/2013/350 value=PSARC/2014/054 \
- value=PSARC/2014/208
+ value=PSARC/2014/208 value=PSARC/2015/110
set name=org.opensolaris.consolidation value=$(CONSOLIDATION)
+#
dir path=etc/cinder owner=cinder group=cinder mode=0700
-file files/api-paste.ini path=etc/cinder/api-paste.ini owner=cinder \
- group=cinder mode=0644 overlay=allow preserve=renamenew
+file path=etc/cinder/api-paste.ini owner=cinder group=cinder mode=0644 \
+ overlay=allow preserve=renamenew
file files/cinder.conf path=etc/cinder/cinder.conf owner=cinder group=cinder \
mode=0644 overlay=allow preserve=renamenew
file etc/cinder/logging_sample.conf path=etc/cinder/logging.conf owner=cinder \
@@ -64,13 +66,14 @@
file path=lib/svc/manifest/application/openstack/cinder-backup.xml
file path=lib/svc/manifest/application/openstack/cinder-db.xml
file path=lib/svc/manifest/application/openstack/cinder-scheduler.xml
+file path=lib/svc/manifest/application/openstack/cinder-upgrade.xml
file path=lib/svc/manifest/application/openstack/cinder-volume.xml
file files/cinder-api path=lib/svc/method/cinder-api
file files/cinder-backup path=lib/svc/method/cinder-backup
file files/cinder-scheduler path=lib/svc/method/cinder-scheduler
+file files/cinder-upgrade path=lib/svc/method/cinder-upgrade
file files/cinder-volume path=lib/svc/method/cinder-volume
file files/cinder-volume-setup path=lib/svc/method/cinder-volume-setup
-file path=usr/bin/cinder-clear-rabbit-queues
file path=usr/bin/cinder-manage pkg.depend.bypass-generate=.*/bpython.*
file usr/bin/cinder-api path=usr/lib/cinder/cinder-api mode=0555
file usr/bin/cinder-backup path=usr/lib/cinder/cinder-backup mode=0555
@@ -83,6 +86,7 @@
file path=usr/lib/python$(PYVER)/vendor-packages/cinder-$(COMPONENT_VERSION)-py$(PYVER).egg-info/dependency_links.txt
file path=usr/lib/python$(PYVER)/vendor-packages/cinder-$(COMPONENT_VERSION)-py$(PYVER).egg-info/entry_points.txt
file path=usr/lib/python$(PYVER)/vendor-packages/cinder-$(COMPONENT_VERSION)-py$(PYVER).egg-info/not-zip-safe
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder-$(COMPONENT_VERSION)-py$(PYVER).egg-info/pbr.json
file path=usr/lib/python$(PYVER)/vendor-packages/cinder-$(COMPONENT_VERSION)-py$(PYVER).egg-info/requires.txt
file path=usr/lib/python$(PYVER)/vendor-packages/cinder-$(COMPONENT_VERSION)-py$(PYVER).egg-info/top_level.txt
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/__init__.py
@@ -93,6 +97,9 @@
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/admin_actions.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/availability_zones.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/backups.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/cgsnapshots.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/consistencygroups.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/extended_services.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/extended_snapshot_attributes.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/hosts.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/image_create.py
@@ -100,18 +107,23 @@
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/quota_classes.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/quotas.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/scheduler_hints.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/scheduler_stats.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/services.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/snapshot_actions.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/types_extra_specs.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/types_manage.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/used_limits.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/volume_actions.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/volume_encryption_metadata.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/volume_host_attribute.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/volume_image_metadata.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/volume_manage.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/volume_mig_status_attribute.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/volume_replication.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/volume_tenant_attribute.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/volume_transfer.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/volume_type_encryption.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/contrib/volume_unmanage.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/extensions.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/middleware/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/middleware/auth.py
@@ -127,6 +139,10 @@
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/schemas/v1.1/extensions.rng
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/schemas/v1.1/limits.rng
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/schemas/v1.1/metadata.rng
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/schemas/v1.1/qos_association.rng
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/schemas/v1.1/qos_associations.rng
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/schemas/v1.1/qos_spec.rng
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/schemas/v1.1/qos_specs.rng
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/sizelimit.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/urlmap.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/v1/__init__.py
@@ -151,8 +167,11 @@
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/views/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/views/availability_zones.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/views/backups.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/views/cgsnapshots.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/views/consistencygroups.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/views/limits.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/views/qos_specs.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/views/scheduler_stats.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/views/transfers.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/views/types.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/api/views/versions.py
@@ -175,10 +194,10 @@
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/brick/initiator/host_driver.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/brick/initiator/linuxfc.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/brick/initiator/linuxscsi.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/brick/initiator/solarisfc.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/brick/initiator/solarisiscsi.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/brick/iscsi/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/brick/iscsi/iscsi.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/brick/iser/__init__.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/brick/iser/iser.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/brick/local_dev/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/brick/local_dev/lvm.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/brick/remotefs/__init__.py
@@ -189,6 +208,8 @@
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/compute/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/compute/aggregate_states.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/compute/nova.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/consistencygroup/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/consistencygroup/api.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/context.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/db/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/db/api.py
@@ -227,140 +248,90 @@
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/db/sqlalchemy/migrate_repo/versions/019_add_migration_status.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/db/sqlalchemy/migrate_repo/versions/020_add_volume_admin_metadata_table.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/db/sqlalchemy/migrate_repo/versions/021_add_default_quota_class.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/db/sqlalchemy/migrate_repo/versions/022_add_reason_column_to_service.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/db/sqlalchemy/migrate_repo/versions/023_add_expire_reservations_index.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/db/sqlalchemy/migrate_repo/versions/024_add_replication_support.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/db/sqlalchemy/migrate_repo/versions/025_add_consistencygroup.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/db/sqlalchemy/migrate_repo/versions/026_add_consistencygroup_quota_class.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/db/sqlalchemy/migrate_repo/versions/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/db/sqlalchemy/migration.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/db/sqlalchemy/models.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/exception.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/flow_utils.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/hacking/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/hacking/checks.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/i18n.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/image/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/image/glance.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/image/image_utils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/keymgr/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/keymgr/barbican.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/keymgr/conf_key_mgr.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/keymgr/key.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/keymgr/key_mgr.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/keymgr/not_implemented_key_mgr.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/ar/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/bg_BG/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/bs/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/ca/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/cs/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/da/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/de/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/en_AU/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/en_GB/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/en_US/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/es/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/es_MX/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/fi_FI/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/fil/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/fr/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/hi/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/hr/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/hu/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/id/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/it/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/it_IT/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/ja/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/ka_GE/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/kn/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/ko/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/ko_KR/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/ms/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/nb/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/ne/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/nl_NL/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/pl_PL/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/pt/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/pt_BR/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/ro/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/ru/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/ru_RU/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/sk/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/sl_SI/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/sw_KE/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/tl/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/tl_PH/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/tr/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/tr_TR/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/uk/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/vi_VN/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/zh_CN/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/zh_HK/LC_MESSAGES/cinder.po
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/locale/zh_TW/LC_MESSAGES/cinder.po
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/manager.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/README
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/config/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/config/generator.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/context.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/db/__init__.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/db/api.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/db/exception.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/db/sqlalchemy/__init__.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/db/sqlalchemy/models.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/db/sqlalchemy/session.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/db/sqlalchemy/utils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/eventlet_backdoor.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/excutils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/fileutils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/gettextutils.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/imageutils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/importutils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/jsonutils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/local.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/lockutils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/log.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/log_handler.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/loopingcall.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/middleware/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/middleware/base.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/middleware/request_id.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/network_utils.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/notifier/__init__.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/notifier/api.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/notifier/log_notifier.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/notifier/no_op_notifier.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/notifier/rabbit_notifier.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/notifier/rpc_notifier.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/notifier/rpc_notifier2.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/notifier/test_notifier.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/periodic_task.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/policy.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/processutils.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rootwrap/__init__.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rootwrap/cmd.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rootwrap/filters.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rootwrap/wrapper.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rpc/__init__.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rpc/amqp.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rpc/common.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rpc/dispatcher.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rpc/impl_fake.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rpc/impl_kombu.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rpc/impl_qpid.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rpc/impl_zmq.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rpc/matchmaker.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rpc/matchmaker_redis.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rpc/proxy.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rpc/service.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/rpc/zmq_receiver.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/request_utils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/scheduler/__init__.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/scheduler/filter.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/scheduler/base_filter.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/scheduler/base_handler.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/scheduler/base_weight.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/scheduler/filters/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/scheduler/filters/availability_zone_filter.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/scheduler/filters/capabilities_filter.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/scheduler/filters/extra_specs_ops.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/scheduler/filters/ignore_attempted_hosts_filter.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/scheduler/filters/json_filter.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/scheduler/weight.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/scheduler/weights/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/service.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/sslutils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/strutils.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/systemd.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/test.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/threadgroup.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/timeutils.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/units.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/uuidutils.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/openstack/common/versionutils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/policy.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/quota.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/quota_utils.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/replication/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/replication/api.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/rpc.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/__init__.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/chance.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/driver.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/filter_scheduler.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/filters/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/filters/affinity_filter.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/filters/capacity_filter.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/filters/retry_filter.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/flows/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/flows/create_volume.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/host_manager.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/manager.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/rpcapi.py
@@ -368,20 +339,13 @@
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/simple.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/weights/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/weights/capacity.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/weights/chance.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/scheduler/weights/volume_number.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/service.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/taskflow/__init__.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/taskflow/decorators.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/taskflow/exceptions.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/taskflow/patterns/__init__.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/taskflow/patterns/base.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/taskflow/patterns/linear_flow.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/taskflow/states.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/taskflow/task.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/taskflow/utils.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/ssh_utils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/test.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/transfer/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/transfer/api.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/units.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/utils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/version.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/__init__.py
@@ -390,23 +354,126 @@
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/driver.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/block_device.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/coraid.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/datera.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/emc/__init__.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/emc/emc_smis_common.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/emc/emc_smis_iscsi.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/emc/emc_cli_fc.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/emc/emc_cli_iscsi.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/emc/emc_vmax_common.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/emc/emc_vmax_fast.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/emc/emc_vmax_fc.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/emc/emc_vmax_iscsi.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/emc/emc_vmax_masking.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/emc/emc_vmax_provision.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/emc/emc_vmax_utils.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/emc/emc_vnx_cli.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/emc/xtremio.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/eqlx.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/fujitsu_eternus_dx_common.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/fujitsu_eternus_dx_fc.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/fujitsu_eternus_dx_iscsi.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/fusionio/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/fusionio/ioControl.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/glusterfs.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/hds/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/hds/hds.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/hds/hnas_backend.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/hds/hus_backend.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/hds/iscsi.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/hds/nfs.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/hitachi/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/hitachi/hbsd_basiclib.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/hitachi/hbsd_common.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/hitachi/hbsd_fc.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/hitachi/hbsd_horcm.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/hitachi/hbsd_iscsi.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/hitachi/hbsd_snm2.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/huawei/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/huawei/huawei_dorado.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/huawei/huawei_hvs.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/huawei/huawei_t.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/huawei/huawei_utils.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/huawei/rest_common.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/huawei/ssh_common.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/ibm/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/ibm/gpfs.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/ibm/ibmnas.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/ibm/storwize_svc/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/ibm/storwize_svc/helpers.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/ibm/storwize_svc/replication.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/ibm/storwize_svc/ssh.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/ibm/xiv_ds8k.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/lvm.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/netapp/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/netapp/api.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/netapp/common.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/netapp/eseries/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/netapp/eseries/client.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/netapp/eseries/iscsi.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/netapp/iscsi.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/netapp/nfs.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/netapp/options.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/netapp/ssc_utils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/netapp/utils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/nexenta/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/nexenta/iscsi.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/nexenta/jsonrpc.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/nexenta/nfs.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/nexenta/options.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/nexenta/utils.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/nexenta/volume.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/nfs.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/nimble.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/prophetstor/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/prophetstor/dpl_fc.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/prophetstor/dpl_iscsi.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/prophetstor/dplcommon.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/prophetstor/options.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/pure.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/rbd.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/remotefs.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/san/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/san/hp/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/san/hp/hp_3par_common.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/san/hp/hp_3par_fc.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/san/hp/hp_3par_iscsi.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/san/hp/hp_lefthand_cliq_proxy.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/san/hp/hp_lefthand_iscsi.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/san/hp/hp_lefthand_rest_proxy.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/san/hp/hp_msa_client.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/san/hp/hp_msa_common.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/san/hp/hp_msa_fc.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/san/san.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/san/solaris.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/scality.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/sheepdog.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/smbfs.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/solaris/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/solaris/zfs.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/solidfire.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/api.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/datastore.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/error_util.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/io_util.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/pbm.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/read_write_util.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/vim.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/vim_util.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/vmdk.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/vmware_images.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/volumeops.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/wsdl/5.5/core-types.xsd
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/wsdl/5.5/pbm-messagetypes.xsd
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/wsdl/5.5/pbm-types.xsd
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/wsdl/5.5/pbm.wsdl
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/vmware/wsdl/5.5/pbmService.wsdl
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/windows/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/windows/constants.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/windows/remotefs.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/windows/smbfs.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/windows/vhdutils.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/windows/windows.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/windows/windows_utils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/zadara.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/zfssa/__init__.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/zfssa/cinder.akwf
@@ -414,24 +481,69 @@
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/zfssa/zfssaiscsi.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/drivers/zfssa/zfssarest.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/flows/__init__.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/flows/base.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/flows/create_volume/__init__.py
-file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/flows/utils.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/flows/api/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/flows/api/create_volume.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/flows/common.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/flows/manager/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/flows/manager/create_volume.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/flows/manager/manage_existing.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/iscsi.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/manager.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/qos_specs.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/rpcapi.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/utils.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/volume/volume_types.py
file path=usr/lib/python$(PYVER)/vendor-packages/cinder/wsgi.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/drivers/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/drivers/brocade/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/drivers/brocade/brcd_fabric_opts.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/drivers/brocade/brcd_fc_san_lookup_service.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/drivers/brocade/brcd_fc_zone_client_cli.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/drivers/brocade/brcd_fc_zone_driver.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/drivers/brocade/fc_zone_constants.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/drivers/cisco/__init__.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/drivers/cisco/cisco_fabric_opts.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/drivers/cisco/cisco_fc_san_lookup_service.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/drivers/cisco/cisco_fc_zone_client_cli.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/drivers/cisco/cisco_fc_zone_driver.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/drivers/cisco/fc_zone_constants.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/drivers/fc_zone_driver.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/fc_common.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/fc_san_lookup_service.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/fc_zone_manager.py
+file path=usr/lib/python$(PYVER)/vendor-packages/cinder/zonemanager/utils.py
dir path=var/lib/cinder owner=cinder group=cinder mode=0700
+#
group groupname=cinder gid=81
user username=cinder ftpuser=false gcos-field="OpenStack Cinder" group=cinder \
home-dir=/var/lib/cinder uid=81
+#
license cinder.license license="Apache v2.0"
# force a group dependency on the optional anyjson; pkgdepend work is needed to
# flush this out.
-depend type=group fmri=library/python/anyjson-26
+depend type=group fmri=library/python/anyjson-$(PYV)
+
+# force a group dependency on the optional netaddr; pkgdepend work is needed to
+# flush this out.
+depend type=group fmri=library/python/netaddr-$(PYV)
+
+# force a group dependency on the optional pywbem; pkgdepend work is needed to
+# flush this out.
+depend type=group fmri=library/python/pywbem-$(PYV)
+
+# force a group dependency on the optional requests; pkgdepend work is needed to
+# flush this out.
+depend type=group fmri=library/python/requests-$(PYV)
+
+# force a group dependency on the optional simplejson; pkgdepend work is needed
+# to flush this out.
+depend type=group fmri=library/python/simplejson-$(PYV)
+
+# force a group dependency on the optional suds; pkgdepend work is needed to
+# flush this out.
+depend type=group fmri=library/python/suds-$(PYV)
# force a dependency on package delivering fcinfo(1M)
depend type=require fmri=__TBD pkg.debug.depend.file=usr/sbin/fcinfo
@@ -445,62 +557,86 @@
# force a dependency on package delivering zfs(1M)
depend type=require fmri=__TBD pkg.debug.depend.file=usr/sbin/zfs
-# force a dependency on pywbem; pkgdepend work is needed to flush this out.
-# (dependency is for EMC volume driver)
-depend type=require fmri=library/python-2/pywbem
+# force a dependency on argparse; pkgdepend work is needed to flush this out.
+depend type=require fmri=library/python/argparse-$(PYV)
# force a dependency on babel; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/babel-26
+depend type=require fmri=library/python/babel-$(PYV)
+
+# force a dependency on barbicanclient; pkgdepend work is needed to flush this
+# out.
+depend type=require fmri=library/python/barbicanclient-$(PYV)
# force a dependency on glanceclient; pkgdepend work is needed to flush this
# out.
-depend type=require fmri=library/python/glanceclient-26
+depend type=require fmri=library/python/glanceclient-$(PYV)
# force a dependency on greenlet; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/greenlet-26
+depend type=require fmri=library/python/greenlet-$(PYV)
# force a dependency on iso8601; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/iso8601-26
+depend type=require fmri=library/python/iso8601-$(PYV)
-# force a dependency on keystoneclient; used via a paste.deploy filter
-depend type=require fmri=library/python/keystoneclient-26
+# force a dependency on keystoneclient; pkgdepend work is needed to flush this
+# out.
+depend type=require fmri=library/python/keystoneclient-$(PYV)
-# force a dependency on kombu; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/kombu-26
+# force a dependency on keystonemiddleware; used via a paste.deploy filter
+depend type=require fmri=library/python/keystonemiddleware-$(PYV)
# force a dependency on lxml; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/lxml-26
+depend type=require fmri=library/python/lxml-$(PYV)
# force a dependency on novaclient; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/novaclient-26
+depend type=require fmri=library/python/novaclient-$(PYV)
+
+# force a dependency on oslo.db; pkgdepend work is needed to flush this out.
+depend type=require fmri=library/python/oslo.db-$(PYV)
+
+# force a dependency on oslo.i18n; pkgdepend work is needed to flush this out.
+depend type=require fmri=library/python/oslo.i18n-$(PYV)
+
+# force a dependency on oslo.messaging; pkgdepend work is needed to flush this
+# out.
+depend type=require fmri=library/python/oslo.messaging-$(PYV)
+
+# force a dependency on osprofiler; pkgdepend work is needed to flush this out.
+depend type=require fmri=library/python/osprofiler-$(PYV)
# force a dependency on paste; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/paste-26
+depend type=require fmri=library/python/paste-$(PYV)
# force a dependency on paste.deploy; pkgdepend work is needed to flush this
# out.
-depend type=require fmri=library/python/paste.deploy-26
+depend type=require fmri=library/python/paste.deploy-$(PYV)
# force a dependency on pbr; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/pbr-26
+depend type=require fmri=library/python/pbr-$(PYV)
# force a dependency on routes; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/routes-26
+depend type=require fmri=library/python/routes-$(PYV)
# force a dependency on setuptools; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/setuptools-26
+depend type=require fmri=library/python/setuptools-$(PYV)
# force a dependency on six; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/six-26
+depend type=require fmri=library/python/six-$(PYV)
# force a dependency on sqlalchemy; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/sqlalchemy-26
+depend type=require fmri=library/python/sqlalchemy-$(PYV)
# force a dependency on stevedore; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/stevedore-26
+depend type=require fmri=library/python/stevedore-$(PYV)
# force a dependency on swiftclient; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/swiftclient-26
+depend type=require fmri=library/python/swiftclient-$(PYV)
+
+# force a dependency on taskflow; pkgdepend work is needed to flush this out.
+depend type=require fmri=library/python/taskflow-$(PYV)
# force a dependency on webob; pkgdepend work is needed to flush this out.
-depend type=require fmri=library/python/webob-26
+depend type=require fmri=library/python/webob-$(PYV)
+
+# force a dependency on the Solaris Install library; pkgdepend work is needed to
+# flush this out.
+depend type=require fmri=system/library/install
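
Note on the manifest hunks above: every hard-coded Python 2.6 suffix (for
example library/python/babel-26) is replaced by the $(PYV) build macro, the
optional backend modules (anyjson, netaddr, pywbem, requests, simplejson, suds)
become group dependencies, and new require dependencies are added for the Juno
libraries (oslo.db, oslo.i18n, oslo.messaging, osprofiler, taskflow,
keystonemiddleware, barbicanclient) and the Solaris Install library.  A minimal
stand-alone sketch of what the macro substitution amounts to (the real
expansion is performed by the Userland build machinery at package-publication
time; the value "27" below is only an assumption):

    # Illustrative only; the actual substitution happens during package
    # publication, not at run time.
    PYV = "27"   # assumed Python version suffix for the build

    action = "depend type=require fmri=library/python/keystonemiddleware-$(PYV)"
    print(action.replace("$(PYV)", PYV))
    # depend type=require fmri=library/python/keystonemiddleware-27
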
--- a/components/openstack/cinder/files/api-paste.ini Fri Mar 20 03:13:26 2015 -0700
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,61 +0,0 @@
-#############
-# OpenStack #
-#############
-
-[composite:osapi_volume]
-use = call:cinder.api:root_app_factory
-/: apiversions
-/v1: openstack_volume_api_v1
-/v2: openstack_volume_api_v2
-
-[composite:openstack_volume_api_v1]
-use = call:cinder.api.middleware.auth:pipeline_factory
-noauth = faultwrap sizelimit noauth apiv1
-keystone = faultwrap sizelimit authtoken keystonecontext apiv1
-keystone_nolimit = faultwrap sizelimit authtoken keystonecontext apiv1
-
-[composite:openstack_volume_api_v2]
-use = call:cinder.api.middleware.auth:pipeline_factory
-noauth = faultwrap sizelimit noauth apiv2
-keystone = faultwrap sizelimit authtoken keystonecontext apiv2
-keystone_nolimit = faultwrap sizelimit authtoken keystonecontext apiv2
-
-[filter:faultwrap]
-paste.filter_factory = cinder.api.middleware.fault:FaultWrapper.factory
-
-[filter:noauth]
-paste.filter_factory = cinder.api.middleware.auth:NoAuthMiddleware.factory
-
-[filter:sizelimit]
-paste.filter_factory = cinder.api.middleware.sizelimit:RequestBodySizeLimiter.factory
-
-[app:apiv1]
-paste.app_factory = cinder.api.v1.router:APIRouter.factory
-
-[app:apiv2]
-paste.app_factory = cinder.api.v2.router:APIRouter.factory
-
-[pipeline:apiversions]
-pipeline = faultwrap osvolumeversionapp
-
-[app:osvolumeversionapp]
-paste.app_factory = cinder.api.versions:Versions.factory
-
-##########
-# Shared #
-##########
-
-[filter:keystonecontext]
-paste.filter_factory = cinder.api.middleware.auth:CinderKeystoneContext.factory
-
-[filter:authtoken]
-paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
-auth_uri = http://127.0.0.1:5000/v2.0
-identity_uri = http://127.0.0.1:35357
-admin_tenant_name = %SERVICE_TENANT_NAME%
-admin_user = %SERVICE_USER%
-admin_password = %SERVICE_PASSWORD%
-# signing_dir is configurable, but the default behavior of the authtoken
-# middleware should be sufficient. It will create a temporary directory
-# in the home directory for the user the cinder process is running as.
-signing_dir = /var/lib/cinder/keystone-signing
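
The removed api-paste.ini above wired its [filter:authtoken] section to
keystoneclient.middleware.auth_token; the manifest hunks earlier in this change
move the "used via a paste.deploy filter" annotation from keystoneclient to
keystonemiddleware accordingly.  A hypothetical smoke test (not part of this
changeset, and assuming keystonemiddleware is installed) showing that the Juno
authtoken filter factory resolves from the new package:

    # Hypothetical check only: confirm the paste.deploy authtoken entry point
    # now comes from keystonemiddleware rather than python-keystoneclient.
    from keystonemiddleware.auth_token import filter_factory

    print(filter_factory)
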
--- a/components/openstack/cinder/files/cinder-api.xml Fri Mar 20 03:13:26 2015 -0700
+++ b/components/openstack/cinder/files/cinder-api.xml Thu Mar 19 14:41:20 2015 -0700
@@ -1,7 +1,7 @@
<?xml version="1.0" ?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<!--
- Copyright (c) 2013, 2014, Oracle and/or its affiliates. All rights reserved.
+ Copyright (c) 2013, 2015, Oracle and/or its affiliates. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
@@ -30,6 +30,12 @@
<service_fmri value='svc:/milestone/multi-user:default' />
</dependency>
+ <dependency name='upgrade' grouping='require_all' restart_on='none'
+ type='service'>
+ <service_fmri
+ value='svc:/application/openstack/cinder/cinder-upgrade' />
+ </dependency>
+
<!-- create a dependency on the cinder_db service so the cinder
services do not collide when creating the database -->
<dependency name='cinder_db' grouping='optional_all' restart_on='error'
@@ -42,6 +48,11 @@
<service_fmri value='svc:/network/ntp'/>
</dependency>
+ <dependency name='rabbitmq' grouping='optional_all' restart_on='none'
+ type='service'>
+ <service_fmri value='svc:/network/amqp/rabbitmq'/>
+ </dependency>
+
<logfile_attributes permissions='600'/>
<exec_method timeout_seconds="60" type="method" name="start"
@@ -72,7 +83,7 @@
<description>
<loctext xml:lang="C">
cinder-api is a server daemon that provides the Cinder API service in
- order to provide volume management for the OpenStack Compute service.
+ order to provide volume management for the OpenStack Compute service.
</loctext>
</description>
</template>
--- a/components/openstack/cinder/files/cinder-backup Fri Mar 20 03:13:26 2015 -0700
+++ b/components/openstack/cinder/files/cinder-backup Thu Mar 19 14:41:20 2015 -0700
@@ -1,6 +1,6 @@
#!/usr/bin/python2.6
-# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2014, 2015, Oracle and/or its affiliates. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@@ -20,7 +20,7 @@
def start():
- smf_include.smf_subprocess("/usr/lib/cinder/cinder-backup")
+ smf_include.smf_subprocess("/usr/bin/pfexec /usr/lib/cinder/cinder-backup")
if __name__ == "__main__":
os.putenv("LC_ALL", "C")
--- a/components/openstack/cinder/files/cinder-backup.xml Fri Mar 20 03:13:26 2015 -0700
+++ b/components/openstack/cinder/files/cinder-backup.xml Thu Mar 19 14:41:20 2015 -0700
@@ -1,7 +1,7 @@
<?xml version="1.0" ?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<!--
- Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved.
+ Copyright (c) 2014, 2015, Oracle and/or its affiliates. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
@@ -30,6 +30,12 @@
<service_fmri value='svc:/milestone/multi-user:default' />
</dependency>
+ <dependency name='upgrade' grouping='require_all' restart_on='none'
+ type='service'>
+ <service_fmri
+ value='svc:/application/openstack/cinder/cinder-upgrade' />
+ </dependency>
+
<!-- create a dependency on the cinder_db service so the cinder
services do not collide when creating the database -->
<dependency name='cinder_db' grouping='optional_all' restart_on='error'
@@ -42,6 +48,11 @@
<service_fmri value='svc:/network/ntp'/>
</dependency>
+ <dependency name='rabbitmq' grouping='optional_all' restart_on='none'
+ type='service'>
+ <service_fmri value='svc:/network/amqp/rabbitmq'/>
+ </dependency>
+
<logfile_attributes permissions='600'/>
<exec_method timeout_seconds="60" type="method" name="start"
--- a/components/openstack/cinder/files/cinder-db.xml Fri Mar 20 03:13:26 2015 -0700
+++ b/components/openstack/cinder/files/cinder-db.xml Thu Mar 19 14:41:20 2015 -0700
@@ -1,7 +1,7 @@
<?xml version="1.0" ?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<!--
- Copyright (c) 2013, 2014, Oracle and/or its affiliates. All rights reserved.
+ Copyright (c) 2013, 2015, Oracle and/or its affiliates. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
@@ -30,11 +30,22 @@
<service_fmri value='svc:/milestone/multi-user:default' />
</dependency>
+ <dependency name='upgrade' grouping='require_all' restart_on='none'
+ type='service'>
+ <service_fmri
+ value='svc:/application/openstack/cinder/cinder-upgrade' />
+ </dependency>
+
<dependency name='ntp' grouping='optional_all' restart_on='none'
type='service'>
<service_fmri value='svc:/network/ntp'/>
</dependency>
+ <dependency name='mysql' grouping='optional_all' restart_on='none'
+ type='service'>
+ <service_fmri value='svc:/application/database/mysql'/>
+ </dependency>
+
<logfile_attributes permissions='600'/>
<exec_method timeout_seconds="60" type="method" name="start"
--- a/components/openstack/cinder/files/cinder-scheduler.xml Fri Mar 20 03:13:26 2015 -0700
+++ b/components/openstack/cinder/files/cinder-scheduler.xml Thu Mar 19 14:41:20 2015 -0700
@@ -1,7 +1,7 @@
<?xml version="1.0" ?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<!--
- Copyright (c) 2013, 2014, Oracle and/or its affiliates. All rights reserved.
+ Copyright (c) 2013, 2015, Oracle and/or its affiliates. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
@@ -30,6 +30,12 @@
<service_fmri value='svc:/milestone/multi-user:default' />
</dependency>
+ <dependency name='upgrade' grouping='require_all' restart_on='none'
+ type='service'>
+ <service_fmri
+ value='svc:/application/openstack/cinder/cinder-upgrade' />
+ </dependency>
+
<!-- create a dependency on the cinder_db service so the cinder
services do not collide when creating the database -->
<dependency name='cinder_db' grouping='optional_all' restart_on='error'
@@ -42,6 +48,11 @@
<service_fmri value='svc:/network/ntp'/>
</dependency>
+ <dependency name='rabbitmq' grouping='optional_all' restart_on='none'
+ type='service'>
+ <service_fmri value='svc:/network/amqp/rabbitmq'/>
+ </dependency>
+
<logfile_attributes permissions='600'/>
<exec_method timeout_seconds="60" type="method" name="start"
@@ -72,7 +83,7 @@
<description>
<loctext xml:lang="C">
cinder-scheduler picks a cinder-volume node to host the block storage
- requested by the OpenStack Compute service.
+ requested by the OpenStack Compute service.
</loctext>
</description>
</template>
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/components/openstack/cinder/files/cinder-upgrade Thu Mar 19 14:41:20 2015 -0700
@@ -0,0 +1,261 @@
+#!/usr/bin/python2.6
+
+# Copyright (c) 2015, Oracle and/or its affiliates. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from ConfigParser import NoOptionError
+from datetime import datetime
+import errno
+import glob
+import os
+import shutil
+from subprocess import check_call, Popen, PIPE
+import sys
+import time
+import traceback
+
+import iniparse
+import smf_include
+import sqlalchemy
+
+
+CINDER_CONF_MAPPINGS = {
+ # Deprecated group/name
+ ('DEFAULT', 'rabbit_durable_queues'): ('DEFAULT', 'amqp_durable_queues'),
+ ('rpc_notifier2', 'topics'): ('DEFAULT', 'notification_topics'),
+ ('DEFAULT', 'osapi_compute_link_prefix'):
+ ('DEFAULT', 'osapi_volume_base_URL'),
+ ('DEFAULT', 'backup_service'): ('DEFAULT', 'backup_driver'),
+ ('DEFAULT', 'pybasedir'): ('DEFAULT', 'state_path'),
+ ('DEFAULT', 'log_config'): ('DEFAULT', 'log_config_append'),
+ ('DEFAULT', 'logfile'): ('DEFAULT', 'log_file'),
+ ('DEFAULT', 'logdir'): ('DEFAULT', 'log_dir'),
+ ('DEFAULT', 'num_iscsi_scan_tries'):
+ ('DEFAULT', 'num_volume_device_scan_tries'),
+ ('DEFAULT', 'db_backend'): ('database', 'backend'),
+ ('DEFAULT', 'sql_connection'): ('database', 'connection'),
+ ('DATABASE', 'sql_connection'): ('database', 'connection'),
+ ('sql', 'connection'): ('database', 'connection'),
+ ('DEFAULT', 'sql_idle_timeout'): ('database', 'idle_timeout'),
+ ('DATABASE', 'sql_idle_timeout'): ('database', 'idle_timeout'),
+ ('sql', 'idle_timeout'): ('database', 'idle_timeout'),
+ ('DEFAULT', 'sql_min_pool_size'): ('database', 'min_pool_size'),
+ ('DATABASE', 'sql_min_pool_size'): ('database', 'min_pool_size'),
+ ('DEFAULT', 'sql_max_pool_size'): ('database', 'max_pool_size'),
+ ('DATABASE', 'sql_max_pool_size'): ('database', 'max_pool_size'),
+ ('DEFAULT', 'sql_max_retries'): ('database', 'max_retries'),
+ ('DATABASE', 'sql_max_retries'): ('database', 'max_retries'),
+ ('DEFAULT', 'sql_retry_interval'): ('database', 'retry_interval'),
+ ('DATABASE', 'reconnect_interval'): ('database', 'retry_interval'),
+ ('DEFAULT', 'sql_max_overflow'): ('database', 'max_overflow'),
+ ('DATABASE', 'sqlalchemy_max_overflow'): ('database', 'max_overflow'),
+ ('DEFAULT', 'sql_connection_debug'): ('database', 'connection_debug'),
+ ('DEFAULT', 'sql_connection_trace'): ('database', 'connection_trace'),
+ ('DATABASE', 'sqlalchemy_pool_timeout'): ('database', 'pool_timeout'),
+ ('DEFAULT', 'dbapi_use_tpool'): ('database', 'use_tpool'),
+ ('DEFAULT', 'memcache_servers'):
+ ('keystone_authtoken', 'memcached_servers'),
+ ('DEFAULT', 'matchmaker_ringfile'): ('matchmaker_ring', 'ringfile'),
+}
+
+
+def update_mapping(section, key, mapping):
+    """ Look for deprecated variables and, if found, convert them to the new
+        section/key.
+ """
+
+ if (section, key) in mapping:
+ print "Deprecated value found: [%s] %s" % (section, key)
+ section, key = mapping[(section, key)]
+ if section is None and key is None:
+ print "Removing from configuration"
+ else:
+ print "Updating to: [%s] %s" % (section, key)
+ return section, key
+
+
+def alter_mysql_tables(engine):
+ """ Convert MySQL tables to use utf8
+ """
+
+ import MySQLdb
+
+ for _none in range(5):
+ try:
+ db = MySQLdb.connect(host=engine.url.host,
+ user=engine.url.username,
+ passwd=engine.url.password,
+ db=engine.url.database)
+ break
+ except MySQLdb.OperationalError as err:
+ # mysql is not ready. sleep for 2 more seconds
+ time.sleep(2)
+ else:
+ print "Unable to connect to MySQL: %s" % err
+ print ("Please verify MySQL is properly configured and online "
+ "before using svcadm(1M) to clear this service.")
+ sys.exit(smf_include.SMF_EXIT_ERR_FATAL)
+
+ cursor = db.cursor()
+ cursor.execute("ALTER DATABASE %s CHARACTER SET = 'utf8'" %
+ engine.url.database)
+ cursor.execute("ALTER DATABASE %s COLLATE = 'utf8_general_ci'" %
+ engine.url.database)
+ cursor.execute("SHOW tables")
+ res = cursor.fetchall()
+ if res:
+ cursor.execute("SET foreign_key_checks = 0")
+ for item in res:
+ cursor.execute("ALTER TABLE %s.%s CONVERT TO "
+ "CHARACTER SET 'utf8', COLLATE 'utf8_general_ci'"
+ % (engine.url.database, item[0]))
+ cursor.execute("SET foreign_key_checks = 1")
+ db.commit()
+ db.close()
+
+
+def modify_conf(old_file, mapping=None):
+ """ Copy over all uncommented options from the old configuration file. In
+ addition, look for deprecated section/keys and convert them to the new
+ section/key.
+ """
+
+ new_file = old_file + '.new'
+
+ # open the previous version
+ old = iniparse.ConfigParser()
+ old.readfp(open(old_file))
+
+ # open the new version
+ new = iniparse.ConfigParser()
+ try:
+ new.readfp(open(new_file))
+ except IOError as err:
+ if err.errno == errno.ENOENT:
+            # The upgrade did not deliver a .new file, so just return
+ print "%s not found - continuing with %s" % (new_file, old_file)
+ return
+ else:
+ raise
+ print "\nupdating %s" % old_file
+
+ # walk every single section for uncommented options
+ default_items = set(old.items('DEFAULT'))
+ for section in old.sections() + ['DEFAULT']:
+
+ # DEFAULT items show up in every section so remove them
+ if section != 'DEFAULT':
+ section_items = set(old.items(section)) - default_items
+ else:
+ section_items = default_items
+
+ for key, value in section_items:
+ # keep a copy of the old value
+ oldvalue = value
+
+ if mapping is not None:
+ section, key = update_mapping(section, key, mapping)
+
+ if section is None and key is None:
+ # option is deprecated so continue
+ continue
+
+ if not new.has_section(section):
+ if section != 'DEFAULT':
+ new.add_section(section)
+
+ # print to the log when a value for the same section.key is
+ # changing to a new value
+ try:
+ new_value = new.get(section, key)
+ if new_value != value and '%SERVICE' not in new_value:
+ print "Changing [%s] %s:\n- %s\n+ %s" % \
+ (section, key, oldvalue, new_value)
+ print
+ except NoOptionError:
+ # the new configuration file does not have this option set so
+ # just continue
+ pass
+
+ # Only copy the old value to the new conf file if the entry doesn't
+ # exist or if it contains '%SERVICE'
+ if not new.has_option(section, key) or \
+ '%SERVICE' in new.get(section, key):
+ new.set(section, key, value)
+
+ # copy the old conf file to a backup
+ today = datetime.now().strftime("%Y%m%d%H%M%S")
+ shutil.copy2(old_file, old_file + '.' + today)
+
+ # copy the new conf file in place
+ with open(old_file, 'wb+') as fh:
+ new.write(fh)
+
+
+def start():
+ # pull out the current version of config/upgrade-id
+ p = Popen(['/usr/bin/svcprop', '-p', 'config/upgrade-id',
+ os.environ['SMF_FMRI']], stdout=PIPE, stderr=PIPE)
+ curr_ver, _err = p.communicate()
+ curr_ver = curr_ver.strip()
+
+ # extract the openstack-upgrade-id from the pkg
+ p = Popen(['/usr/bin/pkg', 'contents', '-H', '-t', 'set', '-o', 'value',
+ '-a', 'name=openstack.upgrade-id',
+ 'pkg:/cloud/openstack/cinder'], stdout=PIPE, stderr=PIPE)
+ pkg_ver, _err = p.communicate()
+ pkg_ver = pkg_ver.strip()
+
+ if curr_ver == pkg_ver:
+ # No need to upgrade
+ sys.exit(smf_include.SMF_EXIT_OK)
+
+ # look for any .new files
+ if glob.glob('/etc/cinder/*.new'):
+ # the versions are different, so perform an upgrade
+ # modify the configuration files
+ modify_conf('/etc/cinder/api-paste.ini')
+ modify_conf('/etc/cinder/cinder.conf', CINDER_CONF_MAPPINGS)
+ modify_conf('/etc/cinder/logging.conf')
+
+ config = iniparse.RawConfigParser()
+ config.read('/etc/cinder/cinder.conf')
+ # In certain cases the database section does not exist and the
+ # default database chosen is sqlite.
+ if config.has_section('database'):
+ db_connection = config.get('database', 'connection')
+
+ if db_connection.startswith('mysql'):
+ engine = sqlalchemy.create_engine(db_connection)
+ if engine.url.username != '%SERVICE_USER%':
+ alter_mysql_tables(engine)
+ print "altered character set to utf8 in cinder tables"
+
+ # update the current version
+ check_call(['/usr/sbin/svccfg', '-s', os.environ['SMF_FMRI'], 'setprop',
+ 'config/upgrade-id', '=', pkg_ver])
+ check_call(['/usr/sbin/svccfg', '-s', os.environ['SMF_FMRI'], 'refresh'])
+
+ sys.exit(smf_include.SMF_EXIT_OK)
+
+
+if __name__ == '__main__':
+ os.putenv('LC_ALL', 'C')
+ try:
+ smf_include.smf_main()
+ except Exception as err:
+ print 'Unknown error: %s' % err
+ print
+ traceback.print_exc(file=sys.stdout)
+ sys.exit(smf_include.SMF_EXIT_ERR_FATAL)
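
The core of the upgrade script above is the CINDER_CONF_MAPPINGS table together
with update_mapping(): each uncommented option read from the old cinder.conf is
looked up in the table and, when deprecated, rewritten to its Juno section/key
before modify_conf() copies it into the packaged .new file.  A minimal
stand-alone sketch of that lookup, reusing two entries from the table (the
surrounding scaffolding is illustrative and not part of the changeset):

    # Two entries copied from CINDER_CONF_MAPPINGS; remap() condenses what
    # update_mapping() does, minus the logging.
    MAPPINGS = {
        ('DEFAULT', 'sql_connection'): ('database', 'connection'),
        ('DEFAULT', 'logdir'): ('DEFAULT', 'log_dir'),
    }

    def remap(section, key):
        # Return the new (section, key) when the old pair is deprecated,
        # otherwise leave it unchanged.
        return MAPPINGS.get((section, key), (section, key))

    print(remap('DEFAULT', 'sql_connection'))  # ('database', 'connection')
    print(remap('DEFAULT', 'glance_host'))     # ('DEFAULT', 'glance_host')
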
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/components/openstack/cinder/files/cinder-upgrade.xml Thu Mar 19 14:41:20 2015 -0700
@@ -0,0 +1,78 @@
+<?xml version="1.0" ?>
+<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
+<!--
+ Copyright (c) 2015, Oracle and/or its affiliates. All rights reserved.
+
+ Licensed under the Apache License, Version 2.0 (the "License"); you may
+ not use this file except in compliance with the License. You may obtain
+ a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ License for the specific language governing permissions and limitations
+ under the License.
+
+ NOTE: This service manifest is not editable; its contents will
+ be overwritten by package or patch operations, including
+ operating system upgrade. Make customizations in a different
+ file.
+-->
+<service_bundle type="manifest" name="cinder">
+
+ <service version="1" type="service"
+ name="application/openstack/cinder/cinder-upgrade">
+
+ <dependency name='multiuser' grouping='require_all' restart_on='error'
+ type='service'>
+ <service_fmri value='svc:/milestone/multi-user:default' />
+ </dependency>
+
+ <logfile_attributes permissions='600'/>
+
+ <exec_method timeout_seconds="300" type="method" name="start"
+ exec="/lib/svc/method/cinder-upgrade %m">
+ <method_context>
+ <method_credential user='cinder' group='cinder' />
+ </method_context>
+ </exec_method>
+ <exec_method timeout_seconds="60" type="method" name="stop"
+ exec=":true"/>
+
+ <property_group type="framework" name="startd">
+ <propval type="astring" name="duration" value="transient"/>
+ </property_group>
+
+ <instance name='default' enabled='true'>
+ <!-- to start/stop/refresh the service -->
+ <property_group name='general' type='framework'>
+ <propval name='action_authorization' type='astring'
+ value='solaris.smf.manage.cinder' />
+ <propval name='value_authorization' type='astring'
+ value='solaris.smf.value.cinder' />
+ </property_group>
+
+ <property_group name="config" type="application">
+ <propval type="astring" name="upgrade-id" value="" />
+ <propval name='value_authorization' type='astring'
+ value='solaris.smf.value.cinder' />
+ </property_group>
+ </instance>
+
+ <template>
+ <common_name>
+ <loctext xml:lang="C">
+ OpenStack Cinder Upgrade Service
+ </loctext>
+ </common_name>
+ <description>
+ <loctext xml:lang="C">
+ cinder-upgrade is a transient service to upgrade the Cinder
+ configuration across major release version changes.
+ </loctext>
+ </description>
+ </template>
+ </service>
+</service_bundle>
--- a/components/openstack/cinder/files/cinder-volume-setup Fri Mar 20 03:13:26 2015 -0700
+++ b/components/openstack/cinder/files/cinder-volume-setup Thu Mar 19 14:41:20 2015 -0700
@@ -1,6 +1,6 @@
#!/usr/bin/python2.6
-# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2014, 2015, Oracle and/or its affiliates. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
--- a/components/openstack/cinder/files/cinder-volume.xml Fri Mar 20 03:13:26 2015 -0700
+++ b/components/openstack/cinder/files/cinder-volume.xml Thu Mar 19 14:41:20 2015 -0700
@@ -1,7 +1,7 @@
<?xml version="1.0" ?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<!--
- Copyright (c) 2013, 2014, Oracle and/or its affiliates. All rights reserved.
+ Copyright (c) 2013, 2015, Oracle and/or its affiliates. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
@@ -30,6 +30,12 @@
<service_fmri value='svc:/milestone/multi-user:default' />
</dependency>
+ <dependency name='upgrade' grouping='require_all' restart_on='none'
+ type='service'>
+ <service_fmri
+ value='svc:/application/openstack/cinder/cinder-upgrade' />
+ </dependency>
+
<dependency name='iscsi_target' grouping='optional_all' restart_on='error'
type='service'>
<service_fmri value='svc:/network/iscsi/target:default' />
@@ -69,6 +75,11 @@
<service_fmri value='svc:/network/ntp'/>
</dependency>
+ <dependency name='rabbitmq' grouping='optional_all' restart_on='none'
+ type='service'>
+ <service_fmri value='svc:/network/amqp/rabbitmq'/>
+ </dependency>
+
<exec_method timeout_seconds="60" type="method" name="start"
exec="/lib/svc/method/cinder-volume %m">
<method_context>
--- a/components/openstack/cinder/files/cinder.conf Fri Mar 20 03:13:26 2015 -0700
+++ b/components/openstack/cinder/files/cinder.conf Thu Mar 19 14:41:20 2015 -0700
@@ -1,60 +1,242 @@
-####################
-# cinder.conf sample #
-####################
-
[DEFAULT]
#
+# Options defined in oslo.messaging
+#
+
+# Use durable queues in AMQP. (boolean value)
+# Deprecated group/name - [DEFAULT]/rabbit_durable_queues
+#amqp_durable_queues=false
+
+# Auto-delete queues in AMQP. (boolean value)
+#amqp_auto_delete=false
+
+# Size of RPC connection pool. (integer value)
+#rpc_conn_pool_size=30
+
+# Qpid broker hostname. (string value)
+#qpid_hostname=localhost
+
+# Qpid broker port. (integer value)
+#qpid_port=5672
+
+# Qpid HA cluster host:port pairs. (list value)
+#qpid_hosts=$qpid_hostname:$qpid_port
+
+# Username for Qpid connection. (string value)
+#qpid_username=
+
+# Password for Qpid connection. (string value)
+#qpid_password=
+
+# Space separated list of SASL mechanisms to use for auth.
+# (string value)
+#qpid_sasl_mechanisms=
+
+# Seconds between connection keepalive heartbeats. (integer
+# value)
+#qpid_heartbeat=60
+
+# Transport to use, either 'tcp' or 'ssl'. (string value)
+#qpid_protocol=tcp
+
+# Whether to disable the Nagle algorithm. (boolean value)
+#qpid_tcp_nodelay=true
+
+# The number of prefetched messages held by receiver. (integer
+# value)
+#qpid_receiver_capacity=1
+
+# The qpid topology version to use. Version 1 is what was
+# originally used by impl_qpid. Version 2 includes some
+# backwards-incompatible changes that allow broker federation
+# to work. Users should update to version 2 when they are
+# able to take everything down, as it requires a clean break.
+# (integer value)
+#qpid_topology_version=1
+
+# SSL version to use (valid only if SSL enabled). valid values
+# are TLSv1 and SSLv23. SSLv2 and SSLv3 may be available on
+# some distributions. (string value)
+#kombu_ssl_version=
+
+# SSL key file (valid only if SSL enabled). (string value)
+#kombu_ssl_keyfile=
+
+# SSL cert file (valid only if SSL enabled). (string value)
+#kombu_ssl_certfile=
+
+# SSL certification authority file (valid only if SSL
+# enabled). (string value)
+#kombu_ssl_ca_certs=
+
+# How long to wait before reconnecting in response to an AMQP
+# consumer cancel notification. (floating point value)
+#kombu_reconnect_delay=1.0
+
+# The RabbitMQ broker address where a single node is used.
+# (string value)
+#rabbit_host=localhost
+
+# The RabbitMQ broker port where a single node is used.
+# (integer value)
+#rabbit_port=5672
+
+# RabbitMQ HA cluster host:port pairs. (list value)
+#rabbit_hosts=$rabbit_host:$rabbit_port
+
+# Connect over SSL for RabbitMQ. (boolean value)
+#rabbit_use_ssl=false
+
+# The RabbitMQ userid. (string value)
+#rabbit_userid=guest
+
+# The RabbitMQ password. (string value)
+#rabbit_password=guest
+
+# The RabbitMQ login method. (string value)
+#rabbit_login_method=AMQPLAIN
+
+# The RabbitMQ virtual host. (string value)
+#rabbit_virtual_host=/
+
+# How frequently to retry connecting with RabbitMQ. (integer
+# value)
+#rabbit_retry_interval=1
+
+# How long to backoff for between retries when connecting to
+# RabbitMQ. (integer value)
+#rabbit_retry_backoff=2
+
+# Maximum number of RabbitMQ connection retries. Default is 0
+# (infinite retry count). (integer value)
+#rabbit_max_retries=0
+
+# Use HA queues in RabbitMQ (x-ha-policy: all). If you change
+# this option, you must wipe the RabbitMQ database. (boolean
+# value)
+#rabbit_ha_queues=false
+
+# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake
+# (boolean value)
+#fake_rabbit=false
+
+# ZeroMQ bind address. Should be a wildcard (*), an ethernet
+# interface, or IP. The "host" option should point or resolve
+# to this address. (string value)
+#rpc_zmq_bind_address=*
+
+# MatchMaker driver. (string value)
+#rpc_zmq_matchmaker=oslo.messaging._drivers.matchmaker.MatchMakerLocalhost
+
+# ZeroMQ receiver listening port. (integer value)
+#rpc_zmq_port=9501
+
+# Number of ZeroMQ contexts, defaults to 1. (integer value)
+#rpc_zmq_contexts=1
+
+# Maximum number of ingress messages to locally buffer per
+# topic. Default is unlimited. (integer value)
+#rpc_zmq_topic_backlog=<None>
+
+# Directory for holding IPC sockets. (string value)
+#rpc_zmq_ipc_dir=/var/run/openstack
+
+# Name of this node. Must be a valid hostname, FQDN, or IP
+# address. Must match "host" option, if running Nova. (string
+# value)
+#rpc_zmq_host=cinder
+
+# Seconds to wait before a cast expires (TTL). Only supported
+# by impl_zmq. (integer value)
+#rpc_cast_timeout=30
+
+# Heartbeat frequency. (integer value)
+#matchmaker_heartbeat_freq=300
+
+# Heartbeat time-to-live. (integer value)
+#matchmaker_heartbeat_ttl=600
+
+# Size of RPC greenthread pool. (integer value)
+#rpc_thread_pool_size=64
+
+# Driver or drivers to handle sending notifications. (multi
+# valued)
+#notification_driver=
+
+# AMQP topic used for OpenStack notifications. (list value)
+# Deprecated group/name - [rpc_notifier2]/topics
+#notification_topics=notifications
+
+# Seconds to wait for a response from a call. (integer value)
+#rpc_response_timeout=60
+
+# A URL representing the messaging driver to use and its full
+# configuration. If not set, we fall back to the rpc_backend
+# option and driver specific configuration. (string value)
+#transport_url=<None>
+
+# The messaging driver to use, defaults to rabbit. Other
+# drivers include qpid and zmq. (string value)
+#rpc_backend=rabbit
+
+# The default exchange under which topics are scoped. May be
+# overridden by an exchange name specified in the
+# transport_url option. (string value)
+#control_exchange=openstack
+
+
+#
# Options defined in cinder.exception
#
-# make exception message format errors fatal (boolean value)
+# Make exception message format errors fatal. (boolean value)
#fatal_exception_format_errors=false
#
-# Options defined in cinder.policy
-#
-
-# JSON file representing policy (string value)
-#policy_file=policy.json
-
-# Rule checked when requested rule is not found (string value)
-#policy_default_rule=default
-
-
-#
# Options defined in cinder.quota
#
-# number of volumes allowed per project (integer value)
+# Number of volumes allowed per project (integer value)
#quota_volumes=10
-# number of volume snapshots allowed per project (integer
+# Number of volume snapshots allowed per project (integer
# value)
#quota_snapshots=10
-# number of volume gigabytes (snapshots are also included)
-# allowed per project (integer value)
+# Number of consistencygroups allowed per project (integer
+# value)
+#quota_consistencygroups=10
+
+# Total amount of storage, in gigabytes, allowed for volumes
+# and snapshots per project (integer value)
#quota_gigabytes=1000
-# number of seconds until a reservation expires (integer
+# Number of volume backups allowed per project (integer value)
+#quota_backups=10
+
+# Total amount of storage, in gigabytes, allowed for backups
+# per project (integer value)
+#quota_backup_gigabytes=1000
+
+# Number of seconds until a reservation expires (integer
# value)
#reservation_expire=86400
-# count of reservations until usage is refreshed (integer
+# Count of reservations until usage is refreshed (integer
# value)
#until_refresh=0
-# number of seconds between subsequent usage refreshes
+# Number of seconds between subsequent usage refreshes
# (integer value)
#max_age=0
-# default driver to use for quota checks (string value)
+# Default driver to use for quota checks (string value)
#quota_driver=cinder.quota.DbQuotaDriver
-# whether to use default quota class for default quota
-# (boolean value)
+# Enables or disables use of default quota class with default
+# quota. (boolean value)
#use_default_quota_class=true
@@ -62,24 +244,49 @@
# Options defined in cinder.service
#
-# seconds between nodes reporting state to datastore (integer
-# value)
+# Interval, in seconds, between nodes reporting state to
+# datastore (integer value)
#report_interval=10
-# seconds between running periodic tasks (integer value)
+# Interval, in seconds, between running periodic tasks
+# (integer value)
#periodic_interval=60
-# range of seconds to randomly delay when starting the
+# Range, in seconds, to randomly delay when starting the
# periodic task scheduler to reduce stampeding. (Disable by
# setting to 0) (integer value)
#periodic_fuzzy_delay=60
-# IP address for OpenStack Volume API to listen (string value)
+# IP address on which OpenStack Volume API listens (string
+# value)
#osapi_volume_listen=0.0.0.0
-# port for os volume api to listen (integer value)
+# Port on which OpenStack Volume API listens (integer value)
#osapi_volume_listen_port=8776
+# Number of workers for OpenStack Volume API service. The
+# default is equal to the number of CPUs available. (integer
+# value)
+osapi_volume_workers=1
+
+
+#
+# Options defined in cinder.ssh_utils
+#
+
+# Option to enable strict host key checking. When set to
+# "True" Cinder will only connect to systems with a host key
+# present in the configured "ssh_hosts_key_file". When set to
+# "False" the host key will be saved upon first connection and
+# used for subsequent connections. Default=False (boolean
+# value)
+#strict_ssh_host_key_policy=false
+
+# File containing SSH host keys for the systems with which
+# Cinder needs to communicate. OPTIONAL:
+# Default=$state_path/ssh_known_hosts (string value)
+#ssh_hosts_key_file=$state_path/ssh_known_hosts
+
#
# Options defined in cinder.test
@@ -88,22 +295,44 @@
# File name of clean sqlite db (string value)
#sqlite_clean_db=clean.sqlite
-# should we use everything for testing (boolean value)
-#fake_tests=true
-
#
# Options defined in cinder.wsgi
#
-# Number of backlog requests to configure the socket with
-# (integer value)
-#backlog=4096
+# Maximum line size of message headers to be accepted.
+# max_header_line may need to be increased when using large
+# tokens (typically those generated by the Keystone v3 API
+# with big service catalogs). (integer value)
+#max_header_line=16384
+
+# If False, closes the client socket connection explicitly.
+# Setting it to True to maintain backward compatibility.
+# Recommended setting is set it to False. (boolean value)
+#wsgi_keep_alive=true
+
+# Timeout for client connections' socket operations. If an
+# incoming connection is idle for this number of seconds it
+# will be closed. A value of '0' means wait forever. (integer
+# value)
+#client_socket_timeout=0
+
+# Sets the value of TCP_KEEPALIVE (True/False) for each server
+# socket. (boolean value)
+#tcp_keepalive=true
# Sets the value of TCP_KEEPIDLE in seconds for each server
# socket. Not supported on OS X. (integer value)
#tcp_keepidle=600
+# Sets the value of TCP_KEEPINTVL in seconds for each server
+# socket. Not supported on OS X. (integer value)
+#tcp_keepalive_interval=<None>
+
+# Sets the value of TCP_KEEPCNT for each server socket. Not
+# supported on OS X. (integer value)
+#tcp_keepalive_count=<None>
+
# CA certificate file to use to verify connecting clients
# (string value)
#ssl_ca_file=<None>
@@ -121,12 +350,13 @@
# Options defined in cinder.api.common
#
-# the maximum number of items returned in a single response
-# from a collection resource (integer value)
+# The maximum number of items that a collection resource
+# returns in a single response (integer value)
#osapi_max_limit=1000
# Base URL that will be presented to users in links to the
# OpenStack Volume API (string value)
+# Deprecated group/name - [DEFAULT]/osapi_compute_link_prefix
#osapi_volume_base_URL=<None>
@@ -148,32 +378,45 @@
#
+# Options defined in cinder.backup.driver
+#
+
+# Backup metadata version to be used when backing up volume
+# metadata. If this number is bumped, make sure the service
+# doing the restore supports the new version. (integer value)
+#backup_metadata_version=1
+
+
+#
# Options defined in cinder.backup.drivers.ceph
#
-# Ceph config file to use. (string value)
+# Ceph configuration file to use. (string value)
#backup_ceph_conf=/etc/ceph/ceph.conf
-# the Ceph user to connect with (string value)
+# The Ceph user to connect with. Default here is to use the
+# same user as for Cinder volumes. If not using cephx this
+# should be set to None. (string value)
#backup_ceph_user=cinder
-# the chunk size in bytes that a backup will be broken into
-# before transfer to backup store (integer value)
+# The chunk size, in bytes, that a backup is broken into
+# before transfer to the Ceph object store. (integer value)
#backup_ceph_chunk_size=134217728
-# the Ceph pool to backup to (string value)
+# The Ceph pool where volume backups are stored. (string
+# value)
#backup_ceph_pool=backups
-# RBD stripe unit to use when creating a backup image (integer
-# value)
+# RBD stripe unit to use when creating a backup image.
+# (integer value)
#backup_ceph_stripe_unit=0
-# RBD stripe count to use when creating a backup image
+# RBD stripe count to use when creating a backup image.
# (integer value)
#backup_ceph_stripe_count=0
-# If True, always discard excess bytes when restoring volumes.
-# (boolean value)
+# If True, always discard excess bytes when restoring volumes
+# i.e. pad with zeroes. (boolean value)
#restore_discard_excess_bytes=true
@@ -182,11 +425,25 @@
#
# The URL of the Swift endpoint (string value)
-#backup_swift_url=http://localhost:8080/v1/AUTH_
+#backup_swift_url=<None>
+
+# Info to match when looking for swift in the service catalog.
+# Format is: separated values of the form:
+# <service_type>:<service_name>:<endpoint_type> - Only used if
+# backup_swift_url is unset (string value)
+#swift_catalog_info=object-store:swift:publicURL
# Swift authentication mechanism (string value)
#backup_swift_auth=per_user
+# Swift authentication version. Specify "1" for auth 1.0, or
+# "2" for auth 2.0 (string value)
+#backup_swift_auth_version=1
+
+# Swift tenant/account name. Required when connecting to an
+# auth 2.0 system (string value)
+#backup_swift_tenant=<None>
+
# Swift user name (string value)
#backup_swift_user=<None>
@@ -231,6 +488,7 @@
#
# Driver to use for backups. (string value)
+# Deprecated group/name - [DEFAULT]/backup_service
#backup_driver=cinder.backup.drivers.swift
@@ -238,39 +496,29 @@
# Options defined in cinder.common.config
#
-# Virtualization api connection type : libvirt, xenapi, or
-# fake (string value)
-#connection_type=<None>
-
# File name for the paste.deploy config for cinder-api (string
# value)
#api_paste_config=api-paste.ini
-# Directory where the cinder python module is installed
-# (string value)
-#pybasedir=/usr/lib/python2.6/vendor-packages
-
-# Directory where cinder binaries are installed (string value)
-bindir=/usr/bin
-
# Top-level directory for maintaining cinder's state (string
# value)
-state_path=/var/lib/cinder
-
-# ip address of this host (string value)
+# Deprecated group/name - [DEFAULT]/pybasedir
+#state_path=/var/lib/cinder
+
+# IP address of this host (string value)
#my_ip=10.0.0.1
-# default glance hostname or ip (string value)
+# Default glance host name or IP (string value)
#glance_host=$my_ip
-# default glance port (integer value)
+# Default glance port (integer value)
#glance_port=9292
-# A list of the glance api servers available to cinder
+# A list of the glance API servers available to cinder
# ([hostname|ip]:port) (list value)
#glance_api_servers=$glance_host:$glance_port
-# Version of the glance api to use (integer value)
+# Version of the glance API to use (integer value)
#glance_api_version=1
# Number retries when downloading an image from glance
@@ -281,35 +529,38 @@
# (boolean value)
#glance_api_insecure=false
-# Whether to attempt to negotiate SSL layer compression when
-# using SSL (https) requests. Set to False to disable SSL
-# layer compression. In some cases disabling this may improve
-# data throughput, eg when high network bandwidth is available
-# and you are using already compressed image formats such as
-# qcow2 . (boolean value)
+# Enables or disables negotiation of SSL layer compression. In
+# some cases disabling compression can improve data
+# throughput, such as when high network bandwidth is available
+# and you use compressed image formats like qcow2. (boolean
+# value)
#glance_api_ssl_compression=false
+# Location of ca certificates file to use for glance client
+# requests. (string value)
+#glance_ca_certificates_file=<None>
+
# http/https timeout value for glance operations. If no value
# (None) is supplied here, the glanceclient default value is
# used. (integer value)
#glance_request_timeout=<None>
-# the topic scheduler nodes listen on (string value)
+# The topic that scheduler nodes listen on (string value)
#scheduler_topic=cinder-scheduler
-# the topic volume nodes listen on (string value)
+# The topic that volume nodes listen on (string value)
#volume_topic=cinder-volume
-# the topic volume backup nodes listen on (string value)
+# The topic that volume backup nodes listen on (string value)
#backup_topic=cinder-backup
-# Deploy v1 of the Cinder API. (boolean value)
+# DEPRECATED: Deploy v1 of the Cinder API. (boolean value)
#enable_v1_api=true
-# Deploy v2 of the Cinder API. (boolean value)
+# Deploy v2 of the Cinder API. (boolean value)
#enable_v2_api=true
-# whether to rate limit the api (boolean value)
+# Enables or disables rate limit of the API. (boolean value)
#api_rate_limit=true
# Specify list of extensions to load when using
@@ -320,44 +571,36 @@
# osapi volume extension to load (multi valued)
#osapi_volume_extension=cinder.api.contrib.standard_extensions
-# full class name for the Manager for volume (string value)
+# Full class name for the Manager for volume (string value)
#volume_manager=cinder.volume.manager.VolumeManager
-# full class name for the Manager for volume backup (string
+# Full class name for the Manager for volume backup (string
# value)
#backup_manager=cinder.backup.manager.BackupManager
-# full class name for the Manager for scheduler (string value)
+# Full class name for the Manager for scheduler (string value)
#scheduler_manager=cinder.scheduler.manager.SchedulerManager
-# Name of this node. This can be an opaque identifier. It is
-# not necessarily a hostname, FQDN, or IP address. (string
+# Name of this node. This can be an opaque identifier. It is
+# not necessarily a host name, FQDN, or IP address. (string
# value)
#host=cinder
-# availability zone of this node (string value)
+# Availability zone of this node (string value)
#storage_availability_zone=nova
-# default availability zone to use when creating a new volume.
-# If this is not set then we use the value from the
-# storage_availability_zone option as the default
-# availability_zone for new volumes. (string value)
+# Default availability zone for new volumes. If not set, the
+# storage_availability_zone option value is used as the
+# default for new volumes. (string value)
#default_availability_zone=<None>
-# Memcached servers or None for in process cache. (list value)
-#memcached_servers=<None>
-
-# default volume type to use (string value)
+# Default volume type to use (string value)
#default_volume_type=<None>
-# time period to generate volume usages for. Time period must
-# be hour, day, month or year (string value)
+# Time period for which to generate volume usages. The options
+# are hour, day, month, or year. (string value)
#volume_usage_audit_period=month
-# Deprecated: command to use for running commands as root
-# (string value)
-#root_helper=sudo
-
# Path to the rootwrap configuration file to use for running
# commands as root (string value)
#rootwrap_config=/etc/cinder/rootwrap.conf
@@ -368,8 +611,8 @@
# List of modules/decorators to monkey patch (list value)
#monkey_patch_modules=
-# maximum time since last check-in for up service (integer
-# value)
+# Maximum time since last check-in for a service to be
+# considered up (integer value)
#service_down_time=60
# The full class name of the volume API class to use (string
@@ -397,6 +640,14 @@
# value)
#transfer_api_class=cinder.transfer.api.API
+# The full class name of the volume replication API class
+# (string value)
+#replication_api_class=cinder.replication.api.API
+
+# The full class name of the consistencygroup API class
+# (string value)
+#consistencygroup_api_class=cinder.consistencygroup.api.API
+
#
# Options defined in cinder.compute
@@ -411,8 +662,8 @@
# Options defined in cinder.compute.nova
#
-# Info to match when looking for nova in the service catalog.
-# Format is : separated values of the form:
+# Match this value when searching for nova in the service
+# catalog. Format is: separated values of the form:
# <service_type>:<service_name>:<endpoint_type> (string value)
#nova_catalog_info=compute:nova:publicURL
@@ -421,18 +672,18 @@
#nova_catalog_admin_info=compute:nova:adminURL
# Override service catalog lookup with template for nova
-# endpoint e.g. http://localhost:8774/v2/%(tenant_id)s (string
-# value)
+# endpoint e.g. http://localhost:8774/v2/%(project_id)s
+# (string value)
#nova_endpoint_template=<None>
# Same as nova_endpoint_template, but for admin endpoint.
# (string value)
#nova_endpoint_admin_template=<None>
-# region name of this node (string value)
+# Region name of this node (string value)
#os_region_name=<None>
-# Location of ca certicates file to use for nova client
+# Location of ca certificates file to use for nova client
# requests. (string value)
#nova_ca_certificates_file=<None>
@@ -469,7 +720,7 @@
# Options defined in cinder.db.base
#
-# driver to use for database access (string value)
+# Driver to use for database access (string value)
#db_driver=cinder.db
@@ -477,6 +728,9 @@
# Options defined in cinder.image.glance
#
+# Default core properties of image (list value)
+#glance_core_properties=checksum,container_format,disk_format,image_name,image_id,min_disk,min_ram,name,size
+
# A list of url schemes that can be downloaded directly via
# the direct_url. Currently supported schemes: [file]. (list
# value)
@@ -493,21 +747,17 @@
#
-# Options defined in cinder.openstack.common.db.sqlalchemy.session
-#
-
-# the filename to use with sqlite (string value)
-#sqlite_db=cinder.sqlite
-
-# If true, use synchronous mode for sqlite (boolean value)
-#sqlite_synchronous=true
-
-
-#
# Options defined in cinder.openstack.common.eventlet_backdoor
#
-# port for eventlet backdoor to listen (integer value)
+# Enable eventlet backdoor. Acceptable values are 0, <port>,
+# and <start>:<end>, where 0 results in listening on a random
+# tcp port number; <port> results in listening on the
+# specified port number (and not enabling backdoor if that
+# port is in use); and <start>:<end> results in listening on
+# the smallest unused port number within the specified range
+# of port numbers. The chosen port is displayed in the
+# service's log file. (string value)
#backdoor_port=<None>
@@ -535,108 +785,89 @@
# of default WARNING level). (boolean value)
#verbose=false
-# Log output to standard error (boolean value)
+# Log output to standard error. (boolean value)
#use_stderr=true
-# format string to use for log messages with context (string
+# Format string to use for log messages with context. (string
# value)
-#logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user)s %(tenant)s] %(instance)s%(message)s
-
-# format string to use for log messages without context
+#logging_context_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
+
+# Format string to use for log messages without context.
# (string value)
#logging_default_format_string=%(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
-# data to append to log format when level is DEBUG (string
+# Data to append to log format when level is DEBUG. (string
# value)
#logging_debug_format_suffix=%(funcName)s %(pathname)s:%(lineno)d
-# prefix each line of exception output with this format
+# Prefix each line of exception output with this format.
# (string value)
#logging_exception_prefix=%(asctime)s.%(msecs)03d %(process)d TRACE %(name)s %(instance)s
-# list of logger=LEVEL pairs (list value)
-#default_log_levels=amqplib=WARN,sqlalchemy=WARN,boto=WARN,suds=INFO,keystone=INFO,eventlet.wsgi.server=WARN
-
-# publish error events (boolean value)
+# List of logger=LEVEL pairs. (list value)
+#default_log_levels=amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN
+
+# Enables or disables publication of error events. (boolean
+# value)
#publish_errors=false
-# make deprecations fatal (boolean value)
+# Enables or disables fatal status of deprecations. (boolean
+# value)
#fatal_deprecations=false
-# If an instance is passed with the log message, format it
-# like this (string value)
+# The format for an instance that is passed with the log
+# message. (string value)
#instance_format="[instance: %(uuid)s] "
-# If an instance UUID is passed with the log message, format
-# it like this (string value)
+# The format for an instance UUID that is passed with the log
+# message. (string value)
#instance_uuid_format="[instance: %(uuid)s] "
-# If this option is specified, the logging configuration file
-# specified is used and overrides any other logging options
-# specified. Please see the Python logging module
-# documentation for details on logging configuration files.
-# (string value)
-#log_config=<None>
-
-# A logging.Formatter log message format string which may use
-# any of the available logging.LogRecord attributes. This
-# option is deprecated. Please use
+# The name of a logging configuration file. This file is
+# appended to any existing logging configuration files. For
+# details about logging configuration files, see the Python
+# logging module documentation. (string value)
+# Deprecated group/name - [DEFAULT]/log_config
+#log_config_append=<None>
+
+# DEPRECATED. A logging.Formatter log message format string
+# which may use any of the available logging.LogRecord
+# attributes. This option is deprecated. Please use
# logging_context_format_string and
# logging_default_format_string instead. (string value)
#log_format=<None>
# Format string for %%(asctime)s in log records. Default:
-# %(default)s (string value)
+# %(default)s . (string value)
#log_date_format=%Y-%m-%d %H:%M:%S
# (Optional) Name of log file to output to. If no default is
# set, logging will go to stdout. (string value)
+# Deprecated group/name - [DEFAULT]/logfile
#log_file=<None>
# (Optional) The base directory used for relative --log-file
-# paths (string value)
+# paths. (string value)
+# Deprecated group/name - [DEFAULT]/logdir
#log_dir=<None>
-# Use syslog for logging. (boolean value)
+# Use syslog for logging. Existing syslog format is DEPRECATED
+# during I, and will change in J to honor RFC5424. (boolean
+# value)
#use_syslog=false
-# syslog facility to receive log lines (string value)
+# (Optional) Enables or disables syslog rfc5424 format for
+# logging. If enabled, prefixes the MSG part of the syslog
+# message with APP-NAME (RFC5424). The format without the APP-
+# NAME is deprecated in I, and will be removed in J. (boolean
+# value)
+#use_syslog_rfc_format=false
+
+# Syslog facility to receive log lines. (string value)
#syslog_log_facility=LOG_USER
#
-# Options defined in cinder.openstack.common.notifier.api
-#
-
-# Driver or drivers to handle sending notifications (multi
-# valued)
-
-# Default notification level for outgoing notifications
-# (string value)
-#default_notification_level=INFO
-
-# Default publisher_id for outgoing notifications (string
-# value)
-#default_publisher_id=<None>
-
-
-#
-# Options defined in cinder.openstack.common.notifier.rpc_notifier
-#
-
-# AMQP topic used for OpenStack notifications (list value)
-#notification_topics=notifications
-
-
-#
-# Options defined in cinder.openstack.common.notifier.rpc_notifier2
-#
-
-# AMQP topic(s) used for OpenStack notifications (list value)
-#topics=notifications
-
-
-#
# Options defined in cinder.openstack.common.periodic_task
#
@@ -646,199 +877,15 @@
#
-# Options defined in cinder.openstack.common.rpc
-#
-
-# The messaging module to use, defaults to kombu. (string
-# value)
-#rpc_backend=cinder.openstack.common.rpc.impl_kombu
-
-# Size of RPC thread pool (integer value)
-#rpc_thread_pool_size=64
-
-# Size of RPC connection pool (integer value)
-#rpc_conn_pool_size=30
-
-# Seconds to wait for a response from call or multicall
-# (integer value)
-#rpc_response_timeout=60
-
-# Seconds to wait before a cast expires (TTL). Only supported
-# by impl_zmq. (integer value)
-#rpc_cast_timeout=30
-
-# Modules of exceptions that are permitted to be recreatedupon
-# receiving exception data from an rpc call. (list value)
-#allowed_rpc_exception_modules=nova.exception,cinder.exception,exceptions
-
-# If passed, use a fake RabbitMQ provider (boolean value)
-#fake_rabbit=false
-
-# AMQP exchange to connect to if using RabbitMQ or Qpid
-# (string value)
-#control_exchange=openstack
-
-
+# Options defined in cinder.openstack.common.policy
#
-# Options defined in cinder.openstack.common.rpc.amqp
-#
-
-# Enable a fast single reply queue if using AMQP based RPC
-# like RabbitMQ or Qpid. (boolean value)
-#amqp_rpc_single_reply_queue=false
-
-# Use durable queues in amqp. (boolean value)
-#amqp_durable_queues=false
-
-# Auto-delete queues in amqp. (boolean value)
-#amqp_auto_delete=false
-
-
-#
-# Options defined in cinder.openstack.common.rpc.impl_kombu
-#
-
-# SSL version to use (valid only if SSL enabled) (string
-# value)
-#kombu_ssl_version=
-
-# SSL key file (valid only if SSL enabled) (string value)
-#kombu_ssl_keyfile=
-
-# SSL cert file (valid only if SSL enabled) (string value)
-#kombu_ssl_certfile=
-
-# SSL certification authority file (valid only if SSL enabled)
-# (string value)
-#kombu_ssl_ca_certs=
-
-# The RabbitMQ broker address where a single node is used
+
+# The JSON file that defines policies. (string value)
+#policy_file=policy.json
+
+# Default rule. Enforced when a requested rule is not found.
# (string value)
-#rabbit_host=localhost
-
-# The RabbitMQ broker port where a single node is used
-# (integer value)
-#rabbit_port=5672
-
-# RabbitMQ HA cluster host:port pairs (list value)
-#rabbit_hosts=$rabbit_host:$rabbit_port
-
-# connect over SSL for RabbitMQ (boolean value)
-#rabbit_use_ssl=false
-
-# the RabbitMQ userid (string value)
-#rabbit_userid=guest
-
-# the RabbitMQ password (string value)
-#rabbit_password=guest
-
-# the RabbitMQ virtual host (string value)
-#rabbit_virtual_host=/
-
-# how frequently to retry connecting with RabbitMQ (integer
-# value)
-#rabbit_retry_interval=1
-
-# how long to backoff for between retries when connecting to
-# RabbitMQ (integer value)
-#rabbit_retry_backoff=2
-
-# maximum retries with trying to connect to RabbitMQ (the
-# default of 0 implies an infinite retry count) (integer
-# value)
-#rabbit_max_retries=0
-
-# use H/A queues in RabbitMQ (x-ha-policy: all).You need to
-# wipe RabbitMQ database when changing this option. (boolean
-# value)
-#rabbit_ha_queues=false
-
-
-#
-# Options defined in cinder.openstack.common.rpc.impl_qpid
-#
-
-# Qpid broker hostname (string value)
-#qpid_hostname=localhost
-
-# Qpid broker port (integer value)
-#qpid_port=5672
-
-# Qpid HA cluster host:port pairs (list value)
-#qpid_hosts=$qpid_hostname:$qpid_port
-
-# Username for qpid connection (string value)
-#qpid_username=
-
-# Password for qpid connection (string value)
-#qpid_password=
-
-# Space separated list of SASL mechanisms to use for auth
-# (string value)
-#qpid_sasl_mechanisms=
-
-# Seconds between connection keepalive heartbeats (integer
-# value)
-#qpid_heartbeat=60
-
-# Transport to use, either 'tcp' or 'ssl' (string value)
-#qpid_protocol=tcp
-
-# Disable Nagle algorithm (boolean value)
-#qpid_tcp_nodelay=true
-
-# The qpid topology version to use. Version 1 is what was
-# originally used by impl_qpid. Version 2 includes some
-# backwards-incompatible changes that allow broker federation
-# to work. Users should update to version 2 when they are
-# able to take everything down, as it requires a clean break.
-# (integer value)
-#qpid_topology_version=1
-
-
-#
-# Options defined in cinder.openstack.common.rpc.impl_zmq
-#
-
-# ZeroMQ bind address. Should be a wildcard (*), an ethernet
-# interface, or IP. The "host" option should point or resolve
-# to this address. (string value)
-#rpc_zmq_bind_address=*
-
-# MatchMaker driver (string value)
-#rpc_zmq_matchmaker=cinder.openstack.common.rpc.matchmaker.MatchMakerLocalhost
-
-# ZeroMQ receiver listening port (integer value)
-#rpc_zmq_port=9501
-
-# Number of ZeroMQ contexts, defaults to 1 (integer value)
-#rpc_zmq_contexts=1
-
-# Maximum number of ingress messages to locally buffer per
-# topic. Default is unlimited. (integer value)
-#rpc_zmq_topic_backlog=<None>
-
-# Directory for holding IPC sockets (string value)
-#rpc_zmq_ipc_dir=/var/run/openstack
-
-# Name of this node. Must be a valid hostname, FQDN, or IP
-# address. Must match "host" option, if running Nova. (string
-# value)
-#rpc_zmq_host=cinder
-
-
-#
-# Options defined in cinder.openstack.common.rpc.matchmaker
-#
-
-# Matchmaker ring file (JSON) (string value)
-#matchmaker_ringfile=/etc/nova/matchmaker_ring.json
-
-# Heartbeat frequency (integer value)
-#matchmaker_heartbeat_freq=300
-
-# Heartbeat time-to-live. (integer value)
-#matchmaker_heartbeat_ttl=600
+#policy_default_rule=default
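+
+# For illustration only (hypothetical fragment, not the shipped
+# defaults): the file named by policy_file maps each API action to a
+# rule, e.g.
+#   "volume:create": "rule:admin_or_owner",
+#   "volume:get_all": "rule:admin_or_owner",
+# and the rule named by policy_default_rule is enforced for any action
+# that has no entry of its own.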
#
@@ -887,8 +934,11 @@
# Options defined in cinder.scheduler.simple
#
-# maximum number of volume gigabytes to allow per host
-# (integer value)
+# This config option has been deprecated along with the
+# SimpleScheduler. The new scheduler is able to gather capacity
+# information for each host, so setting the maximum number of
+# volume gigabytes per host is no longer needed. It is safe to
+# remove this option from cinder.conf. (integer value)
#max_gigabytes=10000
@@ -900,6 +950,19 @@
# numbers mean to stack vs spread. (floating point value)
#capacity_weight_multiplier=1.0
+# Multiplier used for weighing volume capacity. Negative
+# numbers mean to stack vs spread. (floating point value)
+#allocated_capacity_weight_multiplier=-1.0
+
+
+#
+# Options defined in cinder.scheduler.weights.volume_number
+#
+
+# Multiplier used for weighing volume number. Negative numbers
+# mean to spread vs stack. (floating point value)
+#volume_number_multiplier=-1.0
+
#
# Options defined in cinder.transfer.api
@@ -917,6 +980,10 @@
# Options defined in cinder.volume.api
#
+# Cache volume availability zones in memory for the provided
+# duration in seconds (integer value)
+#az_cache_duration=3600
+
# Create volume from snapshot at the host where snapshot
# resides (boolean value)
#snapshot_same_host=true
@@ -930,7 +997,30 @@
# Options defined in cinder.volume.driver
#
-# number of times to attempt to run flakey shell commands
+# The maximum number of times to rescan the iSER target to find
+# a volume (integer value)
+#num_iser_scan_tries=3
+
+# The maximum number of iSER target IDs per host (integer
+# value)
+#iser_num_targets=100
+
+# Prefix for iSER volumes (string value)
+#iser_target_prefix=iqn.2010-10.org.iser.openstack:
+
+# The IP address that the iSER daemon is listening on (string
+# value)
+#iser_ip_address=$my_ip
+
+# The port that the iSER daemon is listening on (integer
+# value)
+#iser_port=3260
+
+# The name of the iSER target user-land tool to use (string
+# value)
+#iser_helper=tgtadm
+
+# Number of times to attempt to run flakey shell commands
# (integer value)
#num_shell_tries=3
@@ -938,11 +1028,11 @@
# value)
#reserved_percentage=0
-# The maximum number of iscsi target ids per host (integer
+# The maximum number of iSCSI target IDs per host (integer
# value)
#iscsi_num_targets=100
-# prefix for iscsi volumes (string value)
+# Prefix for iSCSI volumes (string value)
#iscsi_target_prefix=iqn.2010-10.org.openstack:
# The IP address that the iSCSI daemon is listening on (string
@@ -955,30 +1045,9 @@
# The maximum number of times to rescan targets to find volume
# (integer value)
+# Deprecated group/name - [DEFAULT]/num_iscsi_scan_tries
#num_volume_device_scan_tries=3
-# The maximum number of times to rescan iSER targetto find
-# volume (integer value)
-#num_iser_scan_tries=3
-
-# The maximum number of iser target ids per host (integer
-# value)
-#iser_num_targets=100
-
-# prefix for iser volumes (string value)
-#iser_target_prefix=iqn.2010-10.org.iser.openstack:
-
-# The IP address that the iSER daemon is listening on (string
-# value)
-#iser_ip_address=$my_ip
-
-# The port that the iSER daemon is listening on (integer
-# value)
-#iser_port=3260
-
-# iser target user-land tool to use (string value)
-#iser_helper=tgtadm
-
# The backend name for a given driver implementation (string
# value)
#volume_backend_name=<None>
@@ -988,7 +1057,7 @@
# value)
#use_multipath_for_image_xfer=false
-# Method used to wipe old voumes (valid options are: none,
+# Method used to wipe old volumes (valid options are: none,
# zero, shred) (string value)
#volume_clear=zero
@@ -996,7 +1065,14 @@
# (integer value)
#volume_clear_size=0
-# iscsi target user-land tool to use (string value)
+# The flag to pass to ionice to alter the i/o priority of the
+# process used to zero a volume after deletion, for example
+# "-c3" for idle only priority. (string value)
+#volume_clear_ionice=<None>
+
+# iSCSI target user-land tool to use. tgtadm is default, use
+# lioadm for LIO iSCSI support, iseradm for the ISER protocol,
+# or fake for testing. (string value)
#iscsi_helper=tgtadm
# Volume configuration file storage directory (string value)
@@ -1014,6 +1090,32 @@
# will autodetect type of backing device (string value)
#iscsi_iotype=fileio
+# The default block size used when copying/clearing volumes
+# (string value)
+#volume_dd_blocksize=1M
+
+# The blkio cgroup name to be used to limit bandwidth of
+# volume copy (string value)
+#volume_copy_blkio_cgroup_name=cinder-volume-copy
+
+# The upper limit of bandwidth of volume copy. 0 => unlimited
+# (integer value)
+#volume_copy_bps_limit=0
+
+# Sets the behavior of the iSCSI target to either perform
+# write-back(on) or write-through(off). This parameter is
+# valid if iscsi_helper is set to tgtadm or iseradm. (string
+# value)
+#iscsi_write_cache=on
+
+# The path to the client certificate key for verification, if
+# the driver supports it. (string value)
+#driver_client_cert_key=<None>
+
+# The path to the client certificate for verification, if the
+# driver supports it. (string value)
+#driver_client_cert=<None>
+
#
# Options defined in cinder.volume.drivers.block_device
@@ -1046,6 +1148,77 @@
#
+# Options defined in cinder.volume.drivers.datera
+#
+
+# Datera API token. (string value)
+#datera_api_token=<None>
+
+# Datera API port. (string value)
+#datera_api_port=7717
+
+# Datera API version. (string value)
+#datera_api_version=1
+
+# Number of replicas to create of an inode. (string value)
+#datera_num_replicas=3
+
+
+#
+# Options defined in cinder.volume.drivers.emc.emc_vmax_common
+#
+
+# use this file for cinder emc plugin config data (string
+# value)
+#cinder_emc_config_file=/etc/cinder/cinder_emc_config.xml
+
+
+#
+# Options defined in cinder.volume.drivers.emc.emc_vnx_cli
+#
+
+# VNX authentication scope type. (string value)
+#storage_vnx_authentication_type=global
+
+# Directory path that contains the VNX security file. Make
+# sure the security file is generated first. (string value)
+#storage_vnx_security_file_dir=<None>
+
+# Naviseccli Path. (string value)
+#naviseccli_path=
+
+# Storage pool name. (string value)
+#storage_vnx_pool_name=<None>
+
+# VNX secondary SP IP Address. (string value)
+#san_secondary_ip=<None>
+
+# Default timeout for CLI operations in minutes. For example,
+# LUN migration is a typical long running operation, which
+# depends on the LUN size and the load of the array. An upper
+# bound in the specific deployment can be set to avoid
+# unnecessary long wait. By default, it is 365 days long.
+# (integer value)
+#default_timeout=525600
+
+# Default max number of LUNs in a storage group. By default,
+# the value is 255. (integer value)
+#max_luns_per_storage_group=255
+
+# To destroy storage group when the last LUN is removed from
+# it. By default, the value is False. (boolean value)
+#destroy_empty_storage_group=false
+
+# Mapping between hostname and its iSCSI initiator IP
+# addresses. (string value)
+#iscsi_initiators=
+
+# Automatically register initiators. By default, the value is
+# False. (boolean value)
+#initiator_auto_registration=false
+
+
+#
# Options defined in cinder.volume.drivers.eqlx
#
@@ -1059,7 +1232,7 @@
# Maximum retry count for reconnection (integer value)
#eqlx_cli_max_retries=5
-# Use CHAP authentificaion for targets? (boolean value)
+# Use CHAP authentication for targets? (boolean value)
#eqlx_use_chap=false
# Existing CHAP account name (string value)
@@ -1073,6 +1246,31 @@
#
+# Options defined in cinder.volume.drivers.fujitsu_eternus_dx_common
+#
+
+# The configuration file for the Cinder SMI-S driver (string
+# value)
+#cinder_smis_config_file=/etc/cinder/cinder_fujitsu_eternus_dx.xml
+
+
+#
+# Options defined in cinder.volume.drivers.fusionio.ioControl
+#
+
+# Amount of time to wait for the iSCSI target to come online
+# (integer value)
+#fusionio_iocontrol_targetdelay=5
+
+# number of retries for GET operations (integer value)
+#fusionio_iocontrol_retry=3
+
+# verify the array certificate on each transaction (boolean
+# value)
+#fusionio_iocontrol_verify_cert=true
+
+
+#
# Options defined in cinder.volume.drivers.glusterfs
#
@@ -1080,9 +1278,6 @@
# value)
#glusterfs_shares_config=/etc/cinder/glusterfs_shares
-# Use du or df for free space calculation (string value)
-#glusterfs_disk_util=df
-
# Create volumes as sparsed files which take no space. If set
# to False, the volume is created as a regular file. In that
# case volume creation takes a lot of time. (boolean value)
@@ -1098,7 +1293,127 @@
#
-# Options defined in cinder.volume.drivers.gpfs
+# Options defined in cinder.volume.drivers.hds.hds
+#
+
+# The configuration file for the Cinder HDS driver for HUS
+# (string value)
+#hds_cinder_config_file=/opt/hds/hus/cinder_hus_conf.xml
+
+
+#
+# Options defined in cinder.volume.drivers.hds.iscsi
+#
+
+# Configuration file for HDS iSCSI cinder plugin (string
+# value)
+#hds_hnas_iscsi_config_file=/opt/hds/hnas/cinder_iscsi_conf.xml
+
+
+#
+# Options defined in cinder.volume.drivers.hds.nfs
+#
+
+# Configuration file for HDS NFS cinder plugin (string value)
+#hds_hnas_nfs_config_file=/opt/hds/hnas/cinder_nfs_conf.xml
+
+
+#
+# Options defined in cinder.volume.drivers.hitachi.hbsd_common
+#
+
+# Serial number of storage system (string value)
+#hitachi_serial_number=<None>
+
+# Name of an array unit (string value)
+#hitachi_unit_name=<None>
+
+# Pool ID of storage system (integer value)
+#hitachi_pool_id=<None>
+
+# Thin pool ID of storage system (integer value)
+#hitachi_thin_pool_id=<None>
+
+# Range of logical device of storage system (string value)
+#hitachi_ldev_range=<None>
+
+# Default copy method of storage system (string value)
+#hitachi_default_copy_method=FULL
+
+# Copy speed of storage system (integer value)
+#hitachi_copy_speed=3
+
+# Interval to check copy (integer value)
+#hitachi_copy_check_interval=3
+
+# Interval to check copy asynchronously (integer value)
+#hitachi_async_copy_check_interval=10
+
+# Control port names for HostGroup or iSCSI Target (string
+# value)
+#hitachi_target_ports=<None>
+
+# Range of group number (string value)
+#hitachi_group_range=<None>
+
+# Request for creating HostGroup or iSCSI Target (boolean
+# value)
+#hitachi_group_request=false
+
+
+#
+# Options defined in cinder.volume.drivers.hitachi.hbsd_fc
+#
+
+# Request for FC Zone creating HostGroup (boolean value)
+#hitachi_zoning_request=false
+
+
+#
+# Options defined in cinder.volume.drivers.hitachi.hbsd_horcm
+#
+
+# Instance numbers for HORCM (string value)
+#hitachi_horcm_numbers=200,201
+
+# Username of storage system for HORCM (string value)
+#hitachi_horcm_user=<None>
+
+# Password of storage system for HORCM (string value)
+#hitachi_horcm_password=<None>
+
+# Add to HORCM configuration (boolean value)
+#hitachi_horcm_add_conf=true
+
+
+#
+# Options defined in cinder.volume.drivers.hitachi.hbsd_iscsi
+#
+
+# Add CHAP user (boolean value)
+#hitachi_add_chap_user=false
+
+# iSCSI authentication method (string value)
+#hitachi_auth_method=<None>
+
+# iSCSI authentication username (string value)
+#hitachi_auth_user=HBSD-CHAP-user
+
+# iSCSI authentication password (string value)
+#hitachi_auth_password=HBSD-CHAP-password
+
+
+#
+# Options defined in cinder.volume.drivers.huawei
+#
+
+# The configuration file for the Cinder Huawei driver (string
+# value)
+#cinder_huawei_conf_file=/etc/cinder/cinder_huawei_conf.xml
+
+
+#
+# Options defined in cinder.volume.drivers.ibm.gpfs
#
# Specifies the path of the GPFS directory where Block Storage
@@ -1134,364 +1449,40 @@
# may take a significantly longer time. (boolean value)
#gpfs_sparse_volumes=true
-
-#
-# Options defined in cinder.volume.drivers.hds.hds
-#
-
-# configuration file for HDS cinder plugin for HUS (string
-# value)
-#hds_cinder_config_file=/opt/hds/hus/cinder_hus_conf.xml
-
-
-#
-# Options defined in cinder.volume.drivers.huawei
-#
-
-# config data for cinder huawei plugin (string value)
-#cinder_huawei_conf_file=/etc/cinder/cinder_huawei_conf.xml
-
-
-#
-# Options defined in cinder.volume.drivers.lvm
-#
-
-# Name for the VG that will contain exported volumes (string
-# value)
-#volume_group=cinder-volumes
-
-# Size of thin provisioning pool (None uses entire cinder VG)
-# (string value)
-#pool_size=<None>
-
-# If set, create lvms with multiple mirrors. Note that this
-# requires lvm_mirrors + 2 pvs with available space (integer
-# value)
-#lvm_mirrors=0
-
-# Type of LVM volumes to deploy; (default or thin) (string
-# value)
-#lvm_type=default
-
-
-#
-# Options defined in cinder.volume.drivers.netapp.options
-#
-
-# Vfiler to use for provisioning (string value)
-#netapp_vfiler=<None>
-
-# User name for the storage controller (string value)
-#netapp_login=<None>
-
-# Password for the storage controller (string value)
-#netapp_password=<None>
-
-# Cluster vserver to use for provisioning (string value)
-#netapp_vserver=<None>
-
-# Host name for the storage controller (string value)
-#netapp_server_hostname=<None>
-
-# Port number for the storage controller (integer value)
-#netapp_server_port=80
-
-# Threshold available percent to start cache cleaning.
-# (integer value)
-#thres_avl_size_perc_start=20
-
-# Threshold available percent to stop cache cleaning. (integer
-# value)
-#thres_avl_size_perc_stop=60
-
-# Threshold minutes after which cache file can be cleaned.
-# (integer value)
-#expiry_thres_minutes=720
-
-# Volume size multiplier to ensure while creation (floating
-# point value)
-#netapp_size_multiplier=1.2
-
-# Comma separated volumes to be used for provisioning (string
-# value)
-#netapp_volume_list=<None>
-
-# Storage family type. (string value)
-#netapp_storage_family=ontap_cluster
-
-# Storage protocol type. (string value)
-#netapp_storage_protocol=<None>
-
-# Transport type protocol (string value)
-#netapp_transport_type=http
-
-
-#
-# Options defined in cinder.volume.drivers.nexenta.options
-#
-
-# IP address of Nexenta SA (string value)
-#nexenta_host=
-
-# HTTP port to connect to Nexenta REST API server (integer
-# value)
-#nexenta_rest_port=2000
-
-# Use http or https for REST connection (default auto) (string
-# value)
-#nexenta_rest_protocol=auto
-
-# User name to connect to Nexenta SA (string value)
-#nexenta_user=admin
-
-# Password to connect to Nexenta SA (string value)
-#nexenta_password=nexenta
-
-# Nexenta target portal port (integer value)
-#nexenta_iscsi_target_portal_port=3260
-
-# pool on SA that will hold all volumes (string value)
-#nexenta_volume=cinder
-
-# IQN prefix for iSCSI targets (string value)
-#nexenta_target_prefix=iqn.1986-03.com.sun:02:cinder-
-
-# prefix for iSCSI target groups on SA (string value)
-#nexenta_target_group_prefix=cinder/
-
-# File with the list of available nfs shares (string value)
-#nexenta_shares_config=/etc/cinder/nfs_shares
-
-# Base dir containing mount points for nfs shares (string
-# value)
-#nexenta_mount_point_base=$state_path/mnt
-
-# Create volumes as sparsed files which take no space.If set
-# to False volume is created as regular file.In such case
-# volume creation takes a lot of time. (boolean value)
-#nexenta_sparsed_volumes=true
-
-# Default compression value for new ZFS folders. (string
-# value)
-#nexenta_volume_compression=on
-
-# Mount options passed to the nfs client. See section of the
-# nfs man page for details (string value)
-#nexenta_mount_options=<None>
-
-# Percent of ACTUAL usage of the underlying volume before no
-# new volumes can be allocated to the volume destination.
-# (floating point value)
-#nexenta_used_ratio=0.95
-
-# This will compare the allocated to available space on the
-# volume destination. If the ratio exceeds this number, the
-# destination will no longer be valid. (floating point value)
-#nexenta_oversub_ratio=1.0
-
-# block size for volumes (blank=default,8KB) (string value)
-#nexenta_blocksize=
-
-# flag to create sparse volumes (boolean value)
-#nexenta_sparse=false
+# Specifies the storage pool that volumes are assigned to. By
+# default, the system storage pool is used. (string value)
+#gpfs_storage_pool=system
#
-# Options defined in cinder.volume.drivers.nfs
-#
-
-# File with the list of available nfs shares (string value)
-#nfs_shares_config=/etc/cinder/nfs_shares
-
-# Create volumes as sparsed files which take no space.If set
-# to False volume is created as regular file.In such case
-# volume creation takes a lot of time. (boolean value)
-#nfs_sparsed_volumes=true
-
-# Percent of ACTUAL usage of the underlying volume before no
-# new volumes can be allocated to the volume destination.
-# (floating point value)
-#nfs_used_ratio=0.95
-
-# This will compare the allocated to available space on the
-# volume destination. If the ratio exceeds this number, the
-# destination will no longer be valid. (floating point value)
-#nfs_oversub_ratio=1.0
-
-# Base dir containing mount points for nfs shares. (string
-# value)
-#nfs_mount_point_base=$state_path/mnt
-
-# Mount options passed to the nfs client. See section of the
-# nfs man page for details. (string value)
-#nfs_mount_options=<None>
-
-
+# Options defined in cinder.volume.drivers.ibm.ibmnas
#
-# Options defined in cinder.volume.drivers.rbd
-#
-
-# the RADOS pool in which rbd volumes are stored (string
-# value)
-#rbd_pool=rbd
-
-# the RADOS client name for accessing rbd volumes - only set
-# when using cephx authentication (string value)
-#rbd_user=<None>
-
-# path to the ceph configuration file to use (string value)
-#rbd_ceph_conf=
-
-# flatten volumes created from snapshots to remove dependency
-# (boolean value)
-#rbd_flatten_volume_from_snapshot=false
-
-# the libvirt uuid of the secret for the rbd_uservolumes
+
+# IP address or Hostname of NAS system. (string value)
+#nas_ip=
+
+# User name to connect to NAS system. (string value)
+#nas_login=admin
+
+# Password to connect to NAS system. (string value)
+#nas_password=
+
+# SSH port to use to connect to NAS system. (integer value)
+#nas_ssh_port=22
+
+# Filename of private key to use for SSH authentication.
# (string value)
-#rbd_secret_uuid=<None>
-
-# where to store temporary image files if the volume driver
-# does not write them directly to the volume (string value)
-#volume_tmp_dir=<None>
-
-# maximum number of nested clones that can be taken of a
-# volume before enforcing a flatten prior to next clone. A
-# value of zero disables cloning (integer value)
-#rbd_max_clone_depth=5
+#nas_private_key=
+
+# IBMNAS platform type to be used as backend storage; valid
+# values are - v7ku : for using IBM Storwize V7000 Unified,
+# sonas : for using IBM Scale Out NAS, gpfs-nas : for using
+# NFS based IBM GPFS deployments. (string value)
+#ibmnas_platform_type=v7ku
#
-# Options defined in cinder.volume.drivers.san.hp.hp_3par_common
-#
-
-# 3PAR WSAPI Server Url like https://<3par ip>:8080/api/v1
-# (string value)
-#hp3par_api_url=
-
-# 3PAR Super user username (string value)
-#hp3par_username=
-
-# 3PAR Super user password (string value)
-#hp3par_password=
-
-# This option is DEPRECATED and no longer used. The 3par
-# domain name to use. (string value)
-#hp3par_domain=<None>
-
-# The CPG to use for volume creation (string value)
-#hp3par_cpg=OpenStack
-
-# The CPG to use for Snapshots for volumes. If empty
-# hp3par_cpg will be used (string value)
-#hp3par_cpg_snap=
-
-# The time in hours to retain a snapshot. You can't delete it
-# before this expires. (string value)
-#hp3par_snapshot_retention=
-
-# The time in hours when a snapshot expires and is deleted.
-# This must be larger than expiration (string value)
-#hp3par_snapshot_expiration=
-
-# Enable HTTP debugging to 3PAR (boolean value)
-#hp3par_debug=false
-
-# List of target iSCSI addresses to use. (list value)
-#hp3par_iscsi_ips=
-
-
-#
-# Options defined in cinder.volume.drivers.san.san
-#
-
-# Use thin provisioning for SAN volumes? (boolean value)
-#san_thin_provision=true
-
-# IP address of SAN controller (string value)
-#san_ip=
-
-# Username for SAN controller (string value)
-#san_login=admin
-
-# Password for SAN controller (string value)
-#san_password=
-
-# Filename of private key to use for SSH authentication
-# (string value)
-#san_private_key=
-
-# Cluster name to use for creating volumes (string value)
-#san_clustername=
-
-# SSH port to use with SAN (integer value)
-#san_ssh_port=22
-
-# Execute commands locally instead of over SSH; use if the
-# volume service is running on the SAN device (boolean value)
-#san_is_local=false
-
-# SSH connection timeout in seconds (integer value)
-#ssh_conn_timeout=30
-
-# Minimum ssh connections in the pool (integer value)
-#ssh_min_pool_conn=1
-
-# Maximum ssh connections in the pool (integer value)
-#ssh_max_pool_conn=5
-
-
-#
-# Options defined in cinder.volume.drivers.san.solaris
-#
-
-# The ZFS path under which to create zvols for volumes.
-# (string value)
-#san_zfs_volume_base=rpool/
-
-
-#
-# Options defined in cinder.volume.drivers.scality
-#
-
-# Path or URL to Scality SOFS configuration file (string
-# value)
-#scality_sofs_config=<None>
-
-# Base dir where Scality SOFS shall be mounted (string value)
-#scality_sofs_mount_point=$state_path/scality
-
-# Path from Scality SOFS root to volume dir (string value)
-#scality_sofs_volume_dir=cinder/volumes
-
-
-#
-# Options defined in cinder.volume.drivers.solaris.zfs
-#
-
-# The base dataset for ZFS cinder volumes.
-#zfs_volume_base=rpool/cinder
-
-
-#
-# Options defined in cinder.volume.drivers.solidfire
-#
-
-# Set 512 byte emulation on volume creation; (boolean value)
-#sf_emulate_512=true
-
-# Allow tenants to specify QOS on create (boolean value)
-#sf_allow_tenant_qos=false
-
-# Create SolidFire accounts with this prefix (string value)
-#sf_account_prefix=cinder
-
-# SolidFire API port. Useful if the device api is behind a
-# proxy on a different port. (integer value)
-#sf_api_port=443
-
-
-#
-# Options defined in cinder.volume.drivers.storwize_svc
+# Options defined in cinder.volume.drivers.ibm.storwize_svc
#
# Storage system storage pool for volumes (string value)
@@ -1542,6 +1533,587 @@
# Allows vdisk to multi host mapping (boolean value)
#storwize_svc_multihostmap_enabled=true
+# Indicate whether svc driver is compatible for NPIV setup. If
+# it is compatible, it will allow no wwpns being returned on
+# get_conn_fc_wwpns during initialize_connection (boolean
+# value)
+#storwize_svc_npiv_compatibility_mode=false
+
+# Allow tenants to specify QOS on create (boolean value)
+#storwize_svc_allow_tenant_qos=false
+
+# If operating in stretched cluster mode, specify the name of
+# the pool in which mirrored copies are stored. Example:
+# "pool2" (string value)
+#storwize_svc_stretched_cluster_partner=<None>
+
+
+#
+# Options defined in cinder.volume.drivers.ibm.xiv_ds8k
+#
+
+# Proxy driver that connects to the IBM Storage Array (string
+# value)
+#xiv_ds8k_proxy=xiv_ds8k_openstack.nova_proxy.XIVDS8KNovaProxy
+
+# Connection type to the IBM Storage Array
+# (fibre_channel|iscsi) (string value)
+#xiv_ds8k_connection_type=iscsi
+
+# CHAP authentication mode, effective only for iscsi
+# (disabled|enabled) (string value)
+#xiv_chap=disabled
+
+
+#
+# Options defined in cinder.volume.drivers.lvm
+#
+
+# Name for the VG that will contain exported volumes (string
+# value)
+#volume_group=cinder-volumes
+
+# If >0, create LVs with multiple mirrors. Note that this
+# requires lvm_mirrors + 2 PVs with available space (integer
+# value)
+#lvm_mirrors=0
+
+# Type of LVM volumes to deploy; (default or thin) (string
+# value)
+#lvm_type=default
+
+
+#
+# Options defined in cinder.volume.drivers.netapp.options
+#
+
+# The vFiler unit on which provisioning of block storage
+# volumes will be done. This option is only used by the driver
+# when connecting to an instance with a storage family of Data
+# ONTAP operating in 7-Mode. Only use this option when
+# utilizing the MultiStore feature on the NetApp storage
+# system. (string value)
+#netapp_vfiler=<None>
+
+# Administrative user account name used to access the storage
+# system or proxy server. (string value)
+#netapp_login=<None>
+
+# Password for the administrative user account specified in
+# the netapp_login option. (string value)
+#netapp_password=<None>
+
+# This option specifies the virtual storage server (Vserver)
+# name on the storage cluster on which provisioning of block
+# storage volumes should occur. If using the NFS storage
+# protocol, this parameter is mandatory for storage service
+# catalog support (utilized by Cinder volume type extra_specs
+# support). If this option is specified, the exports belonging
+# to the Vserver will only be used for provisioning in the
+# future. Block storage volumes on exports not belonging to
+# the Vserver specified by this option will continue to
+# function normally. (string value)
+#netapp_vserver=<None>
+
+# The hostname (or IP address) for the storage system or proxy
+# server. (string value)
+#netapp_server_hostname=<None>
+
+# The TCP port to use for communication with the storage
+# system or proxy server. If not specified, Data ONTAP drivers
+# will use 80 for HTTP and 443 for HTTPS; E-Series will use
+# 8080 for HTTP and 8443 for HTTPS. (integer value)
+#netapp_server_port=<None>
+
+# This option is used to specify the path to the E-Series
+# proxy application on a proxy server. The value is combined
+# with the value of the netapp_transport_type,
+# netapp_server_hostname, and netapp_server_port options to
+# create the URL used by the driver to connect to the proxy
+# application. (string value)
+#netapp_webservice_path=/devmgr/v2
+
+# This option is only utilized when the storage family is
+# configured to eseries. This option is used to restrict
+# provisioning to the specified controllers. Specify the value
+# of this option to be a comma separated list of controller
+# hostnames or IP addresses to be used for provisioning.
+# (string value)
+#netapp_controller_ips=<None>
+
+# Password for the NetApp E-Series storage array. (string
+# value)
+#netapp_sa_password=<None>
+
+# This option is used to restrict provisioning to the
+# specified storage pools. Only dynamic disk pools are
+# currently supported. Specify the value of this option to be
+# a comma separated list of disk pool names to be used for
+# provisioning. (string value)
+#netapp_storage_pools=<None>
+
+# This option is used to define how the controllers in the
+# E-Series storage array will work with the particular
+# operating system on the hosts that are connected to it.
+# (string value)
+#netapp_eseries_host_type=linux_dm_mp
+
+# If the percentage of available space for an NFS share has
+# dropped below the value specified by this option, the NFS
+# image cache will be cleaned. (integer value)
+#thres_avl_size_perc_start=20
+
+# When the percentage of available space on an NFS share has
+# reached the percentage specified by this option, the driver
+# will stop clearing files from the NFS image cache that have
+# not been accessed in the last M minutes, where M is the
+# value of the expiry_thres_minutes configuration option.
+# (integer value)
+#thres_avl_size_perc_stop=60
+
+# This option specifies the threshold for last access time for
+# images in the NFS image cache. When a cache cleaning cycle
+# begins, images in the cache that have not been accessed in
+# the last M minutes, where M is the value of this parameter,
+# will be deleted from the cache to create free space on the
+# NFS share. (integer value)
+#expiry_thres_minutes=720
+
+# This option specifies the path of the NetApp copy offload
+# tool binary. Ensure that the binary has execute permissions
+# set which allow the effective user of the cinder-volume
+# process to execute the file. (string value)
+#netapp_copyoffload_tool_path=<None>
+
+# The quantity to be multiplied by the requested volume size
+# to ensure enough space is available on the virtual storage
+# server (Vserver) to fulfill the volume creation request.
+# (floating point value)
+#netapp_size_multiplier=1.2
+
+# This option is only utilized when the storage protocol is
+# configured to use iSCSI. This option is used to restrict
+# provisioning to the specified controller volumes. Specify
+# the value of this option to be a comma separated list of
+# NetApp controller volume names to be used for provisioning.
+# (string value)
+#netapp_volume_list=<None>
+
+# The storage family type used on the storage system; valid
+# values are ontap_7mode for using Data ONTAP operating in
+# 7-Mode, ontap_cluster for using clustered Data ONTAP, or
+# eseries for using E-Series. (string value)
+#netapp_storage_family=ontap_cluster
+
+# The storage protocol to be used on the data path with the
+# storage system; valid values are iscsi or nfs. (string
+# value)
+#netapp_storage_protocol=<None>
+
+# The transport protocol used when communicating with the
+# storage system or proxy server. Valid values are http or
+# https. (string value)
+#netapp_transport_type=http
+
+
+#
+# Options defined in cinder.volume.drivers.nexenta.options
+#
+
+# IP address of Nexenta SA (string value)
+#nexenta_host=
+
+# HTTP port to connect to Nexenta REST API server (integer
+# value)
+#nexenta_rest_port=2000
+
+# Use http or https for REST connection (default auto) (string
+# value)
+#nexenta_rest_protocol=auto
+
+# User name to connect to Nexenta SA (string value)
+#nexenta_user=admin
+
+# Password to connect to Nexenta SA (string value)
+#nexenta_password=nexenta
+
+# Nexenta target portal port (integer value)
+#nexenta_iscsi_target_portal_port=3260
+
+# SA Pool that holds all volumes (string value)
+#nexenta_volume=cinder
+
+# IQN prefix for iSCSI targets (string value)
+#nexenta_target_prefix=iqn.1986-03.com.sun:02:cinder-
+
+# Prefix for iSCSI target groups on SA (string value)
+#nexenta_target_group_prefix=cinder/
+
+# File with the list of available nfs shares (string value)
+#nexenta_shares_config=/etc/cinder/nfs_shares
+
+# Base directory that contains NFS share mount points (string
+# value)
+#nexenta_mount_point_base=$state_path/mnt
+
+# Enables or disables the creation of volumes as sparsed files
+# that take no space. If disabled (False), volume is created
+# as a regular file, which takes a long time. (boolean value)
+#nexenta_sparsed_volumes=true
+
+# Default compression value for new ZFS folders. (string
+# value)
+#nexenta_volume_compression=on
+
+# If set to True, cache the NexentaStor appliance volroot
+# option value. (boolean value)
+#nexenta_nms_cache_volroot=true
+
+# Enable stream compression, level 1..9. 1 - gives best speed;
+# 9 - gives best compression. (integer value)
+#nexenta_rrmgr_compression=0
+
+# TCP Buffer size in KiloBytes. (integer value)
+#nexenta_rrmgr_tcp_buf_size=4096
+
+# Number of TCP connections. (integer value)
+#nexenta_rrmgr_connections=2
+
+# Block size for volumes (default=blank means 8KB) (string
+# value)
+#nexenta_blocksize=
+
+# Enables or disables the creation of sparse volumes (boolean
+# value)
+#nexenta_sparse=false
+
+
+#
+# Options defined in cinder.volume.drivers.nfs
+#
+
+# File with the list of available nfs shares (string value)
+#nfs_shares_config=/etc/cinder/nfs_shares
+
+# Create volumes as sparsed files which take no space. If set
+# to False, the volume is created as a regular file. In that
+# case volume creation takes a lot of time. (boolean value)
+#nfs_sparsed_volumes=true
+
+# Percent of ACTUAL usage of the underlying volume before no
+# new volumes can be allocated to the volume destination.
+# (floating point value)
+#nfs_used_ratio=0.95
+
+# This will compare the allocated to available space on the
+# volume destination. If the ratio exceeds this number, the
+# destination will no longer be valid. (floating point value)
+#nfs_oversub_ratio=1.0
+
+# Base dir containing mount points for nfs shares. (string
+# value)
+#nfs_mount_point_base=$state_path/mnt
+
+# Mount options passed to the nfs client. See section of the
+# nfs man page for details. (string value)
+#nfs_mount_options=<None>
+
+
+#
+# Options defined in cinder.volume.drivers.nimble
+#
+
+# Nimble Controller pool name (string value)
+#nimble_pool_name=default
+
+# Nimble Subnet Label (string value)
+#nimble_subnet_label=*
+
+
+#
+# Options defined in cinder.volume.drivers.prophetstor.options
+#
+
+# DPL pool uuid in which DPL volumes are stored. (string
+# value)
+#dpl_pool=
+
+# DPL port number. (integer value)
+#dpl_port=8357
+
+
+#
+# Options defined in cinder.volume.drivers.pure
+#
+
+# REST API authorization token. (string value)
+#pure_api_token=<None>
+
+
+#
+# Options defined in cinder.volume.drivers.rbd
+#
+
+# The RADOS pool where rbd volumes are stored (string value)
+#rbd_pool=rbd
+
+# The RADOS client name for accessing rbd volumes - only set
+# when using cephx authentication (string value)
+#rbd_user=<None>
+
+# Path to the ceph configuration file (string value)
+#rbd_ceph_conf=
+
+# Flatten volumes created from snapshots to remove dependency
+# from volume to snapshot (boolean value)
+#rbd_flatten_volume_from_snapshot=false
+
+# The libvirt uuid of the secret for the rbd_user volumes
+# (string value)
+#rbd_secret_uuid=<None>
+
+# Directory where temporary image files are stored when the
+# volume driver does not write them directly to the volume.
+# (string value)
+#volume_tmp_dir=<None>
+
+# Maximum number of nested volume clones that are taken before
+# a flatten occurs. Set to 0 to disable cloning. (integer
+# value)
+#rbd_max_clone_depth=5
+
+# Volumes will be chunked into objects of this size (in
+# megabytes). (integer value)
+#rbd_store_chunk_size=4
+
+# Timeout value (in seconds) used when connecting to ceph
+# cluster. If value < 0, no timeout is set and default
+# librados value is used. (integer value)
+#rados_connect_timeout=-1
+
+
+#
+# Options defined in cinder.volume.drivers.remotefs
+#
+
+# IP address or Hostname of NAS system. (string value)
+#nas_ip=
+
+# User name to connect to NAS system. (string value)
+#nas_login=admin
+
+# Password to connect to NAS system. (string value)
+#nas_password=
+
+# SSH port to use to connect to NAS system. (integer value)
+#nas_ssh_port=22
+
+# Filename of private key to use for SSH authentication.
+# (string value)
+#nas_private_key=
+
+
+#
+# Options defined in cinder.volume.drivers.san.hp.hp_3par_common
+#
+
+# 3PAR WSAPI Server Url like https://<3par ip>:8080/api/v1
+# (string value)
+#hp3par_api_url=
+
+# 3PAR Super user username (string value)
+#hp3par_username=
+
+# 3PAR Super user password (string value)
+#hp3par_password=
+
+# The CPG to use for volume creation (string value)
+#hp3par_cpg=OpenStack
+
+# The CPG to use for Snapshots for volumes. If empty
+# hp3par_cpg will be used (string value)
+#hp3par_cpg_snap=
+
+# The time in hours to retain a snapshot. You can't delete it
+# before this expires. (string value)
+#hp3par_snapshot_retention=
+
+# The time in hours when a snapshot expires and is deleted.
+# This must be larger than expiration (string value)
+#hp3par_snapshot_expiration=
+
+# Enable HTTP debugging to 3PAR (boolean value)
+#hp3par_debug=false
+
+# List of target iSCSI addresses to use. (list value)
+#hp3par_iscsi_ips=
+
+# Enable CHAP authentication for iSCSI connections. (boolean
+# value)
+#hp3par_iscsi_chap_enabled=false
+
+
+#
+# Options defined in cinder.volume.drivers.san.hp.hp_lefthand_rest_proxy
+#
+
+# HP LeftHand WSAPI Server Url like https://<LeftHand
+# ip>:8081/lhos (string value)
+#hplefthand_api_url=<None>
+
+# HP LeftHand Super user username (string value)
+#hplefthand_username=<None>
+
+# HP LeftHand Super user password (string value)
+#hplefthand_password=<None>
+
+# HP LeftHand cluster name (string value)
+#hplefthand_clustername=<None>
+
+# Configure CHAP authentication for iSCSI connections
+# (Default: Disabled) (boolean value)
+#hplefthand_iscsi_chap_enabled=false
+
+# Enable HTTP debugging to LeftHand (boolean value)
+#hplefthand_debug=false
+
+
+#
+# Options defined in cinder.volume.drivers.san.hp.hp_msa_common
+#
+
+# The VDisk to use for volume creation. (string value)
+#msa_vdisk=OpenStack
+
+
+#
+# Options defined in cinder.volume.drivers.san.san
+#
+
+# Use thin provisioning for SAN volumes? (boolean value)
+#san_thin_provision=true
+
+# IP address of SAN controller (string value)
+#san_ip=
+
+# Username for SAN controller (string value)
+#san_login=admin
+
+# Password for SAN controller (string value)
+#san_password=
+
+# Filename of private key to use for SSH authentication
+# (string value)
+#san_private_key=
+
+# Cluster name to use for creating volumes (string value)
+#san_clustername=
+
+# SSH port to use with SAN (integer value)
+#san_ssh_port=22
+
+# Execute commands locally instead of over SSH; use if the
+# volume service is running on the SAN device (boolean value)
+san_is_local=true
+
+# SSH connection timeout in seconds (integer value)
+#ssh_conn_timeout=30
+
+# Minimum ssh connections in the pool (integer value)
+#ssh_min_pool_conn=1
+
+# Maximum ssh connections in the pool (integer value)
+#ssh_max_pool_conn=5
+
+
+#
+# Options defined in cinder.volume.drivers.san.solaris
+#
+
+# The ZFS path under which to create zvols for volumes.
+# (string value)
+#san_zfs_volume_base=rpool/
+
+
+#
+# Options defined in cinder.volume.drivers.scality
+#
+
+# Path or URL to Scality SOFS configuration file (string
+# value)
+#scality_sofs_config=<None>
+
+# Base dir where Scality SOFS shall be mounted (string value)
+#scality_sofs_mount_point=$state_path/scality
+
+# Path from Scality SOFS root to volume dir (string value)
+#scality_sofs_volume_dir=cinder/volumes
+
+
+#
+# Options defined in cinder.volume.drivers.smbfs
+#
+
+# File with the list of available smbfs shares. (string value)
+#smbfs_shares_config=/etc/cinder/smbfs_shares
+
+# Default format that will be used when creating volumes if no
+# volume format is specified. Can be set to: raw, qcow2, vhd
+# or vhdx. (string value)
+#smbfs_default_volume_format=qcow2
+
+# Create volumes as sparsed files which take no space rather
+# than regular files when using raw format, in which case
+# volume creation takes a lot of time. (boolean value)
+#smbfs_sparsed_volumes=true
+
+# Percent of ACTUAL usage of the underlying volume before no
+# new volumes can be allocated to the volume destination.
+# (floating point value)
+#smbfs_used_ratio=0.95
+
+# This will compare the allocated to available space on the
+# volume destination. If the ratio exceeds this number, the
+# destination will no longer be valid. (floating point value)
+#smbfs_oversub_ratio=1.0
+
+# Base dir containing mount points for smbfs shares. (string
+# value)
+#smbfs_mount_point_base=$state_path/mnt
+
+# Mount options passed to the smbfs client. See mount.cifs man
+# page for details. (string value)
+#smbfs_mount_options=noperm,file_mode=0775,dir_mode=0775
+
+
+#
+# Options defined in cinder.volume.drivers.solaris.zfs
+#
+
+# The base dataset for ZFS cinder volumes.
+#zfs_volume_base=rpool/cinder
+
+
+#
+# Options defined in cinder.volume.drivers.solidfire
+#
+
+# Set 512 byte emulation on volume creation; (boolean value)
+#sf_emulate_512=true
+
+# Allow tenants to specify QOS on create (boolean value)
+#sf_allow_tenant_qos=false
+
+# Create SolidFire accounts with this prefix. Any string can
+# be used here, but the string "hostname" is special and will
+# create a prefix using the cinder node hostname (previous
+# default behavior). The default is NO prefix. (string value)
+#sf_account_prefix=<None>
+
+# SolidFire API port. Useful if the device api is behind a
+# proxy on a different port. (integer value)
+#sf_api_port=443
+
#
# Options defined in cinder.volume.drivers.vmware.vmdk
@@ -1568,9 +2140,9 @@
# upon connection related issues. (integer value)
#vmware_api_retry_count=10
-# The interval used for polling remote tasks invoked on VMware
-# ESX/VC server. (integer value)
-#vmware_task_poll_interval=5
+# The interval (in seconds) for polling remote tasks invoked
+# on VMware ESX/VC server. (floating point value)
+#vmware_task_poll_interval=0.5
# Name for the folder in the VC datacenter that will contain
# cinder volumes. (string value)
@@ -1586,6 +2158,16 @@
# less than the configured value. (integer value)
#vmware_max_objects_retrieval=100
+# Optional string specifying the VMware VC server version. The
+# driver attempts to retrieve the version from VMware VC
+# server. Set this configuration only if you want to override
+# the VC server version. (string value)
+#vmware_host_version=<None>
+
+# Directory where virtual disks are stored during volume
+# backup and restore. (string value)
+#vmware_tmp_dir=/tmp
+
#
# Options defined in cinder.volume.drivers.windows.windows
@@ -1596,42 +2178,6 @@
#
-# Options defined in cinder.volume.drivers.xenapi.sm
-#
-
-# NFS server to be used by XenAPINFSDriver (string value)
-#xenapi_nfs_server=<None>
-
-# Path of exported NFS, used by XenAPINFSDriver (string value)
-#xenapi_nfs_serverpath=<None>
-
-# URL for XenAPI connection (string value)
-#xenapi_connection_url=<None>
-
-# Username for XenAPI connection (string value)
-#xenapi_connection_username=root
-
-# Password for XenAPI connection (string value)
-#xenapi_connection_password=<None>
-
-# Base path to the storage repository (string value)
-#xenapi_sr_base_path=/var/run/sr-mount
-
-
-#
-# Options defined in cinder.volume.drivers.xiv_ds8k
-#
-
-# Proxy driver that connects to the IBM Storage Array (string
-# value)
-#xiv_ds8k_proxy=xiv_ds8k_openstack.nova_proxy.XIVDS8KNovaProxy
-
-# Connection type to the IBM Storage Array
-# (fibre_channel|iscsi) (string value)
-#xiv_ds8k_connection_type=iscsi
-
-
-#
# Options defined in cinder.volume.drivers.zadara
#
@@ -1659,12 +2205,6 @@
# Default encryption policy for volumes (boolean value)
#zadara_vol_encrypt=false
-# Default striping mode for volumes (string value)
-#zadara_default_striping_mode=simple
-
-# Default stripe size for volumes (string value)
-#zadara_default_stripesize=64
-
# Default template for VPSA volume names (string value)
#zadara_vol_name_template=OS_%s
@@ -1681,74 +2221,58 @@
# Options defined in cinder.volume.drivers.zfssa.zfssaiscsi
#
-# ZFSSA management hostname/IP
-#zfssa_host=<appliance ip>
-
-# ZFSSA management user login
-#zfssa_auth_user=<user>
-
-# ZFSSA management user password
-#zfssa_auth_password=<password>
-
-# ZFSSA pool name
-#zfssa_pool=<pool>
-
-# ZFSSA project name
-#zfssa_project=<project>
-
-# ZFSSA volume block size
-# Must be one of 512, 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k.
-# This property is optional. If not provided, default is 8k.
-#zfssa_lun_volblocksize=
-
-# ZFSSA flag to create sparse (thin-provisioned) volume
-#zfssa_lun_sparse=False
-
-# ZFSSA flag to turn on compression on the volume
-# Must be one of off, lzjb, gzip-2, gzip, gzip-9.
-# This property is optional. If not provided, default is inherited
-# from the project.
+# Storage pool name. (string value)
+#zfssa_pool=<None>
+
+# Project name. (string value)
+#zfssa_project=<None>
+
+# Block size: 512, 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k.
+# (string value)
+#zfssa_lun_volblocksize=8k
+
+# Flag to enable sparse (thin-provisioned): True, False.
+# (boolean value)
+#zfssa_lun_sparse=false
+
+# Data compression: off, lzjb, gzip-2, gzip, gzip-9. (string
+# value)
#zfssa_lun_compression=
-# ZFSSA flag to set write bias to latency or throughput
-# This property is optional. If not provided, default is inherited
-# from the project.
+# Synchronous write bias: latency or throughput. (string value)
#zfssa_lun_logbias=
-# ZFSSA iSCSI initiator group name
+# iSCSI initiator group. (string value)
#zfssa_initiator_group=
-# Cinder host initiator IQNs. Separate multiple entries with commas.
+# iSCSI initiator IQNs. (comma separated) (string value)
#zfssa_initiator=
-# Cinder host initiator CHAP user.
-# This property is optional. Comment out the line if CHAP authentication is
-# not used.
+# iSCSI initiator CHAP user. (string value)
#zfssa_initiator_user=
-# Cinder host initiator CHAP password.
-# This property is optional. Comment out the line if CHAP authentication is
-# not used.
+# iSCSI initiator CHAP password. (string value)
#zfssa_initiator_password=
-# ZFSSA iSCSI target group name
-#zfssa_target_group=
-
-# ZFSSA iSCSI target CHAP user.
-# This property is optional. Comment out the line if CHAP authentication is
-# not used.
+# iSCSI target group name. (string value)
+#zfssa_target_group=tgt-grp
+
+# iSCSI target CHAP user. (string value)
#zfssa_target_user=
-# ZFSSA iSCSI target CHAP password.
-# This property is optional. Comment out the line if CHAP authentication is
-# not used.
+# iSCSI target CHAP password. (string value)
#zfssa_target_password=
-# ZFSSA iSCSI target portal (data-ip:port)
-#zfssa_target_portal=<data ip address>:3260
-
-# ZFSSA iSCSI target network interfaces (separate multiple entries with comma)
-#zfssa_target_interfaces=<device>
+# iSCSI target portal (Data-IP:Port, w.x.y.z:3260). (string
+# value)
+#zfssa_target_portal=<None>
+
+# Network interfaces of iSCSI targets. (comma separated)
+# (string value)
+#zfssa_target_interfaces=<None>
+
+# REST connection timeout. (seconds) (integer value)
+#zfssa_rest_timeout=<None>
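+
+# For illustration only (placeholder values, not defaults): a ZFSSA
+# iSCSI backend might be configured with settings along the lines of
+#   zfssa_pool=mypool
+#   zfssa_project=cinder
+#   zfssa_initiator_group=cinder-initiators
+#   zfssa_target_portal=192.0.2.10:3260
+#   zfssa_target_interfaces=e1000g0
+# Use the pool, project, and network interface names defined on the
+# appliance itself.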
#
@@ -1773,14 +2297,230 @@
# (boolean value)
#volume_service_inithost_offload=false
+# FC Zoning mode configured (string value)
+#zoning_mode=none
+
+# User defined capabilities, a JSON formatted string
+# specifying key/value pairs. (string value)
+#extra_capabilities={}
+
+
+[BRCD_FABRIC_EXAMPLE]
+
+#
+# Options defined in cinder.zonemanager.drivers.brocade.brcd_fabric_opts
+#
+
+# Management IP of fabric (string value)
+#fc_fabric_address=
+
+# Fabric user ID (string value)
+#fc_fabric_user=
+
+# Password for user (string value)
+#fc_fabric_password=
+
+# Connecting port (integer value)
+#fc_fabric_port=22
+
+# overridden zoning policy (string value)
+#zoning_policy=initiator-target
+
+# overridden zoning activation state (boolean value)
+#zone_activate=true
+
+# overridden zone name prefix (string value)
+#zone_name_prefix=<None>
+
+# Principal switch WWN of the fabric (string value)
+#principal_switch_wwn=<None>
+
+
+[CISCO_FABRIC_EXAMPLE]
+
+#
+# Options defined in cinder.zonemanager.drivers.cisco.cisco_fabric_opts
+#
+
+# Management IP of fabric (string value)
+#cisco_fc_fabric_address=
+
+# Fabric user ID (string value)
+#cisco_fc_fabric_user=
+
+# Password for user (string value)
+#cisco_fc_fabric_password=
+
+# Connecting port (integer value)
+#cisco_fc_fabric_port=22
+
+# overridden zoning policy (string value)
+#cisco_zoning_policy=initiator-target
+
+# overridden zoning activation state (boolean value)
+#cisco_zone_activate=true
+
+# overridden zone name prefix (string value)
+#cisco_zone_name_prefix=<None>
+
+# VSAN of the Fabric (string value)
+#cisco_zoning_vsan=<None>
+
+
+[database]
#
-# Options defined in cinder.volume.utils
+# Options defined in oslo.db
#
-# The default block size used when copying/clearing volumes
-# (string value)
-#volume_dd_blocksize=1M
+# The file name to use with SQLite. (string value)
+#sqlite_db=oslo.sqlite
+
+# If True, SQLite uses synchronous mode. (boolean value)
+#sqlite_synchronous=true
+
+# The back end to use for the database. (string value)
+# Deprecated group/name - [DEFAULT]/db_backend
+#backend=sqlalchemy
+
+# The SQLAlchemy connection string to use to connect to the
+# database. (string value)
+# Deprecated group/name - [DEFAULT]/sql_connection
+# Deprecated group/name - [DATABASE]/sql_connection
+# Deprecated group/name - [sql]/connection
+connection=mysql://%SERVICE_USER%:%SERVICE_PASSWORD%@localhost/cinder
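+# Illustrative only: once the %SERVICE_...% placeholders are filled in
+# at configuration time, the value takes a form such as
+#   connection=mysql://cinder:<password>@localhost/cinder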
+
+# The SQLAlchemy connection string to use to connect to the
+# slave database. (string value)
+#slave_connection=<None>
+
+# The SQL mode to be used for MySQL sessions. This option,
+# including the default, overrides any server-set SQL mode. To
+# use whatever SQL mode is set by the server configuration,
+# set this to no value. Example: mysql_sql_mode= (string
+# value)
+#mysql_sql_mode=TRADITIONAL
+
+# Timeout before idle SQL connections are reaped. (integer
+# value)
+# Deprecated group/name - [DEFAULT]/sql_idle_timeout
+# Deprecated group/name - [DATABASE]/sql_idle_timeout
+# Deprecated group/name - [sql]/idle_timeout
+#idle_timeout=3600
+
+# Minimum number of SQL connections to keep open in a pool.
+# (integer value)
+# Deprecated group/name - [DEFAULT]/sql_min_pool_size
+# Deprecated group/name - [DATABASE]/sql_min_pool_size
+#min_pool_size=1
+
+# Maximum number of SQL connections to keep open in a pool.
+# (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_pool_size
+# Deprecated group/name - [DATABASE]/sql_max_pool_size
+#max_pool_size=<None>
+
+# Maximum number of database connection retries during
+# startup. Set to -1 to specify an infinite retry count.
+# (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_retries
+# Deprecated group/name - [DATABASE]/sql_max_retries
+#max_retries=10
+
+# Interval between retries of opening a SQL connection.
+# (integer value)
+# Deprecated group/name - [DEFAULT]/sql_retry_interval
+# Deprecated group/name - [DATABASE]/reconnect_interval
+#retry_interval=10
+
+# If set, use this value for max_overflow with SQLAlchemy.
+# (integer value)
+# Deprecated group/name - [DEFAULT]/sql_max_overflow
+# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
+#max_overflow=<None>
+
+# Verbosity of SQL debugging information: 0=None,
+# 100=Everything. (integer value)
+# Deprecated group/name - [DEFAULT]/sql_connection_debug
+#connection_debug=0
+
+# Add Python stack traces to SQL as comment strings. (boolean
+# value)
+# Deprecated group/name - [DEFAULT]/sql_connection_trace
+#connection_trace=false
+
+# If set, use this value for pool_timeout with SQLAlchemy.
+# (integer value)
+# Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
+#pool_timeout=<None>
+
+# Enable the experimental use of database reconnect on
+# connection lost. (boolean value)
+#use_db_reconnect=false
+
+# Seconds between database connection retries. (integer value)
+#db_retry_interval=1
+
+# If True, increases the interval between database connection
+# retries up to db_max_retry_interval. (boolean value)
+#db_inc_retry_interval=true
+
+# If db_inc_retry_interval is set, the maximum seconds between
+# database connection retries. (integer value)
+#db_max_retry_interval=10
+
+# Maximum database connection retries before error is raised.
+# Set to -1 to specify an infinite retry count. (integer
+# value)
+#db_max_retries=20
+
+
+#
+# Options defined in oslo.db.concurrency
+#
+
+# Enable the experimental use of thread pooling for all DB API
+# calls (boolean value)
+# Deprecated group/name - [DEFAULT]/dbapi_use_tpool
+#use_tpool=false
+
+
+[fc-zone-manager]
+
+#
+# Options defined in cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver
+#
+
+# Southbound connector for zoning operation (string value)
+#brcd_sb_connector=cinder.zonemanager.drivers.brocade.brcd_fc_zone_client_cli.BrcdFCZoneClientCLI
+
+
+#
+# Options defined in cinder.zonemanager.drivers.cisco.cisco_fc_zone_driver
+#
+
+# Southbound connector for zoning operation (string value)
+#cisco_sb_connector=cinder.zonemanager.drivers.cisco.cisco_fc_zone_client_cli.CiscoFCZoneClientCLI
+
+
+#
+# Options defined in cinder.zonemanager.fc_zone_manager
+#
+
+# FC Zone Driver responsible for zone management (string
+# value)
+#zone_driver=cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
+
+# Zoning policy configured by user (string value)
+#zoning_policy=initiator-target
+
+# Comma separated list of fibre channel fabric names. This
+# list of names is used to retrieve other SAN credentials for
+# connecting to each SAN fabric (string value)
+#fc_fabric_names=<None>
+
+# FC San Lookup Service (string value)
+#fc_san_lookup_service=cinder.zonemanager.drivers.brocade.brcd_fc_san_lookup_service.BrcdFCSanLookupService
[keymgr]
@@ -1803,75 +2543,303 @@
#fixed_key=<None>
-[database]
-
#
-# Options defined in cinder.openstack.common.db.api
+# Options defined in cinder.keymgr.key_mgr
#
-# The backend to use for db (string value)
-#backend=sqlalchemy
-
-# Enable the experimental use of thread pooling for all DB API
-# calls (boolean value)
-#use_tpool=false
-
+# Authentication url for encryption service. (string value)
+#encryption_auth_url=http://localhost:5000/v2.0
+
+# Url for encryption service. (string value)
+#encryption_api_url=http://localhost:9311/v1
+
+
+[keystone_authtoken]
#
-# Options defined in cinder.openstack.common.db.sqlalchemy.session
+# Options defined in keystonemiddleware.auth_token
#
-# The SQLAlchemy connection string used to connect to the
-# database (string value)
-connection=sqlite:///$state_path/$sqlite_db
-
-# timeout before idle sql connections are reaped (integer
+# Prefix to prepend at the beginning of the path. Deprecated,
+# use identity_uri. (string value)
+#auth_admin_prefix=
+
+# Host providing the admin Identity API endpoint. Deprecated,
+# use identity_uri. (string value)
+#auth_host=127.0.0.1
+
+# Port of the admin Identity API endpoint. Deprecated, use
+# identity_uri. (integer value)
+#auth_port=35357
+
+# Protocol of the admin Identity API endpoint (http or https).
+# Deprecated, use identity_uri. (string value)
+#auth_protocol=https
+
+# Complete public Identity API endpoint (string value)
+auth_uri=http://127.0.0.1:5000/v2.0/
+
+# Complete admin Identity API endpoint. This should specify
+# the unversioned root endpoint e.g. https://localhost:35357/
+# (string value)
+identity_uri=http://127.0.0.1:35357/
+
+# API version of the admin Identity API endpoint (string
+# value)
+#auth_version=<None>
+
+# Do not handle authorization requests within the middleware,
+# but delegate the authorization decision to downstream WSGI
+# components (boolean value)
+#delay_auth_decision=false
+
+# Request timeout value for communicating with Identity API
+# server. (boolean value)
+#http_connect_timeout=<None>
+
+# How many times are we trying to reconnect when communicating
+# with Identity API Server. (integer value)
+#http_request_max_retries=3
+
+# This option is deprecated and may be removed in a future
+# release. Single shared secret with the Keystone
+# configuration used for bootstrapping a Keystone
+# installation, or otherwise bypassing the normal
+# authentication process. This option should not be used, use
+# `admin_user` and `admin_password` instead. (string value)
+#admin_token=<None>
+
+# Keystone account username (string value)
+admin_user=%SERVICE_USER%
+
+# Keystone account password (string value)
+admin_password=%SERVICE_PASSWORD%
+
+# Keystone service account tenant name to validate user tokens
+# (string value)
+admin_tenant_name=%SERVICE_TENANT_NAME%
+
+# Env key for the swift cache (string value)
+#cache=<None>
+
+# Required if Keystone server requires client certificate
+# (string value)
+#certfile=<None>
+
+# Required if Keystone server requires client certificate
+# (string value)
+#keyfile=<None>
+
+# A PEM encoded Certificate Authority to use when verifying
+# HTTPs connections. Defaults to system CAs. (string value)
+#cafile=<None>
+
+# Verify HTTPS connections. (boolean value)
+#insecure=false
+
+# Directory used to cache files related to PKI tokens (string
# value)
-#idle_timeout=3600
-
-# Minimum number of SQL connections to keep open in a pool
-# (integer value)
-#min_pool_size=1
-
-# Maximum number of SQL connections to keep open in a pool
-# (integer value)
-#max_pool_size=5
-
-# maximum db connection retries during startup. (setting -1
-# implies an infinite retry count) (integer value)
-#max_retries=10
-
-# interval between retries of opening a sql connection
-# (integer value)
-#retry_interval=10
-
-# If set, use this value for max_overflow with sqlalchemy
-# (integer value)
-#max_overflow=<None>
-
-# Verbosity of SQL debugging information. 0=None,
-# 100=Everything (integer value)
-#connection_debug=0
-
-# Add python stack traces to SQL as comment strings (boolean
+signing_dir=$state_path/keystone-signing
+
+# Optionally specify a list of memcached server(s) to use for
+# caching. If left undefined, tokens will instead be cached
+# in-process. (list value)
+# Deprecated group/name - [DEFAULT]/memcache_servers
+#memcached_servers=<None>
+
+# In order to prevent excessive effort spent validating
+# tokens, the middleware caches previously-seen tokens for a
+# configurable duration (in seconds). Set to -1 to disable
+# caching completely. (integer value)
+#token_cache_time=300
+
+# Determines the frequency at which the list of revoked tokens
+# is retrieved from the Identity service (in seconds). A high
+# number of revocation events combined with a low cache
+# duration may significantly reduce performance. (integer
+# value)
+#revocation_cache_time=10
+
+# (optional) if defined, indicate whether token data should be
+# authenticated or authenticated and encrypted. Acceptable
+# values are MAC or ENCRYPT. If MAC, token data is
+# authenticated (with HMAC) in the cache. If ENCRYPT, token
+# data is encrypted and authenticated in the cache. If the
+# value is not one of these options or empty, auth_token will
+# raise an exception on initialization. (string value)
+#memcache_security_strategy=<None>
+
+# (optional, mandatory if memcache_security_strategy is
+# defined) this string is used for key derivation. (string
# value)
-#connection_trace=false
+#memcache_secret_key=<None>
+
+# (optional) number of seconds memcached server is considered
+# dead before it is tried again. (integer value)
+#memcache_pool_dead_retry=300
+
+# (optional) max total number of open connections to every
+# memcached server. (integer value)
+#memcache_pool_maxsize=10
+
+# (optional) socket timeout in seconds for communicating with
+# a memcache server. (integer value)
+#memcache_pool_socket_timeout=3
+
+# (optional) number of seconds a connection to memcached is
+# held unused in the pool before it is closed. (integer value)
+#memcache_pool_unused_timeout=60
+
+# (optional) number of seconds that an operation will wait to
+# get a memcache client connection from the pool. (integer
+# value)
+#memcache_pool_conn_get_timeout=10
+
+# (optional) use the advanced (eventlet safe) memcache client
+# pool. The advanced pool will only work under python 2.x.
+# (boolean value)
+#memcache_use_advanced_pool=false
+
+# (optional) indicate whether to set the X-Service-Catalog
+# header. If False, middleware will not ask for service
+# catalog on token validation and will not set the X-Service-
+# Catalog header. (boolean value)
+#include_service_catalog=true
+
+# Used to control the use and type of token binding. Can be
+# set to: "disabled" to not check token binding. "permissive"
+# (default) to validate binding information if the bind type
+# is of a form known to the server and ignore it if not.
+# "strict" like "permissive" but if the bind type is unknown
+# the token will be rejected. "required" any form of token
+# binding is needed to be allowed. Finally the name of a
+# binding method that must be present in tokens. (string
+# value)
+#enforce_token_bind=permissive
+
+# If true, the revocation list will be checked for cached
+# tokens. This requires that PKI tokens are configured on the
+# Keystone server. (boolean value)
+#check_revocations_for_cached=false
+
+# Hash algorithms to use for hashing PKI tokens. This may be a
+# single algorithm or multiple. The algorithms are those
+# supported by Python standard hashlib.new(). The hashes will
+# be tried in the order given, so put the preferred one first
+# for performance. The result of the first hash will be stored
+# in the cache. This will typically be set to multiple values
+# only while migrating from a less secure algorithm to a more
+# secure one. Once all the old tokens are expired this option
+# should be set to a single value for better performance.
+# (list value)
+#hash_algorithms=md5
[matchmaker_redis]
#
-# Options defined in cinder.openstack.common.rpc.matchmaker_redis
+# Options defined in oslo.messaging
#
-# Host to locate redis (string value)
+# Host to locate redis. (string value)
#host=127.0.0.1
# Use this port to connect to redis host. (integer value)
#port=6379
-# Password for Redis server. (optional) (string value)
+# Password for Redis server (optional). (string value)
#password=<None>
-# Total option count: 401
+[matchmaker_ring]
+
+#
+# Options defined in oslo.messaging
+#
+
+# Matchmaker ring file (JSON). (string value)
+# Deprecated group/name - [DEFAULT]/matchmaker_ringfile
+#ringfile=/etc/oslo/matchmaker_ring.json
+
+
+[oslo_messaging_amqp]
+
+#
+# Options defined in oslo.messaging
+#
+# NOTE: Options in this group are supported when using oslo.messaging >=1.5.0.
+
+# address prefix used when sending to a specific server
+# (string value)
+#server_request_prefix=exclusive
+
+# address prefix used when broadcasting to all servers (string
+# value)
+#broadcast_prefix=broadcast
+
+# address prefix when sending to any server in group (string
+# value)
+#group_request_prefix=unicast
+
+# Name for the AMQP container (string value)
+#container_name=<None>
+
+# Timeout for inactive connections (in seconds) (integer
+# value)
+#idle_timeout=0
+
+# Debug: dump AMQP frames to stdout (boolean value)
+#trace=false
+
+# CA certificate PEM file for verifying server certificate
+# (string value)
+#ssl_ca_file=
+
+# Identifying certificate PEM file to present to clients
+# (string value)
+#ssl_cert_file=
+
+# Private key PEM file used to sign cert_file certificate
+# (string value)
+#ssl_key_file=
+
+# Password for decrypting ssl_key_file (if encrypted) (string
+# value)
+#ssl_key_password=<None>
+
+# Accept clients using either SSL or plain TCP (boolean value)
+#allow_insecure_clients=false
+
+
+[profiler]
+
+#
+# Options defined in cinder.service
+#
+
+# If False fully disable profiling feature. (boolean value)
+#profiler_enabled=false
+
+# If False doesn't trace SQL requests. (boolean value)
+#trace_sqlalchemy=false
+
+
+[ssl]
+
+#
+# Options defined in cinder.openstack.common.sslutils
+#
+
+# CA certificate file to use to verify connecting clients
+# (string value)
+#ca_file=<None>
+
+# Certificate file to use when starting the server securely
+# (string value)
+#cert_file=<None>
+
+# Private key file to use when starting the server securely
+# (string value)
+#key_file=<None>
+
+
--- a/components/openstack/cinder/files/cinder.exec_attr Fri Mar 20 03:13:26 2015 -0700
+++ b/components/openstack/cinder/files/cinder.exec_attr Thu Mar 19 14:41:20 2015 -0700
@@ -1,6 +1,3 @@
-OpenStack Block Storage Management:solaris:cmd:RO::\
-/usr/bin/cinder-clear-rabbit-queues:uid=cinder;gid=cinder
-
OpenStack Block Storage Management:solaris:cmd:RO::/usr/bin/cinder-manage:\
uid=cinder;gid=cinder
--- a/components/openstack/cinder/files/cinder.prof_attr Fri Mar 20 03:13:26 2015 -0700
+++ b/components/openstack/cinder/files/cinder.prof_attr Thu Mar 19 14:41:20 2015 -0700
@@ -1,10 +1,8 @@
OpenStack Block Storage Management:RO::\
Manage OpenStack Cinder:\
-auths=solaris.admin.edit/etc/cinder/api-paste.ini,\
-solaris.admin.edit/etc/cinder/cinder.conf,\
-solaris.admin.edit/etc/cinder/cinder_emc_config.xml,\
-solaris.admin.edit/etc/cinder/logging.conf,\
-solaris.admin.edit/etc/cinder/policy.json,\
+auths=solaris.admin.edit/etc/cinder/*.conf,\
+solaris.admin.edit/etc/cinder/*.ini,\
+solaris.admin.edit/etc/cinder/*.json,\
solaris.smf.manage.cinder,\
solaris.smf.value.cinder;\
defaultpriv={file_dac_read}\:/var/svc/log/application-openstack-*
--- a/components/openstack/cinder/files/cinder.user_attr Fri Mar 20 03:13:26 2015 -0700
+++ b/components/openstack/cinder/files/cinder.user_attr Thu Mar 19 14:41:20 2015 -0700
@@ -1,1 +1,1 @@
-cinder::RO::profiles=cinder-volume
+cinder::RO::profiles=OpenStack Block Storage Management,cinder-volume
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/components/openstack/cinder/files/solaris/solarisfc.py Thu Mar 19 14:41:20 2015 -0700
@@ -0,0 +1,162 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (c) 2015, Oracle and/or its affiliates. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Generic Solaris Fibre Channel utilities."""
+
+import os
+import time
+
+from cinder.brick import exception
+from cinder.openstack.common.gettextutils import _
+from cinder.openstack.common import log as logging
+from cinder.openstack.common import processutils as putils
+
+LOG = logging.getLogger(__name__)
+
+
+class SolarisFibreChannel(object):
+ def __init__(self, *args, **kwargs):
+ self.execute = putils.execute
+
+ def _get_fc_hbas(self):
+ """Get Fibre Channel HBA information."""
+ out = None
+ try:
+ out, err = self.execute('/usr/sbin/fcinfo', 'hba-port')
+ except putils.ProcessExecutionError as err:
+ return None
+
+ if out is None:
+ LOG.info(_("Cannot find any Fibre Channel HBAs"))
+ return None
+
+ hbas = []
+ hba = {}
+ for line in out.splitlines():
+ line = line.strip()
+ # Collect the following hba-port data:
+ # 1: Port WWN
+ # 2: State (online|offline)
+ # 3: Node WWN
+ if line.startswith("HBA Port WWN:"):
+ # New HBA port entry
+ hba = {}
+ wwpn = line.split()[-1]
+ hba['port_name'] = wwpn
+ continue
+ elif line.startswith("Port Mode:"):
+ mode = line.split()[-1]
+ # Skip Target mode ports
+ if mode != 'Initiator':
+ break
+ elif line.startswith("State:"):
+ state = line.split()[-1]
+ hba['port_state'] = state
+ continue
+ elif line.startswith("Node WWN:"):
+ wwnn = line.split()[-1]
+ hba['node_name'] = wwnn
+ continue
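+            # A complete entry has collected port WWN, state and node WWN;
+            # flush it to the list before starting on the next port.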
+ if len(hba) == 3:
+ hbas.append(hba)
+ hba = {}
+ return hbas
+
+ def get_fc_wwnns(self):
+ """Get Fibre Channel WWNNs from the system, if any."""
+ hbas = self._get_fc_hbas()
+ if hbas is None:
+ return None
+
+ wwnns = []
+ for hba in hbas:
+ if hba['port_state'] == 'online':
+ wwnn = hba['node_name']
+ wwnns.append(wwnn)
+ return wwnns
+
+ def get_fc_wwpns(self):
+ """Get Fibre Channel WWPNs from the system, if any."""
+ hbas = self._get_fc_hbas()
+ if hbas is None:
+ return None
+
+ wwpns = []
+ for hba in hbas:
+ if hba['port_state'] == 'online':
+ wwpn = hba['port_name']
+ wwpns.append(wwpn)
+ return wwpns
+
+ def _refresh_connection(self):
+ """Force the link reinitialization to make the LUN present."""
+ wwpns = self.get_fc_wwpns()
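+        # 'fcadm force-lip' reinitializes the link on each local initiator
+        # port so that newly mapped LUNs become visible to the host.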
+ for wwpn in wwpns:
+ self.execute('/usr/sbin/fcadm', 'force-lip', wwpn)
+
+ def get_device_path(self, wwn):
+ """Get the Device Name of the WWN"""
+ try:
+ out, err = self.execute('/usr/sbin/fcinfo', 'logical-unit', '-v')
+ except putils.ProcessExecutionError as err:
+ return None
+
+ host_dev = None
+ remote_port = None
+ if out is not None:
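+            # The parser assumes 'fcinfo logical-unit -v' prints the OS
+            # device name of a LU before the remote port WWN it is seen
+            # through; the device path is returned once the WWN matches.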
+ for line in [l.strip() for l in out.splitlines()]:
+ if line.startswith("OS Device Name:"):
+ host_dev = line.split()[-1]
+ if line.startswith("Remote Port WWN:"):
+ remote_port = line.split()[-1]
+ if remote_port == wwn:
+ return host_dev
+
+ return None
+
+ def connect_volume(self, connection_properties, scan_tries):
+ """Attach the volume to instance_name.
+
+        connection_properties for Fibre Channel must include:
+        target_wwn - World Wide Name (or list of WWNs) of the remote port
+        target_lun - LUN id of the volume
+ """
+ device_info = {'type': 'block'}
+ target_wwn = connection_properties['target_wwn']
+        # target_wwn may be a single WWN string or a list of WWNs;
+        # use the first entry when a list is given so 'wwn' is always set.
+        if isinstance(target_wwn, list):
+            wwn = target_wwn[0]
+        else:
+            wwn = target_wwn
+
+ # The scsi_vhci disk node is not always present immediately.
+ # Sometimes we need to reinitialize the connection to trigger
+ # a refresh.
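+        # Retry with a quadratically growing delay (1, 4, 9, ... seconds),
+        # forcing a link reinitialization after each unsuccessful attempt.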
+ for i in range(1, scan_tries):
+ LOG.debug("Looking for Fibre Channel device")
+ host_dev = self.get_device_path(wwn)
+
+ if host_dev is not None and os.path.exists(host_dev):
+ break
+ else:
+ self._refresh_connection()
+ time.sleep(i ** 2)
+ else:
+ msg = _("Fibre Channel volume device not found.")
+ LOG.error(msg)
+ raise exception.NoFibreChannelVolumeDeviceFound()
+
+ device_info['path'] = host_dev
+ return device_info
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/components/openstack/cinder/files/solaris/solarisiscsi.py Thu Mar 19 14:41:20 2015 -0700
@@ -0,0 +1,118 @@
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# Copyright (c) 2015, Oracle and/or its affiliates. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Generic Solaris iSCSI utilities."""
+
+import os
+import time
+
+from cinder.brick import exception
+from cinder.openstack.common.gettextutils import _
+from cinder.openstack.common import log as logging
+from cinder.openstack.common import processutils as putils
+
+LOG = logging.getLogger(__name__)
+
+
+class SolarisiSCSI(object):
+ def __init__(self, *args, **kwargs):
+ self.execute = putils.execute
+
+ def disconnect_iscsi(self):
+ """Disable the iSCSI discovery method to detach the volume
+ from instance_name.
+ """
+ self.execute('/usr/sbin/iscsiadm', 'modify', 'discovery',
+ '--sendtargets', 'disable')
+
+ def _get_device_path(self, connection_properties):
+ """Get the device path from the target info."""
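+        # 'iscsiadm list target -S' reports the OS device node backing the
+        # target; the path is the last field of its "OS Device Name:" line.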
+ (out, _err) = self.execute('/usr/sbin/iscsiadm', 'list',
+ 'target', '-S',
+ connection_properties['target_iqn'])
+
+ for line in [l.strip() for l in out.splitlines()]:
+ if line.startswith("OS Device Name:"):
+ dev_path = line.split()[-1]
+ return dev_path
+        else:
+            LOG.error(_("No device is found for the target %s.") %
+                      connection_properties['target_iqn'])
+            raise exception.VolumeDeviceNotFound(
+                device=connection_properties['target_iqn'])
+
+ def get_initiator(self):
+ """Return the iSCSI initiator node name IQN"""
+ out, err = self.execute('/usr/sbin/iscsiadm', 'list', 'initiator-node')
+
+ # Sample first line of command output:
+ # Initiator node name: iqn.1986-03.com.sun:01:e00000000000.4f757217
+ initiator_name_line = out.splitlines()[0]
+ return initiator_name_line.rsplit(' ', 1)[1]
+
+ def _connect_to_iscsi_portal(self, connection_properties):
+ # TODO(Strony): handle the CHAP authentication
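+        # Register the portal as a discovery address, enable SendTargets
+        # discovery, then confirm the requested target IQN is reported for
+        # that address before returning.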
+ target_ip = connection_properties['target_portal'].split(":")[0]
+ self.execute('/usr/sbin/iscsiadm', 'add', 'discovery-address',
+ target_ip)
+ self.execute('/usr/sbin/iscsiadm', 'modify', 'discovery',
+ '--sendtargets', 'enable')
+ (out, _err) = self.execute('/usr/sbin/iscsiadm', 'list',
+ 'discovery-address', '-v',
+ target_ip)
+
+ lines = out.splitlines()
+ if not lines[0].strip().startswith('Discovery Address: ') or \
+ lines[1].strip().startswith('Unable to get targets.'):
+ msg = _("No iSCSI target is found.")
+ LOG.error(msg)
+ raise
+
+ target_iqn = connection_properties['target_iqn']
+ for line in [l.strip() for l in lines]:
+ if line.startswith("Target name:") and \
+ line.split()[-1] == target_iqn:
+ return
+ else:
+ LOG.error(_("No active session is found for the target %s.") %
+ target_iqn)
+ raise
+
+ def connect_volume(self, connection_properties, scan_tries):
+ """Attach the volume to instance_name.
+
+ connection_properties for iSCSI must include:
+ target_portal - ip and optional port
+ target_iqn - iSCSI Qualified Name
+ target_lun - LUN id of the volume
+ """
+ device_info = {'type': 'block'}
+
+ # TODO(Strony): support the iSCSI multipath on Solaris.
+ self._connect_to_iscsi_portal(connection_properties)
+
+ host_device = self._get_device_path(connection_properties)
+
+ # check if it is a valid device path.
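+        # Poll for the device node, backing off i**2 seconds between checks.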
+ for i in range(1, scan_tries):
+ if os.path.exists(host_device):
+ break
+ else:
+ time.sleep(i ** 2)
+ else:
+ raise exception.VolumeDeviceNotFound(device=host_device)
+
+ device_info['path'] = host_device
+ return device_info
--- a/components/openstack/cinder/files/solaris/zfs.py Fri Mar 20 03:13:26 2015 -0700
+++ b/components/openstack/cinder/files/solaris/zfs.py Thu Mar 19 14:41:20 2015 -0700
@@ -2,7 +2,7 @@
# Copyright (c) 2012 OpenStack LLC.
# All Rights Reserved.
#
-# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved.
+# Copyright (c) 2014, 2015, Oracle and/or its affiliates. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
@@ -20,13 +20,13 @@
"""
import abc
-import os
import time
from oslo.config import cfg
from cinder import exception
from cinder.image import image_utils
+from cinder.i18n import _
from cinder.openstack.common import log as logging
from cinder.openstack.common import processutils
from cinder.volume import driver
@@ -117,7 +117,9 @@
well after it is removed.
"""
zvol = self._get_zvol_path(volume)
- if not os.path.exists(zvol):
+ try:
+ (out, _err) = self._execute('/usr/bin/ls', zvol)
+ except processutils.ProcessExecutionError:
LOG.debug(_("The volume path '%s' doesn't exist") % zvol)
return
@@ -171,10 +173,13 @@
"""Initialize the connection and returns connection info."""
volume_path = '%s/volume-%s' % (self.configuration.zfs_volume_base,
volume['id'])
+ properties = {}
+ properties['device_path'] = self._get_zvol_path(volume)
+
return {
'driver_volume_type': 'local',
'volume_path': volume_path,
- 'data': {}
+ 'data': properties
}
def terminate_connection(self, volume, connector, **kwargs):
@@ -378,9 +383,9 @@
view_and_lun['lun'] = int(line.split()[2])
if view_and_lun['view'] is None or view_and_lun['lun'] is None:
- LOG.error(_("Failed to get the view_entry or LUN of the LU '%s'.")
- % lu)
- raise
+ err_msg = (_("Failed to get the view_entry or LUN of the LU '%s'.")
+ % lu)
+ raise exception.VolumeBackendAPIException(data=err_msg)
else:
LOG.debug(_("The view_entry and LUN of LU '%s' are '%s' and '%d'.")
% (lu, view_and_lun['view'], view_and_lun['lun']))
@@ -422,7 +427,7 @@
# Add a view entry to the logical unit with the specified LUN, 8776
if luid is not None:
- self._stmf_execute('/usr/sbin/stmfadm', 'add-view', '-n', 8776,
+ self._stmf_execute('/usr/sbin/stmfadm', 'add-view', '-n', '8776',
'-t', target_group, luid)
def remove_export(self, context, volume):
@@ -486,6 +491,7 @@
properties['target_discovered'] = True
properties['target_iqn'] = target_name
+
properties['target_portal'] = ('%s:%d' %
(self.configuration.iscsi_ip_address,
self.configuration.iscsi_port))
--- a/components/openstack/cinder/files/zfssa/__init__.py Fri Mar 20 03:13:26 2015 -0700
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,15 +0,0 @@
-# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-
-# Empty for this release
--- a/components/openstack/cinder/files/zfssa/restclient.py Fri Mar 20 03:13:26 2015 -0700
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,353 +0,0 @@
-# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-"""
-ZFS Storage Appliance REST API Client Programmatic Interface
-"""
-
-import httplib
-import json
-import time
-import urllib2
-import StringIO
-
-from cinder.openstack.common import log
-
-LOG = log.getLogger(__name__)
-
-
-class Status:
- """Result HTTP Status"""
-
- def __init__(self):
- pass
-
- #: Request return OK
- OK = httplib.OK
-
- #: New resource created successfully
- CREATED = httplib.CREATED
-
- #: Command accepted
- ACCEPTED = httplib.ACCEPTED
-
- #: Command returned OK but no data will be returned
- NO_CONTENT = httplib.NO_CONTENT
-
- #: Bad Request
- BAD_REQUEST = httplib.BAD_REQUEST
-
- #: User is not authorized
- UNAUTHORIZED = httplib.UNAUTHORIZED
-
- #: The request is not allowed
- FORBIDDEN = httplib.FORBIDDEN
-
- #: The requested resource was not found
- NOT_FOUND = httplib.NOT_FOUND
-
- #: The request is not allowed
- NOT_ALLOWED = httplib.METHOD_NOT_ALLOWED
-
- #: Request timed out
- TIMEOUT = httplib.REQUEST_TIMEOUT
-
- #: Invalid request
- CONFLICT = httplib.CONFLICT
-
- #: Service Unavailable
- BUSY = httplib.SERVICE_UNAVAILABLE
-
-
-class RestResult(object):
- """Result from a REST API operation"""
- def __init__(self, response=None, err=None):
- """Initialize a RestResult containing the results from a REST call
- :param response: HTTP response
- """
- self.response = response
- self.error = err
- self.data = ""
- self.status = 0
- if self.response is not None:
- self.status = self.response.getcode()
- result = self.response.read()
- while result:
- self.data += result
- result = self.response.read()
-
- if self.error is not None:
- self.status = self.error.code
- self.data = httplib.responses[self.status]
-
- LOG.debug('response code: %s' % self.status)
- LOG.debug('response data: %s' % self.data)
-
- def get_header(self, name):
- """Get an HTTP header with the given name from the results
-
- :param name: HTTP header name
- :return: The header value or None if no value is found
- """
- if self.response is None:
- return None
- info = self.response.info()
- return info.getheader(name)
-
-
-class RestClientError(Exception):
- """Exception for ZFS REST API client errors"""
- def __init__(self, status, name="ERR_INTERNAL", message=None):
-
- """Create a REST Response exception
-
- :param status: HTTP response status
- :param name: The name of the REST API error type
- :param message: Descriptive error message returned from REST call
- """
- Exception.__init__(self, message)
- self.code = status
- self.name = name
- self.msg = message
- if status in httplib.responses:
- self.msg = httplib.responses[status]
-
- def __str__(self):
- return "%d %s %s" % (self.code, self.name, self.msg)
-
-
-class RestClientURL(object):
- """ZFSSA urllib2 client"""
- def __init__(self, url, **kwargs):
- """
- Initialize a REST client.
-
- :param url: The ZFSSA REST API URL
- :key session: HTTP Cookie value of x-auth-session obtained from a
- normal BUI login.
- :key timeout: Time in seconds to wait for command to complete.
- (Default is 60 seconds)
- """
- self.url = url
- self.local = kwargs.get("local", False)
- self.base_path = kwargs.get("base_path", "/api")
- self.timeout = kwargs.get("timeout", 60)
- self.headers = None
- if kwargs.get('session'):
- self.headers['x-auth-session'] = kwargs.get('session')
-
- self.headers = {"content-type": "application/json"}
- self.do_logout = False
- self.auth_str = None
-
- def _path(self, path, base_path=None):
- """build rest url path"""
- if path.startswith("http://") or path.startswith("https://"):
- return path
- if base_path is None:
- base_path = self.base_path
- if not path.startswith(base_path) and not (
- self.local and ("/api" + path).startswith(base_path)):
- path = "%s%s" % (base_path, path)
- if self.local and path.startswith("/api"):
- path = path[4:]
- return self.url + path
-
- def authorize(self):
- """Performs authorization setting x-auth-session"""
- self.headers['authorization'] = 'Basic %s' % self.auth_str
- if 'x-auth-session' in self.headers:
- del self.headers['x-auth-session']
-
- try:
- result = self.post("/access/v1")
- del self.headers['authorization']
- if result.status == httplib.CREATED:
- self.headers['x-auth-session'] = \
- result.get_header('x-auth-session')
- self.do_logout = True
- LOG.info('ZFSSA version: %s' %
- result.get_header('x-zfssa-version'))
-
- elif result.status == httplib.NOT_FOUND:
- raise RestClientError(result.status, name="ERR_RESTError",
- message="REST Not Available: \
- Please Upgrade")
-
- except RestClientError as err:
- del self.headers['authorization']
- raise err
-
- def login(self, auth_str):
- """
- Login to an appliance using a user name and password and start
- a session like what is done logging into the BUI. This is not a
- requirement to run REST commands, since the protocol is stateless.
- What is does is set up a cookie session so that some server side
- caching can be done. If login is used remember to call logout when
- finished.
-
- :param auth_str: Authorization string (base64)
- """
- self.auth_str = auth_str
- self.authorize()
-
- def logout(self):
- """Logout of an appliance"""
- result = None
- try:
- result = self.delete("/access/v1", base_path="/api")
- except RestClientError:
- pass
-
- self.headers.clear()
- self.do_logout = False
- return result
-
- def islogin(self):
- """return if client is login"""
- return self.do_logout
-
- @staticmethod
- def mkpath(*args, **kwargs):
- """Make a path?query string for making a REST request
-
- :cmd_params args: The path part
- :cmd_params kwargs: The query part
- """
- buf = StringIO()
- query = "?"
- for arg in args:
- buf.write("/")
- buf.write(arg)
- for k in kwargs:
- buf.write(query)
- if query == "?":
- query = "&"
- buf.write(k)
- buf.write("=")
- buf.write(kwargs[k])
- return buf.getvalue()
-
- def request(self, path, request, body=None, **kwargs):
- """Make an HTTP request and return the results
-
- :param path: Path used with the initiazed URL to make a request
- :param request: HTTP request type (GET, POST, PUT, DELETE)
- :param body: HTTP body of request
- :key accept: Set HTTP 'Accept' header with this value
- :key base_path: Override the base_path for this request
- :key content: Set HTTP 'Content-Type' header with this value
- """
- out_hdrs = dict.copy(self.headers)
- if kwargs.get("accept"):
- out_hdrs['accept'] = kwargs.get("accept")
-
- if body is not None:
- if isinstance(body, dict):
- body = str(json.dumps(body))
-
- if body and len(body):
- out_hdrs['content-length'] = len(body)
-
- zfssaurl = self._path(path, kwargs.get("base_path"))
- req = urllib2.Request(zfssaurl, body, out_hdrs)
- req.get_method = lambda: request
- maxreqretries = kwargs.get("maxreqretries", 10)
- retry = 0
- response = None
-
- LOG.debug('request: %s %s' % (request, zfssaurl))
- LOG.debug('out headers: %s' % out_hdrs)
- if body is not None and body != '':
- LOG.debug('body: %s' % body)
-
- while retry < maxreqretries:
- try:
- response = urllib2.urlopen(req, timeout=self.timeout)
- except urllib2.HTTPError as err:
- LOG.error('REST Not Available: %s' % err.code)
- if err.code == httplib.SERVICE_UNAVAILABLE and \
- retry < maxreqretries:
- retry += 1
- time.sleep(1)
- LOG.error('Server Busy retry request: %s' % retry)
- continue
- if (err.code == httplib.UNAUTHORIZED or
- err.code == httplib.INTERNAL_SERVER_ERROR) and \
- '/access/v1' not in zfssaurl:
- try:
- LOG.error('Authorizing request retry: %s, %s' %
- (zfssaurl, retry))
- self.authorize()
- req.add_header('x-auth-session',
- self.headers['x-auth-session'])
- except RestClientError:
- pass
- retry += 1
- time.sleep(1)
- continue
-
- return RestResult(err=err)
-
- except urllib2.URLError as err:
- LOG.error('URLError: %s' % err.reason)
- raise RestClientError(-1, name="ERR_URLError",
- message=err.reason)
-
- break
-
- if response and response.getcode() == httplib.SERVICE_UNAVAILABLE and \
- retry >= maxreqretries:
- raise RestClientError(response.getcode(), name="ERR_HTTPError",
- message="REST Not Available: Disabled")
-
- return RestResult(response=response)
-
- def get(self, path, **kwargs):
- """
- Make an HTTP GET request
-
- :param path: Path to resource.
- """
- return self.request(path, "GET", **kwargs)
-
- def post(self, path, body="", **kwargs):
- """Make an HTTP POST request
-
- :param path: Path to resource.
- :param body: Post data content
- """
- return self.request(path, "POST", body, **kwargs)
-
- def put(self, path, body="", **kwargs):
- """Make an HTTP PUT request
-
- :param path: Path to resource.
- :param body: Put data content
- """
- return self.request(path, "PUT", body, **kwargs)
-
- def delete(self, path, **kwargs):
- """Make an HTTP DELETE request
-
- :param path: Path to resource that will be deleted.
- """
- return self.request(path, "DELETE", **kwargs)
-
- def head(self, path, **kwargs):
- """Make an HTTP HEAD request
-
- :param path: Path to resource.
- """
- return self.request(path, "HEAD", **kwargs)
--- a/components/openstack/cinder/files/zfssa/zfssaiscsi.py Fri Mar 20 03:13:26 2015 -0700
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,376 +0,0 @@
-# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-"""
-ZFS Storage Appliance Cinder Volume Driver
-"""
-import base64
-
-from cinder import exception
-from cinder.openstack.common import log
-from cinder.volume import driver
-from oslo.config import cfg
-
-from cinder.volume.drivers.zfssa import zfssarest
-
-
-CONF = cfg.CONF
-LOG = log.getLogger(__name__)
-
-ZFSSA_OPTS = [
- cfg.StrOpt('zfssa_host', required=True,
- help='ZFSSA management IP address'),
- cfg.StrOpt('zfssa_auth_user', required=True, secret=True,
- help='ZFSSA management authorized user\'s name'),
- cfg.StrOpt('zfssa_auth_password', required=True, secret=True,
- help='ZFSSA management authorized user\'s password'),
- cfg.StrOpt('zfssa_pool', required=True,
- help='ZFSSA storage pool name'),
- cfg.StrOpt('zfssa_project', required=True,
- help='ZFSSA project name'),
- cfg.StrOpt('zfssa_lun_volblocksize', default='8k',
- help='Block size: 512, 1k, 2k, 4k, 8k, 16k, 32k, 64k, 128k'),
- cfg.BoolOpt('zfssa_lun_sparse', default=False,
- help='Flag to enable sparse (thin-provisioned): True, False'),
- cfg.StrOpt('zfssa_lun_compression', default='',
- help='Data compression-off, lzjb, gzip-2, gzip, gzip-9'),
- cfg.StrOpt('zfssa_lun_logbias', default='',
- help='Synchronous write bias-latency, throughput'),
- cfg.StrOpt('zfssa_initiator_group', default='',
- help='iSCSI initiator group'),
- cfg.StrOpt('zfssa_initiator', default='',
- help='iSCSI initiator IQNs (comma separated)'),
- cfg.StrOpt('zfssa_initiator_user', default='',
- help='iSCSI initiator CHAP user'),
- cfg.StrOpt('zfssa_initiator_password', default='',
- help='iSCSI initiator CHAP password'),
- cfg.StrOpt('zfssa_target_group', default='tgt-grp',
- help='iSCSI target group name'),
- cfg.StrOpt('zfssa_target_user', default='',
- help='iSCSI target CHAP user'),
- cfg.StrOpt('zfssa_target_password', default='',
- help='iSCSI target CHAP password'),
- cfg.StrOpt('zfssa_target_portal', required=True,
- help='iSCSI target portal (Data-IP:Port, w.x.y.z:3260)'),
- cfg.StrOpt('zfssa_target_interfaces', required=True,
- help='Network interfaces of iSCSI targets (comma separated)')
-]
-
-CONF.register_opts(ZFSSA_OPTS)
-
-SIZE_GB = 1073741824
-
-
-#pylint: disable=R0904
-class ZFSSAISCSIDriver(driver.ISCSIDriver):
- """ZFSSA Cinder volume driver"""
-
- VERSION = '1.0.0'
- protocol = 'iSCSI'
-
- def __init__(self, *args, **kwargs):
- super(ZFSSAISCSIDriver, self).__init__(*args, **kwargs)
- self.configuration.append_config_values(ZFSSA_OPTS)
- self.zfssa = None
- self._stats = None
-
- def _get_target_alias(self):
- """return target alias"""
- return self.configuration.zfssa_target_group
-
- def do_setup(self, context):
- """Setup - create project, initiators, initiatorgroup, target,
- targetgroup
- """
- self.configuration._check_required_opts()
- lcfg = self.configuration
-
- LOG.info('Connecting to host: %s' % lcfg.zfssa_host)
- self.zfssa = zfssarest.ZFSSAApi(lcfg.zfssa_host)
- auth_str = base64.encodestring('%s:%s' %
- (lcfg.zfssa_auth_user,
- lcfg.zfssa_auth_password))[:-1]
- self.zfssa.login(auth_str)
- self.zfssa.create_project(lcfg.zfssa_pool, lcfg.zfssa_project,
- compression=lcfg.zfssa_lun_compression,
- logbias=lcfg.zfssa_lun_logbias)
-
- if (lcfg.zfssa_initiator != '' and
- (lcfg.zfssa_initiator_group == '' or
- lcfg.zfssa_initiator_group == 'default')):
- LOG.warning('zfssa_initiator= %s wont be used on \
- zfssa_initiator_group= %s' %
- (lcfg.zfssa_initiator,
- lcfg.zfssa_initiator_group))
-
- # Setup initiator and initiator group
- if lcfg.zfssa_initiator != '' and \
- lcfg.zfssa_initiator_group != '' and \
- lcfg.zfssa_initiator_group != 'default':
- for initiator in lcfg.zfssa_initiator.split(','):
- self.zfssa.create_initiator(initiator,
- lcfg.zfssa_initiator_group + '-' +
- initiator,
- chapuser=
- lcfg.zfssa_initiator_user,
- chapsecret=
- lcfg.zfssa_initiator_password)
- self.zfssa.add_to_initiatorgroup(initiator,
- lcfg.zfssa_initiator_group)
- # Parse interfaces
- interfaces = []
- for interface in lcfg.zfssa_target_interfaces.split(','):
- if interface == '':
- continue
- interfaces.append(interface)
-
- # Setup target and target group
- iqn = self.zfssa.create_target(
- self._get_target_alias(),
- interfaces,
- tchapuser=lcfg.zfssa_target_user,
- tchapsecret=lcfg.zfssa_target_password)
-
- self.zfssa.add_to_targetgroup(iqn, lcfg.zfssa_target_group)
-
- def check_for_setup_error(self):
- """Check that driver can login and pool, project, initiators,
- initiatorgroup, target, targetgroup exist
- """
- lcfg = self.configuration
-
- self.zfssa.verify_pool(lcfg.zfssa_pool)
- self.zfssa.verify_project(lcfg.zfssa_pool, lcfg.zfssa_project)
-
- if lcfg.zfssa_initiator != '' and \
- lcfg.zfssa_initiator_group != '' and \
- lcfg.zfssa_initiator_group != 'default':
- for initiator in lcfg.zfssa_initiator.split(','):
- self.zfssa.verify_initiator(initiator)
-
- self.zfssa.verify_target(self._get_target_alias())
-
- def _get_provider_info(self, volume):
- """return provider information"""
- lcfg = self.configuration
- lun = self.zfssa.get_lun(lcfg.zfssa_pool,
- lcfg.zfssa_project, volume['name'])
- iqn = self.zfssa.get_target(self._get_target_alias())
- loc = "%s %s %s" % (lcfg.zfssa_target_portal, iqn, lun['number'])
- LOG.debug('_export_volume: provider_location: %s' % loc)
- provider = {'provider_location': loc}
- if lcfg.zfssa_target_user != '' and lcfg.zfssa_target_password != '':
- provider['provider_auth'] = 'CHAP %s %s' % \
- (lcfg.zfssa_target_user,
- lcfg.zfssa_target_password)
- return provider
-
- def create_volume(self, volume):
- """Create a volume on ZFSSA"""
- LOG.debug('zfssa.create_volume: volume=' + volume['name'])
- lcfg = self.configuration
- volsize = str(volume['size']) + 'g'
- self.zfssa.create_lun(lcfg.zfssa_pool,
- lcfg.zfssa_project,
- volume['name'],
- volsize,
- targetgroup=lcfg.zfssa_target_group,
- volblocksize=lcfg.zfssa_lun_volblocksize,
- sparse=lcfg.zfssa_lun_sparse,
- compression=lcfg.zfssa_lun_compression,
- logbias=lcfg.zfssa_lun_logbias)
-
- return self._get_provider_info(volume)
-
- def delete_volume(self, volume):
- """Deletes a volume with the given volume['name']."""
- LOG.debug('zfssa.delete_volume: name=' + volume['name'])
- lcfg = self.configuration
- lun2del = self.zfssa.get_lun(lcfg.zfssa_pool,
- lcfg.zfssa_project,
- volume['name'])
- """Delete clone's temp snapshot. see create_cloned_volume()"""
- """clone is deleted as part of the snapshot delete."""
- tmpsnap = 'tmp-snapshot-%s' % volume['id']
- if 'origin' in lun2del and lun2del['origin']['snapshot'] == tmpsnap:
- self.zfssa.delete_snapshot(lcfg.zfssa_pool,
- lcfg.zfssa_project,
- lun2del['origin']['share'],
- lun2del['origin']['snapshot'])
- return
-
- self.zfssa.delete_lun(pool=lcfg.zfssa_pool,
- project=lcfg.zfssa_project,
- lun=volume['name'])
-
- def create_snapshot(self, snapshot):
- """Creates a snapshot with the given snapshot['name'] of the
- snapshot['volume_name']
- """
- LOG.debug('zfssa.create_snapshot: snapshot=' + snapshot['name'])
- lcfg = self.configuration
- self.zfssa.create_snapshot(lcfg.zfssa_pool,
- lcfg.zfssa_project,
- snapshot['volume_name'],
- snapshot['name'])
-
- def delete_snapshot(self, snapshot):
- """Deletes a snapshot."""
- LOG.debug('zfssa.delete_snapshot: snapshot=' + snapshot['name'])
- lcfg = self.configuration
- has_clones = self.zfssa.has_clones(lcfg.zfssa_pool,
- lcfg.zfssa_project,
- snapshot['volume_name'],
- snapshot['name'])
- if has_clones:
- LOG.error('snapshot %s: has clones' % snapshot['name'])
- raise exception.SnapshotIsBusy(snapshot_name=snapshot['name'])
-
- self.zfssa.delete_snapshot(lcfg.zfssa_pool,
- lcfg.zfssa_project,
- snapshot['volume_name'],
- snapshot['name'])
-
- def create_volume_from_snapshot(self, volume, snapshot):
- """Creates a volume from a snapshot - clone a snapshot"""
- LOG.debug('zfssa.create_volume_from_snapshot: volume=' +
- volume['name'])
- LOG.debug('zfssa.create_volume_from_snapshot: snapshot=' +
- snapshot['name'])
- if not self._verify_clone_size(snapshot, volume['size'] * SIZE_GB):
- exception_msg = (_('Error verifying clone size on '
- 'Volume clone: %(clone)s '
- 'Size: %(size)d on'
- 'Snapshot: %(snapshot)s')
- % {'clone': volume['name'],
- 'size': volume['size'],
- 'snapshot': snapshot['name']})
- LOG.error(exception_msg)
- raise exception.InvalidInput(reason=exception_msg)
-
- lcfg = self.configuration
- self.zfssa.clone_snapshot(lcfg.zfssa_pool,
- lcfg.zfssa_project,
- snapshot['volume_name'],
- snapshot['name'],
- volume['name'])
-
- def _update_volume_status(self):
- """Retrieve status info from volume group."""
- LOG.debug("Updating volume status")
- self._stats = None
- data = {}
- data["volume_backend_name"] = self.__class__.__name__
- data["vendor_name"] = 'Oracle'
- data["driver_version"] = self.VERSION
- data["storage_protocol"] = self.protocol
-
- lcfg = self.configuration
- (avail, used) = self.zfssa.get_pool_stats(lcfg.zfssa_pool)
- if avail is None or used is None:
- return
- total = int(avail) + int(used)
-
- if lcfg.zfssa_lun_sparse:
- data['total_capacity_gb'] = 'infinite'
- else:
- data['total_capacity_gb'] = total / SIZE_GB
- data['free_capacity_gb'] = int(avail) / SIZE_GB
- data['reserved_percentage'] = 0
- data['QoS_support'] = False
- self._stats = data
-
- def get_volume_stats(self, refresh=False):
- """Get volume status.
- If 'refresh' is True, run update the stats first.
- """
- if refresh:
- self._update_volume_status()
- return self._stats
-
- def _export_volume(self, volume):
- """Export the volume - set the initiatorgroup property."""
- LOG.debug('_export_volume: volume name: %s' % volume['name'])
- lcfg = self.configuration
-
- self.zfssa.set_lun_initiatorgroup(lcfg.zfssa_pool,
- lcfg.zfssa_project,
- volume['name'],
- lcfg.zfssa_initiator_group)
- return self._get_provider_info(volume)
-
- def create_export(self, context, volume):
- """Driver entry point to get the export info for a new volume."""
- LOG.debug('create_export: volume name: %s' % volume['name'])
- return self._export_volume(volume)
-
- def remove_export(self, context, volume):
- """Driver entry point to remove an export for a volume."""
- LOG.debug('remove_export: volume name: %s' % volume['name'])
- lcfg = self.configuration
- self.zfssa.set_lun_initiatorgroup(lcfg.zfssa_pool,
- lcfg.zfssa_project,
- volume['name'],
- '')
-
- def ensure_export(self, context, volume):
- """Driver entry point to get the export info for an existing volume."""
- LOG.debug('ensure_export: volume name: %s' % volume['name'])
- return self._export_volume(volume)
-
- def copy_image_to_volume(self, context, volume, image_service, image_id):
- self.ensure_export(context, volume)
- super(ZFSSAISCSIDriver, self).copy_image_to_volume(
- context, volume, image_service, image_id)
-
- def extend_volume(self, volume, new_size):
- """Driver entry point to extent volume size."""
- LOG.debug('extend_volume: volume name: %s' % volume['name'])
- lcfg = self.configuration
- self.zfssa.set_lun_size(lcfg.zfssa_pool,
- lcfg.zfssa_project,
- volume['name'],
- new_size * SIZE_GB)
-
- def create_cloned_volume(self, volume, src_vref):
- """Create a clone of the specified volume."""
- zfssa_snapshot = {'volume_name': src_vref['name'],
- 'name': 'tmp-snapshot-%s' % volume['id']}
- self.create_snapshot(zfssa_snapshot)
- try:
- self.create_volume_from_snapshot(volume, zfssa_snapshot)
- except exception.VolumeBackendAPIException:
- LOG.error("Clone Volume '%s' failed from source volume '%s'"
- % (volume['name'], src_vref['name']))
- # Cleanup snapshot
- self.delete_snapshot(zfssa_snapshot)
-
- def local_path(self, volume):
- """Not implemented"""
- pass
-
- def backup_volume(self, context, backup, backup_service):
- """Not implemented"""
- pass
-
- def restore_backup(self, context, backup, volume, backup_service):
- """Not implemented"""
- pass
-
- def _verify_clone_size(self, snapshot, size):
- """Check whether the clone size is the same as the parent volume"""
- lcfg = self.configuration
- lun = self.zfssa.get_lun(lcfg.zfssa_pool,
- lcfg.zfssa_project,
- snapshot['volume_name'])
- return (lun['size'] == size)
--- a/components/openstack/cinder/files/zfssa/zfssarest.py Fri Mar 20 03:13:26 2015 -0700
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,614 +0,0 @@
-# Copyright (c) 2014, Oracle and/or its affiliates. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
-"""
-ZFS Storage Appliance Proxy
-"""
-import json
-import socket
-
-from cinder import exception
-from cinder.openstack.common import log
-
-from cinder.volume.drivers.zfssa import restclient
-
-LOG = log.getLogger(__name__)
-
-
-#pylint: disable=R0913
-#pylint: disable=R0904
-class ZFSSAApi(object):
- """ZFSSA API proxy class"""
- def __init__(self, host):
- self.host = host
- self.url = "https://" + self.host + ":215"
- self.rclient = restclient.RestClientURL(self.url)
-
- def __del__(self):
- if self.rclient and self.rclient.islogin():
- self.rclient.logout()
-
- def _is_pool_owned(self, pdata):
- """returns True if the pool's owner is the
- same as the host.
- """
- svc = '/api/system/v1/version'
- ret = self.rclient.get(svc)
- if ret.status != restclient.Status.OK:
- exception_msg = (_('Error getting version: '
- 'svc: %(svc)s.'
- 'Return code: %(ret.status)d '
- 'Message: %(ret.data)s.')
- % {'svc': svc,
- 'ret.status': ret.status,
- 'ret.data': ret.data})
- LOG.error(exception_msg)
- raise exception.VolumeBackendAPIException(data=exception_msg)
-
- vdata = json.loads(ret.data)
- return vdata['version']['asn'] == pdata['pool']['asn'] and \
- vdata['version']['nodename'] == pdata['pool']['owner']
-
- def login(self, auth_str):
- """Login to the appliance"""
- if self.rclient:
- self.rclient.login(auth_str)
-
- def get_pool_stats(self, pool):
- """Get space_available and used properties of a pool
- returns (avail, used)
- """
- svc = '/api/storage/v1/pools/' + pool
- ret = self.rclient.get(svc)
- if ret.status != restclient.Status.OK:
- exception_msg = (_('Error Getting Pool Stats: '
- 'Pool: %(pool)s '
- 'Return code: %(ret.status)d '
- 'Message: %(ret.data)s.')
- % {'pool': pool,
- 'ret.status': ret.status,
- 'ret.data': ret.data})
- LOG.error(exception_msg)
- raise exception.InvalidVolume(reason=exception_msg)
-
- val = json.loads(ret.data)
-
- if not self._is_pool_owned(val):
- exception_msg = (_('Error Pool ownership: '
- 'Pool %(pool)s is not owned '
- 'by %(host)s.')
- % {'pool': pool,
- 'host': self.host})
- LOG.error(exception_msg)
- raise exception.InstanceNotFound(instance_id=pool)
-
- avail = val['pool']['usage']['available']
- used = val['pool']['usage']['used']
-
- return (avail, used)
-
- def create_project(self, pool, project, compression=None, logbias=None):
- """Create a project on a pool
- Check first whether the pool exists.
- """
- self.verify_pool(pool)
- svc = '/api/storage/v1/pools/' + pool + '/projects/' + project
- ret = self.rclient.get(svc)
- if ret.status != restclient.Status.OK:
- svc = '/api/storage/v1/pools/' + pool + '/projects'
- arg = {
- 'name': project
- }
- if compression and compression != '':
- arg.update({'compression': compression})
- if logbias and logbias != '':
- arg.update({'logbias': logbias})
-
- ret = self.rclient.post(svc, arg)
- if ret.status != restclient.Status.CREATED:
- exception_msg = (_('Error Creating Project: '
- '%(project)s on '
- 'Pool: %(pool)s '
- 'Return code: %(ret.status)d '
- 'Message: %(ret.data)s .')
- % {'project': project,
- 'pool': pool,
- 'ret.status': ret.status,
- 'ret.data': ret.data})
- LOG.error(exception_msg)
- raise exception.VolumeBackendAPIException(data=exception_msg)
-
- def create_initiator(self, initiator, alias, chapuser=None,
- chapsecret=None):
- """Create an iSCSI initiator"""
-
- svc = '/api/san/v1/iscsi/initiators/alias=' + alias
- ret = self.rclient.get(svc)
- if ret.status != restclient.Status.OK:
- svc = '/api/san/v1/iscsi/initiators'
- arg = {
- 'initiator': initiator,
- 'alias': alias
- }
- if chapuser and chapuser != '' and chapsecret and chapsecret != '':
- arg.update({'chapuser': chapuser,
- 'chapsecret': chapsecret})
-
- ret = self.rclient.post(svc, arg)
- if ret.status != restclient.Status.CREATED:
- exception_msg = (_('Error Creating Initator: '
- '%(initiator)s on '
- 'Alias: %(alias)s '
- 'Return code: %(ret.status)d '
- 'Message: %(ret.data)s .')
- % {'initiator': initiator,
- 'alias': alias,
- 'ret.status': ret.status,
- 'ret.data': ret.data})
- LOG.error(exception_msg)
- raise exception.VolumeBackendAPIException(data=exception_msg)
-
- def add_to_initiatorgroup(self, initiator, initiatorgroup):
- """Add an iSCSI initiator to initiatorgroup"""
- svc = '/api/san/v1/iscsi/initiator-groups/' + initiatorgroup
- ret = self.rclient.get(svc)
- if ret.status != restclient.Status.OK:
- svc = '/api/san/v1/iscsi/initiator-groups'
- arg = {
- 'name': initiatorgroup,
- 'initiators': [initiator]
- }
- ret = self.rclient.post(svc, arg)
- if ret.status != restclient.Status.CREATED:
- exception_msg = (_('Error Adding Initator: '
- '%(initiator)s on group'
- 'InitiatorGroup: %(initiatorgroup)s '
- 'Return code: %(ret.status)d '
- 'Message: %(ret.data)s .')
- % {'initiator': initiator,
- 'initiatorgroup': initiatorgroup,
- 'ret.status': ret.status,
- 'ret.data': ret.data})
- LOG.error(exception_msg)
- raise exception.VolumeBackendAPIException(data=exception_msg)
- else:
- svc = '/api/san/v1/iscsi/initiator-groups/' + initiatorgroup
- arg = {
- 'initiators': [initiator]
- }
- ret = self.rclient.put(svc, arg)
- if ret.status != restclient.Status.ACCEPTED:
- exception_msg = (_('Error Adding Initator: '
- '%(initiator)s on group'
- 'InitiatorGroup: %(initiatorgroup)s '
- 'Return code: %(ret.status)d '
- 'Message: %(ret.data)s .')
- % {'initiator': initiator,
- 'initiatorgroup': initiatorgroup,
- 'ret.status': ret.status,
- 'ret.data': ret.data})
- LOG.error(exception_msg)
- raise exception.VolumeBackendAPIException(data=exception_msg)
-
- def create_target(self, alias, interfaces=None, tchapuser=None,
- tchapsecret=None):
- """Create an iSCSI target
- interfaces: an array with network interfaces
- tchapuser, tchapsecret: target's chapuser and chapsecret
- returns target iqn
- """
- svc = '/api/san/v1/iscsi/targets/alias=' + alias
- ret = self.rclient.get(svc)
- if ret.status != restclient.Status.OK:
- svc = '/api/san/v1/iscsi/targets'
- arg = {
- 'alias': alias
- }
-
- if tchapuser and tchapuser != '' and tchapsecret and \
- tchapsecret != '':
- arg.update({'targetchapuser': tchapuser,
- 'targetchapsecret': tchapsecret,
- 'auth': 'chap'})
-
- if interfaces is not None and len(interfaces) > 0:
- arg.update({'interfaces': interfaces})
-
- ret = self.rclient.post(svc, arg)
- if ret.status != restclient.Status.CREATED:
- exception_msg = (_('Error Creating Target: '
- '%(alias)s'
- 'Return code: %(ret.status)d '
- 'Message: %(ret.data)s .')
- % {'alias': alias,
- 'ret.status': ret.status,
- 'ret.data': ret.data})
- LOG.error(exception_msg)
- raise exception.VolumeBackendAPIException(data=exception_msg)
-
- val = json.loads(ret.data)
- return val['target']['iqn']
-
- def get_target(self, alias):
- """Get an iSCSI target iqn"""
- svc = '/api/san/v1/iscsi/targets/alias=' + alias
- ret = self.rclient.get(svc)
- if ret.status != restclient.Status.OK:
- exception_msg = (_('Error Getting Target: '
- '%(alias)s'
- 'Return code: %(ret.status)d '
- 'Message: %(ret.data)s .')
- % {'alias': alias,
- 'ret.status': ret.status,
- 'ret.data': ret.data})
- LOG.error(exception_msg)
- raise exception.VolumeBackendAPIException(data=exception_msg)
-
- val = json.loads(ret.data)
- return val['target']['iqn']
-
- def add_to_targetgroup(self, iqn, targetgroup):
- """Add an iSCSI target to targetgroup"""
- svc = '/api/san/v1/iscsi/target-groups/' + targetgroup
- ret = self.rclient.get(svc)
- if ret.status != restclient.Status.OK:
- svccrt = '/api/san/v1/iscsi/target-groups'
- arg = {
- 'name': targetgroup,
- 'targets': [iqn]
- }
-
- ret = self.rclient.post(svccrt, arg)
- if ret.status != restclient.Status.CREATED:
- exception_msg = (_('Error Creating TargetGroup: '
- '%(targetgroup)s with'
- 'IQN: %(iqn)s'
- 'Return code: %(ret.status)d '
- 'Message: %(ret.data)s .')
- % {'targetgroup': targetgroup,
- 'iqn': iqn,
- 'ret.status': ret.status,
- 'ret.data': ret.data})
- LOG.error(exception_msg)
- raise exception.VolumeBackendAPIException(data=exception_msg)
-
- return
-
- arg = {
- 'targets': [iqn]
- }
-
- ret = self.rclient.put(svc, arg)
- if ret.status != restclient.Status.ACCEPTED:
- exception_msg = (_('Error Adding to TargetGroup: '
- '%(targetgroup)s with'
- 'IQN: %(iqn)s'
- 'Return code: %(ret.status)d '
- 'Message: %(ret.data)s.')
- % {'targetgroup': targetgroup,
- 'iqn': iqn,
- 'ret.status': ret.status,
- 'ret.data': ret.data})
- LOG.error(exception_msg)
- raise exception.VolumeBackendAPIException(data=exception_msg)
-
- def verify_pool(self, pool):
- """Checks whether pool exists"""
- svc = '/api/storage/v1/pools/' + pool
- ret = self.rclient.get(svc)
- if ret.status != restclient.Status.OK:
- exception_msg = (_('Error Verifying Pool: '
- '%(pool)s '
- 'Return code: %(ret.status)d '
- 'Message: %(ret.data)s.')
- % {'pool': pool,
- 'ret.status': ret.status,
- 'ret.data': ret.data})
- LOG.error(exception_msg)
- raise exception.VolumeBackendAPIException(data=exception_msg)
-
- def verify_project(self, pool, project):
- """Checks whether project exists"""
- svc = '/api/storage/v1/pools/' + pool + '/projects/' + project
- ret = self.rclient.get(svc)
- if ret.status != restclient.Status.OK:
- exception_msg = (_('Error Verifying '
- 'Project: %(project)s on '
- 'Pool: %(pool)s '
- 'Return code: %(ret.status)d '
- 'Message: %(ret.data)s.')
- % {'project': project,
- 'pool': pool,
- 'ret.status': ret.status,
- 'ret.data': ret.data})
- LOG.error(exception_msg)
- raise exception.VolumeBackendAPIException(data=exception_msg)
-
- def verify_initiator(self, iqn):
- """Check whether initiator iqn exists"""
- svc = '/api/san/v1/iscsi/initiators/' + iqn
- ret = self.rclient.get(svc)
- if ret.status != restclient.Status.OK:
- exception_msg = (_('Error Verifying '
- 'Initiator: %(iqn)s '
- 'Return code: %(ret.status)d '
- 'Message: %(ret.data)s.')
- % {'initiator': iqn,
- 'ret.status': ret.status,
- 'ret.data': ret.data})
- LOG.error(exception_msg)
- raise exception.VolumeBackendAPIException(data=excepti