Project: keystone Series: grizzly Blueprint: ad-ldap-identity-backend Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/ad-ldap-identity-backend Spec URL: None Create an Active Directory authentication backend. Project: cinder Series: grizzly Blueprint: add-cloning-support-to-cinder Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/add-cloning-support-to-cinder Spec URL: None Add the ability to call clone_volume in Cinder, resulting in a new volume object that is ready to be attached and used and is, of course, a "clone" of the originating volume. The current implementation is an additional option to volume_create; it looks like: 'cinder create --src-volume xxxxx 10' The process for the base LVM case is: 1. cinder.volume.api calls create on the same host as the existing/src volume 2. cinder.volume.manager.create calls driver.create_cloned_volume() 3. cinder.volume.driver.create_cloned_volume a. Creates a temporary snapshot of the src volume (so we don't require detach) b. copies to the new volume from that snapshot, the same as we do for create_volume_from_snapshot c. deletes the temporary snapshot (a sketch of this flow appears below, after the aggregate-based-availability-zones entry). Initially this would be implemented to work only across common storage types (LVM, Ceph etc.), with the potential of being expanded in the future. In addition, another blueprint should be submitted to extend snapshot capabilities. It would add a method like "volume_revert(volume, snapshot_id)" that restores a volume to its state at the time the provided snapshot was taken. Project: horizon Series: grizzly Blueprint: add-security-group-to-instance Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/add-security-group-to-instance Spec URL: None The novaclient CLI supports adding/removing security groups to existing instances. It would be useful for Horizon to support it too. Project: oslo Series: grizzly Blueprint: advanced-matchmaking Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/advanced-matchmaking Spec URL: None The matchmaker as of Folsom is simple and static. This was fine for Folsom's Nova and Cinder, but does not scale nicely for notifications or the requirements of Ceilometer or Quantum. The static nature of the current implementations limits dependent drivers from having dynamic queue/exchange registrations as is performed in other implementations. Currently, only the ZeroMQ driver depends on the matchmaker. However, it is designed to be used by any number of drivers. It is necessary for peer-to-peer messaging, but can also be used as part of message discovery. These changes may build upon the service-group-api changes, if merged. The intention of this blueprint is to extend the rpc and matchmaker abstractions such that the matchmaker can dynamically register hosts, and to provide an example implementation of a matchmaker module that does so. Project: nova Series: grizzly Blueprint: aggregate-based-availability-zones Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/aggregate-based-availability-zones Spec URL: None Allow for setting a compute node's availability zone via the API by implementing Availability Zones internally as General Host Aggregates. This is a continuation of work started in Folsom (https://blueprints.launchpad.net/nova/+spec/general-host-aggregates). 
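Relating to the add-cloning-support-to-cinder entry above: a minimal sketch of the temporary-snapshot approach described in steps a-c. The class and helper names here are illustrative assumptions, not the actual Cinder driver code; the real driver reuses the existing snapshot and copy code paths.

    # Illustrative sketch of clone-via-temporary-snapshot (not the real driver).
    class LVMCloneSketch(object):
        def __init__(self, snapshot_api, copy_api):
            self.snapshot_api = snapshot_api   # creates/deletes LVM snapshots
            self.copy_api = copy_api           # copies snapshot data into a new volume

        def create_cloned_volume(self, new_volume, src_volume):
            # a) temporary snapshot of the source, so no detach is required
            temp_snap = self.snapshot_api.create(src_volume)
            try:
                # b) copy from the snapshot, same as create_volume_from_snapshot
                self.copy_api.copy(temp_snap, new_volume)
            finally:
                # c) always remove the temporary snapshot
                self.snapshot_api.delete(temp_snap)
            return new_volume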
Project: oslo Series: grizzly Blueprint: amqp-rpc-fast-reply-queue Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/amqp-rpc-fast-reply-queue Spec URL: None This blueprint proposes a change to the AMQP-based OpenStack RPC implementations, specifically RabbitMQ and Qpid, to improve the maximum throughput of RPC. The proposal is to replace the dynamically created and deleted response queues and exchanges per RPC call with one per RPC process. This improvement also resolves a RabbitMQ scalability problem in which maximum RPC throughput decreases as cluster nodes are added. A Dell study on the performance benefits of this change can be found here: https://docs.google.com/file/d/0B-droFdkDaVhVzhsN3RKRlFLODQ/edit Project: ceilometer Series: grizzly Blueprint: api-aggregate-average Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/api-aggregate-average Spec URL: None The following API calls need to be added: GET /v1/projects/(project)/meters/(meter)/volume/average GET /v1/resources/(project)/meters/(meter)/volume/average Project: cinder Series: grizzly Blueprint: api-pagination Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/api-pagination Spec URL: None Add markers and pagination in line with the other OpenStack projects. Project: neutron Series: grizzly Blueprint: api-sec-grp-id-quantum-test Design: Obsolete Lifecycle: Complete Impl: Unknown Link: https://blueprints.launchpad.net/neutron/+spec/api-sec-grp-id-quantum-test Spec URL: None Test to check: - create quantum security group - delete quantum security group by checking the GET/DELETE for the API call v1.1/{tenant_id}/security-groups/{security_group_id} Project: neutron Series: grizzly Blueprint: api-sec-grp-quantum-test Design: Obsolete Lifecycle: Complete Impl: Unknown Link: https://blueprints.launchpad.net/neutron/+spec/api-sec-grp-quantum-test Spec URL: None API test to check the following operations: List Security Groups, Create Security Group, to test the GET/POST API call v1.1/{tenant_id}/security-gr Project: neutron Series: grizzly Blueprint: api-sec-grp-rules-quantum-test Design: Obsolete Lifecycle: Complete Impl: Unknown Link: https://blueprints.launchpad.net/neutron/+spec/api-sec-grp-rules-quantum-test Spec URL: None Test to check: - Create security group rule - Delete security group rule by checking the POST/DELETE for v1.1/{tenant_id}/security-group-rules Project: ceilometer Series: grizzly Blueprint: api-server-pecan-wsme Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/api-server-pecan-wsme Spec URL: http://wiki.openstack.org/spec-ceilometer-api-server-pecan-wsme#preview Rebuild the API server with Pecan and WSME (part of the WSGI framework changes for oslo). Project: nova Series: grizzly Blueprint: api-tests-speed Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/api-tests-speed Spec URL: None API tests run slowly because they repeat unnecessary initialisation for every class / test. Limiting both the number of loaded routes and the number of enabled extensions to the bare minimum should improve the situation a lot. 
(6-7x improvement of the first test execution time) Project: nova Series: grizzly Blueprint: apis-for-nova-manage Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/apis-for-nova-manage Spec URL: None The general direction is that nova-manage should be deprecated for everything except the db-sync command. There are, however, pieces of functionality needed for OpenStack, like IP pool creation, which only exist in nova-manage. That functionality should be moved out into the API by enhancing existing extensions so that nova-manage no longer contains functions which don't exist anywhere else. Project: horizon Series: grizzly Blueprint: app-project-proper-separation Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/app-project-proper-separation Spec URL: None Horizon is a framework for building dashboards (and a Django app). It should be completely agnostic to what project is built on top of it. By comparison, OpenStack Dashboard should contain 100% of the OpenStack Dashboard-related code. The main piece that needs to be moved is the entire "dashboards" directory in the Horizon module. Project: neutron Series: grizzly Blueprint: argparse-based-cfg Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/argparse-based-cfg Spec URL: None Update to the latest OpenStack common code that supports argparse-based cfg. Project: horizon Series: grizzly Blueprint: associate-ip-one-click Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/associate-ip-one-click Spec URL: None Up to the limit of available floating IPs (either in the pool or in the quota), a user should just be able to "one-click" assign a floating IP to an instance. This *should* be a configurable switch, though, since in some settings it may be desirable to let a user select the pool and/or specific IP that should be assigned. Project: keystone Series: grizzly Blueprint: authtoken-to-keystoneclient-repo Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/authtoken-to-keystoneclient-repo Spec URL: None Per the discussion here: http://lists.openstack.org/pipermail/openstack-dev/2012-September/001184.html and here: http://lists.openstack.org/pipermail/openstack-dev/2012-September/001334.html auth_token needs to be a package separate from keystone, and keystoneclient looks to be a good repository to place it into. Project: nova Series: grizzly Blueprint: auto-cpu-pinning Design: Discussion Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/auto-cpu-pinning Spec URL: None A tool for automatically pinning each running virtual CPU to a physical one in the most efficient way, balancing load across sockets/cores and maximizing cache sharing/minimizing cache misses. Ideally it should be able to run on demand, as a periodic job, or be triggered by events on the host (VM spawn/destroy). Project: heat Series: grizzly Blueprint: autoscale-update-stack Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/autoscale-update-stack Spec URL: None Implement UpdateStack support for AutoScalingGroup - currently any attempt to update results in replacement, so we should align better with the AWS UpdateStack behavior for this resource. 
At a minimum it should be possible to update the properties which affect the scaling (MinSize, MaxSize, Cooldown) without replacing the group (and thus all the instances). Project: heat Series: grizzly Blueprint: aws-cloudformation-init Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/aws-cloudformation-init Spec URL: http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html#aws-resource-init-commands It looks like we currently have support for files, packages, services, and sources. Remaining are configsets, groups, commands, users, and return value. Project: nova Series: grizzly Blueprint: backing-file-options Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/backing-file-options Spec URL: None When a qcow2 image has a backing file and a child is being based on that backing file, ensure that it picks up certain properties from that backing file (right now just cluster_size, preallocation state and encryption mode). Project: neutron Series: grizzly Blueprint: brocade-quantum-plugin Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/brocade-quantum-plugin Spec URL: http://wiki.openstack.org/brocade-quantum-plugin Plugin for orchestration of a Brocade VCS cluster of switches for L2 networks. Project: oslo Series: grizzly Blueprint: cfg-argparse Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/cfg-argparse Spec URL: None The cfg API is currently based on optparse, which has been deprecated in favour of argparse. We should switch to argparse, hopefully without needing to change the API too radically. With argparse support added, we can also consider exposing some argparse APIs through the cfg API - for example, we could add an add_subparsers() method which would allow cfg API users to parse sub-commands using argparse. Project: oslo Series: grizzly Blueprint: cfg-filter-view Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/cfg-filter-view Spec URL: None At the moment, if a module requires a configuration option from another module, we do: CONF.import_opt('option_name', 'source.module') but, in fact, all options from the imported module are available for use. An alternative would be to enforce which options are available within a module, e.g. CONF = cfg.FilterView(cfg.CONF, 'option_name') Project: oslo Series: grizzly Blueprint: cfg-move-opts-between-groups Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/cfg-move-opts-between-groups Spec URL: None It's quite a common requirement to want to move an option from e.g. the DEFAULT group to a more specialized group. When you do so, you want to support existing use of the option in the previous group and issue a deprecation warning when doing so. It's also quite common to want to rename an option while doing so, e.g. CONF.rabbit_host -> CONF.rabbit.host. Right now, you need to do something like this:
CONF.register_opt(StrOpt('root_helper'))
CONF.register_opt(StrOpt('root_helper'), group='agent')
def get_root_helper():
    root_helper = CONF.agent.root_helper
    if root_helper is not None:
        return root_helper
    root_helper = CONF.root_helper
    if root_helper is not None:
        warn('DEFAULT.root_helper is deprecated!')
        return root_helper
    return 'foo'
but it would be much nicer if you could do e.g. 
rabbit_opts = [
    cfg.StrOpt('host',
               default='localhost',
               deprecated_name=('DEFAULT', 'rabbit_host')),
]
CONF.register_opts(rabbit_opts, group='rabbit')
What is not ideal about that is that an Opt object doesn't know what group(s) it is registered with, yet here we are encoding information about what group it was *previously* registered with. Perhaps:
rabbit_opts = [
    cfg.StrOpt('host', default='localhost'),
]
deprecated_opts = {
    'host': ('DEFAULT', 'rabbit_host'),
}
CONF.register_opts(rabbit_opts, group='rabbit')
CONF.register_deprecated_opts(deprecated_opts, group='rabbit')
and in the singular:
CONF.register_opt(cfg.StrOpt('host', default='localhost'), group='rabbit')
CONF.register_deprecated_opt('host', ('DEFAULT', 'rabbit_host'), group='rabbit')
I guess there are three ways you could represent a group and option ... 1) As a string, which is nasty but simple: 'DEFAULT.rabbit_host' 2) As a tuple, which is nicer but the semantics of the order of the elements is a little magic: ('DEFAULT', 'rabbit_host') 3) As a dict, which is much clearer but also more verbose: dict(name='rabbit_host', group='DEFAULT') although, in the case of the default group, maybe we can just do 'rabbit_host' and assume DEFAULT because no group is specified. Meh. (A hedged sketch using deprecated-name support appears below, after the cinder-hosts-extension entry.) Project: cinder Series: grizzly Blueprint: cinder-apiv2 Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/cinder-apiv2 Spec URL: http://wiki.openstack.org/CinderAPIv2 After talking with the rest of the Cinder team, there is a general consensus to follow how Glance handles API versioning. The changes to the cinder API require us to split off into another version due to interface and response changes. The purpose of this blueprint is to record the API improvements discussed at the Grizzly design summit to create the v2.0 cinder API. The etherpad from the original discussion is available at: https://etherpad.openstack.org/grizzly-cinder-api2-0 General Requirements ==================== 1. The current v1.0 cinder API will still be made available for backwards compatibility, but marked and documented as deprecated. 2. The v2.0 API will start with the current v1.0 API, updated with the new features. 3. The v2.0 API will have a version prefix of /v2/ Project: cinder Series: grizzly Blueprint: cinder-common-rootwrap Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/cinder-common-rootwrap Spec URL: None Rootwrap is moving to openstack-common. Once this is completed, Cinder should make use of the openstack.common version of rootwrap. Project: cinder Series: grizzly Blueprint: cinder-hosts-extension Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/cinder-hosts-extension Spec URL: None Currently Cinder has no service management, not even things like cinder-manage service list. I started looking at a service extension, but we should make this consistent with what Nova has and have a hosts extension. This is an admin extension to the OpenStack API implementing methods similar to those which Nova offers. At the very least, having a status report of Cinder services and what nodes they're running on would be very helpful. This could be expanded to cover a number of things in the future as well: check iSCSI targets? Verify connections? 
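Relating to the cfg-move-opts-between-groups entry above: a hedged sketch of how a renamed and moved option can be declared, assuming the deprecated_name/deprecated_group parameters that oslo.config eventually grew (the blueprint text above predates the final API, so treat this as illustrative rather than the blueprint's own code).

    from oslo_config import cfg   # later packaging; 'openstack.common.cfg' in the Grizzly era

    CONF = cfg.CONF

    # New home: [rabbit] host, still honouring the old DEFAULT.rabbit_host name.
    rabbit_opts = [
        cfg.StrOpt('host',
                   default='localhost',
                   deprecated_name='rabbit_host',
                   deprecated_group='DEFAULT',
                   help='RabbitMQ host (was DEFAULT.rabbit_host)'),
    ]
    CONF.register_opts(rabbit_opts, group='rabbit')

    CONF([], project='example')   # parse an empty command line for this sketch
    print(CONF.rabbit.host)       # old configs setting [DEFAULT] rabbit_host still resolve here

With this approach, existing configuration files keep working and a deprecation warning is logged when the old name is used, which is exactly the behaviour the blueprint asks for.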
Project: neutron Series: grizzly Blueprint: cisco-plugin-cleanup Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/cisco-plugin-cleanup Spec URL: None Scope: Removal of unused code from the Cisco plugin. Use Cases: Quantum with the Cisco plugin. Implementation Overview: There is a bit of old unused code in some of the Cisco plugin classes. This needs to be removed to make the code more readable and easier to understand/debug. Support for the ucs plugin also needs to be removed. Data Model Changes: n/a Configuration variables: This will remove all configuration values dealing with the ucs/vm-fex plugin. APIs: n/a Plugin Interface: n/a Required Plugin support: n/a Dependencies: n/a CLI Requirements: n/a Horizon Requirements: n/a Usage Example: n/a Test Cases: Remove test cases no longer required. Project: neutron Series: grizzly Blueprint: cisco-plugin-enhancements Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/cisco-plugin-enhancements Spec URL: None Scope: A few enhancements to the Cisco Nexus plugin to support multiple switches and some intelligence in the plugin when trunking VLANs on those switches. Use Cases: Quantum with the Cisco plugin (nexus subplugin). Implementation Overview: The plugin communicates with the Nova API when an instance is networked during creation and grabs the host that the instance is running on. It cross-references that host with a topology config that tells the plugin which port/switch that host is connected to, and trunks the VLAN on that switch/port. Data Model Changes: Two fields added to the NexusPortBinding model: switch_ip = Column(String(255)) and instance_id = Column(String(255)) (see the model sketch below, after the common-db entry). Configuration variables: nexus.ini config changed to:
[SWITCH]
# IP address of the switch
[[172.18.112.37]]
# Hostname of the node
[[[asomya-controller.cisco.com]]]
# Port this node is connected to on the nexus switch
ports=1/1
# Hostname of the node
[[[asomya-compute.cisco.com]]]
# Port this node is connected to on the nexus switch
ports=1/2
# Port number where SSH will be running on the Nexus switch, e.g. 22 (default)
[[[ssh_port]]]
ssh_port=22
credentials.ini should include keystone credentials and endpoint:
# Provide credentials and endpoint
# for keystone here
[keystone]
auth_url=http://172.18.112.47:35357/v2.0
username=admin
password=abcd
APIs: n/a Plugin Interface: n/a Required Plugin support: The Cisco plugin configuration should include the aforementioned values. Dependencies: nova-api, keystone CLI Requirements: n/a Horizon Requirements: n/a Usage Example: n/a Test Cases: n/a Project: oslo Series: grizzly Blueprint: common-binaries Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/common-binaries Spec URL: None In order to enable the move of rootwrap to openstack-common, it is necessary to add support for copying binaries over. That involves renaming them on copy as well as making a few text substitutions. update.sh needs to be transmogrified to support that. Project: oslo Series: grizzly Blueprint: common-db Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/common-db Spec URL: None Multiple projects in OpenStack are sharing the same database code. Additionally, some code proposed for common is seeking to use the database. To maximize reuse and to facilitate using databases from within common, the database code must be brought into common. 
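Relating to the data-model change in the cisco-plugin-enhancements entry above: a minimal SQLAlchemy sketch of a NexusPortBinding model carrying the two new columns. The table name and the surrounding columns are assumptions for illustration; only switch_ip and instance_id come from the blueprint text.

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class NexusPortBinding(Base):
        """Illustrative model: binds a Nexus switch port to a VLAN/instance."""
        __tablename__ = 'nexus_port_bindings'   # assumed table name

        id = Column(Integer, primary_key=True)
        port_id = Column(String(255))            # assumed pre-existing column
        vlan_id = Column(Integer)                 # assumed pre-existing column
        # The two fields added by this blueprint:
        switch_ip = Column(String(255))
        instance_id = Column(String(255))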
Project: oslo Series: grizzly Blueprint: common-filters Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/common-filters Spec URL: None The filter scheduler is being used by multiple projects (so far, Nova and Cinder). The implementations of the filter scheduler in these projects share quite a bit of common code - for starters, filters and cost functions. Getting the common code into oslo can reduce a lot of porting (copying/pasting) work for projects that use the filter scheduler. Project: oslo Series: grizzly Blueprint: common-rootwrap Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/common-rootwrap Spec URL: None Rootwrap is used in Nova, Cinder and Quantum. A common version should live in openstack-common instead. We'll start by moving nova-rootwrap, then cinder-rootwrap and, time permitting, quantum-rootwrap. Project: oslo Series: grizzly Blueprint: common-weights Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/common-weights Spec URL: None The filter scheduler is being used by multiple projects (so far, Nova and Cinder). The implementations of the filter scheduler in these projects share quite a bit of common code. Getting the common code into oslo can reduce a lot of porting (copying/pasting) work for projects that use the filter scheduler. This is the blueprint for the weighing functions (a.k.a. cost functions). Project: nova Series: grizzly Blueprint: compute-driver-events Design: Pending Approval Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/compute-driver-events Spec URL: http://wiki.openstack.org/ComputeDriverEvents This provides infrastructure for compute drivers to emit asynchronous events to report on important changes to the state of virtual machine instances. The compute manager will be adapted to make use of these events, as an alternative to running a periodic task to reconcile state. The libvirt driver is enhanced to emit such events. The result is that Nova will immediately see when any virtual instance is stopped, instead of suffering (up to) a 10-minute delay. Project: cinder Series: grizzly Blueprint: coraid-volume-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/coraid-volume-driver Spec URL: None A volume driver will be provided to support CORAID hardware storage appliances and AoE (using a software initiator released under the GPL). The following operations will be supported (see the interface sketch below): --Volume Creation --Volume Deletion --Volume Attach --Volume Detach --Snapshot Creation --Snapshot Deletion --Create Volume from Snapshot --Volume Stats Volume types and EtherCloud automation features will be added to provide a fully automated provisioning workflow and a scale-out SAN: http://www.coraid.com/products/management_automation The initiator is supported on all Linux hosts (released under the GPL); the ATA over Ethernet (AoE) Linux driver for all 3.x and 2.6 kernels is available here: http://support.coraid.com/support/linux/ and ftp://ftp.alyseo.com/pub/partners/Coraid/Drivers/Linux/ The driver will only work when operating against EtherCloud ESM, VSX and SRX (Coraid hardware): http://www.coraid.com/products/scale_out_san Note: Linux software targets (vblade, kvblade, ggaoed...) are not supported. 
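Relating to the coraid-volume-driver entry above: a hedged outline of the operations it enumerates, using method names modeled on the Cinder volume driver interface of that era. Signatures are simplified and the class is a stub, not the actual driver.

    class CoraidDriverSketch(object):
        """Outline of the operations the blueprint lists (AoE backend assumed)."""

        def create_volume(self, volume):
            raise NotImplementedError   # allocate a LUN on the Coraid ESM/VSX/SRX

        def delete_volume(self, volume):
            raise NotImplementedError

        def create_snapshot(self, snapshot):
            raise NotImplementedError

        def delete_snapshot(self, snapshot):
            raise NotImplementedError

        def create_volume_from_snapshot(self, volume, snapshot):
            raise NotImplementedError

        def initialize_connection(self, volume, connector):
            # attach: return the AoE shelf/LUN info the compute side needs
            raise NotImplementedError

        def terminate_connection(self, volume, connector, **kwargs):
            # detach
            raise NotImplementedError

        def get_volume_stats(self, refresh=False):
            raise NotImplementedError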
Project: nova Series: grizzly Blueprint: coverage-extension Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/coverage-extension Spec URL: None To better understand how Tempest and other external test tools exercise nova, we should have a way to enable coverage reporting from within nova services by an external program for test runs. Once available in nova, tempest and the devstack gate will be enhanced to make this a nightly runnable report. Project: nova Series: grizzly Blueprint: db-api-cleanup Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/db-api-cleanup Spec URL: None The database access layer has grown organically and could use some housekeeping. For both nova/db/api and nova/db/sqlalchemy/api, let's clean up the API itself. * identify and remove unused methods * consolidate duplicate methods when possible * ensure SQLAlchemy objects are not leaking out of the API * ensure related methods are grouped together and named consistently Project: nova Series: grizzly Blueprint: db-archiving Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/db-archiving Spec URL: http://etherpad.openstack.org/DatabaseArchiving The outcome of the talk at the Grizzly summit was a general agreement that: * table bloat with deleted records is bad for performance * some deployers may want to keep deleted records around for various reasons * some processes may rely on recently-deleted records There seemed to be several ways to deal with this, some short-term and some long-term: 1. event/cron that moves deleted=1 records to a shadow table 2. on_delete trigger that moves records to a shadow table 3. amqp message to broadcast deleted records The current plan is to move forward with (1). Project: nova Series: grizzly Blueprint: db-unique-keys Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/db-unique-keys Spec URL: None * Change soft delete to `deleted`=`id` instead of `deleted`=1. * Add unique indexes on (`col`, `deleted`) for critical tables. Project: ceilometer Series: grizzly Blueprint: default-config-file Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/default-config-file Spec URL: None The devstack code for setting up a default configuration for ceilometer copies the nova configuration file and modifies it. We should provide our own default files, like cinder does. After we add new default files to ceilometer's code we can update the devstack script to use those files as their source. Project: keystone Series: grizzly Blueprint: default-domain Design: Discussion Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/default-domain Spec URL: None With the introduction of domains in Identity API v3, all projects (tenants) and users must be owned by a specific domain. However, the v2 API is not domain-aware. Example issues: - When an admin user creates a new tenant or user on the v2 API, which domain is that resource owned by according to the v3 API? - When an admin user lists all tenants or users in the system, which resources are returned according to the v3 API? v2 clients won't understand the domain_id attribute. - (with domain-scoped user names in v3) If a user attempts to authenticate with a username, which domain should that user exist in? 
- (with domain-scoped project names in v3) If a user attempts to authorize with a project by name, which domain should that project exist in? To ease the migration path from v2 to v3, it would be useful if all existing projects & users were explicitly assigned a domain for use on v3, and all v2 operations were assumed to apply to that one domain. Therefore, all of the questions above can be answered in the scope of this 'default' domain. For deployments using the SQL-based identity driver, a data migration could create the default domain (id='default', name='Default'), and then attach all existing projects & users to it (if any exist); see the migration sketch below, after the direct-file-copy entry. A new configuration variable, `default_domain_id`, could be used to allow users of all identity backends to specify the domain upon which operations on the v2 API should apply. This variable could default to simply `default`. LDAP deployments would need to ensure that this domain_id exists if they intend to maintain support for v2 API users. The value of the `default_domain_id` should have no impact on the v3 API, with one exception: API users should not be allowed to delete this domain. DELETE /v3/domains/{default_domain_id} should result in a 403 Forbidden. Actually deleting this domain should be a carefully orchestrated manual process, as configuration changes would also be involved (e.g. removing the v2 API pipeline from the deployment) to avoid breaking the deployment. Projects moved out of the default_domain_id on the v3 API would then become inaccessible from the v2 API, etc. The following pairs of calls would then be equivalent: GET /v2.0/users GET /v3/users?domain_id={default_domain_id} GET /v2.0/tenants GET /v3/projects?domain_id={default_domain_id} POST /v2.0/tokens {'auth': {'projectName': 'foobar'}} POST /v3/auth {'auth': {'projects': [{'name': 'foobar', 'domain_id': 'default'}]}} etc. Project: nova Series: grizzly Blueprint: default-rules-for-default-security-group Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/default-rules-for-default-security-group Spec URL: None Currently, no rules are added when the default security group is created. Thus instances can only be accessed by instances from the same group, as long as you don't modify the default security group or use another one. Nova should provide a hook mechanism to add customized rules when creating default security groups, so that we don't have to remind users to modify the default security group the first time they create instances. HP Cloud, which is built on OpenStack, now permits instances to be sshed to or pinged in the default security group. This should be the case here too. Project: nova Series: grizzly Blueprint: delete-nova-volume Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/delete-nova-volume Spec URL: None Nova-volume was deprecated in Folsom. We need it gone in Grizzly! Project: nova Series: grizzly Blueprint: direct-file-copy Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/direct-file-copy Spec URL: None Glance v2 allows an administrator to enable direct_url metadata to be delivered to the glance client. Under the right circumstances this information can be used to more efficiently get the image. Nova-compute could benefit from this by invoking a copy when it knows it has access to the same file system as glance. A configuration option would enable this behavior. 
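Relating to the default-domain entry above: a hedged sketch of the data migration it describes for the SQL identity driver — create the default domain, then attach existing users and projects to it. Table and column names ('domain', 'user', 'project', 'domain_id', 'enabled') are assumptions for illustration, not the actual Keystone schema or migration.

    import sqlalchemy as sa

    DEFAULT_DOMAIN_ID = 'default'

    def upgrade(engine):
        """Create the default domain and adopt pre-existing users/projects into it."""
        meta = sa.MetaData()
        domain = sa.Table('domain', meta, autoload_with=engine)    # assumed table names
        user = sa.Table('user', meta, autoload_with=engine)
        project = sa.Table('project', meta, autoload_with=engine)

        with engine.begin() as conn:
            conn.execute(domain.insert().values(id=DEFAULT_DOMAIN_ID,
                                                name='Default',
                                                enabled=True))
            # Attach all existing rows to the default domain, if any exist.
            conn.execute(user.update().values(domain_id=DEFAULT_DOMAIN_ID))
            conn.execute(project.update().values(domain_id=DEFAULT_DOMAIN_ID))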
Project: keystone Series: grizzly Blueprint: domain-name-spaces Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/domain-name-spaces Spec URL: https://docs.google.com/document/d/1c6Tvr_zRMOP2mJCQN9lrfJjxGaXAExXiFlwMDvmXbl4/edit With the v3 API, the Domain concept is designed to encapsulate users and projects representing some kind of logical entity (e.g. a division in an enterprise, a customer of a service provider etc.). However, in an effort to preserve backward compatibility with the v2 API, the name spaces for user-defined identifiers (e.g. user name, project name) are still required to be globally unique, rather than simply unique to the holding domain. This requirement for global uniqueness will cause problems for certain cloud providers - hence what is needed is for each domain to have a private name space. From an implementation point of view, migrations from v2 Identity implementations are protected, since they will initially be stored in a single (Default) domain (which means the v2 uniqueness requirements will of course still hold true within that single domain). Project: keystone Series: grizzly Blueprint: domain-role-assignment Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/domain-role-assignment Spec URL: https://review.openstack.org/#/c/18706/ The current v3 API allows the assignment of a role to a domain-user pair as well as a project-user pair. In the former case, I believe the intention is that this is interpreted as a request to assign the role in question to all enclosed projects within that domain. However, there may be cases where this is not the desired outcome - for instance, a role for which a user is only allowed to CRUD users within that domain (this is the classic requirement for Domain Admins). Initially, only access to keystone itself would honor a domain in the token (now that, in v3, keystone supports RBAC on its own APIs), but over time some other projects may wish to do so (e.g. glance, so that we can have images that are domain-wide, managed by a suitably permissioned administrator). The API call for role assignment to a domain should therefore be re-defined to mean assigning a role to the domain container. Project: keystone Series: grizzly Blueprint: domain-scoping Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/domain-scoping Spec URL: https://docs.google.com/document/d/14l6Kuc5Vrdi-5BXlqsYRekgwqUclezHZqr_3uXHiEPI/edit The v3 API has introduced the concept of Domains, being the container that holds users and projects. For many cloud providers, the domain will be the object that really maps to a hosted customer, within which that customer will CRUD their users and projects. To facilitate this, the customer will want to create users that have "roles" that are domain-wide (e.g. on-board new users, maintain a set of standard images for all projects etc.). To aid this, we should support the scoping of a token to a Domain (either at authentication or in a subsequent /tokens call). Project: cinder Series: grizzly Blueprint: driver-cleanup Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/driver-cleanup Spec URL: None Clean up the volume drivers and have only a single driver per file. 
List of files needing to be cleaned up: - driver.py (VolumeDriver, ISCSIDriver, FakeISCSIDriver, RBDDriver, SheepDogDriver, LoggingVolumeDriver (probably needs to be deleted)) - netapp.py (NetAppISCSIDriver, NetAppCmodeISCSIDriver) - san.py (SanISCSIDriver, SolarisISCSIDriver, HpSanISCSIDriver) It might also be good to extract the LVMDriver from the base VolumeDriver. Planning on putting all the drivers under cinder/volume/drivers/* Look at options to get away from using module names for drivers. Look at how glance is doing it, maybe? Project: cinder Series: grizzly Blueprint: emc-volume-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/emc-volume-driver Spec URL: http://wiki.openstack.org/Cinder/EMCVolumeDriver A volume driver will be provided to support EMC storage in the backend. It uses EMC's SMI-S software to communicate with VNX or VMAX/VMAXe arrays. Project: ceilometer Series: grizzly Blueprint: enable-disable-plugin-load Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/enable-disable-plugin-load Spec URL: None The original architecture design document proposed asking plugins at startup whether or not they should be enabled, to avoid polling them later when we know they cannot (or will not) return any useful data. That was never implemented, and needs to be. Project: nova Series: grizzly Blueprint: extra-specs-in-nova-client Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/extra-specs-in-nova-client Spec URL: None Add support to python-novaclient so that it can list/set/unset extra_specs. Project: cinder Series: grizzly Blueprint: fibre-channel-block-storage Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/fibre-channel-block-storage Spec URL: http://wiki.openstack.org/Cinder/FibreChannelSupport Currently block storage can be attached to hosts via iSCSI. Add support for attaching block storage to hosts via Fibre Channel SANs as well. Project: horizon Series: grizzly Blueprint: fix-legacy-dashboard-names Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/fix-legacy-dashboard-names Spec URL: None The names "nova" and "syspanel" in the code exist only for legacy reasons and confuse new contributors. They should be renamed to reflect their proper "project" and "admin" names respectively. Project: horizon Series: grizzly Blueprint: flavor-extra-specs Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/flavor-extra-specs Spec URL: None Nova supports "extra specs" on flavors, which allow for more intelligent scheduling and other interesting use cases. Horizon should provide an interface to that data. Project: nova Series: grizzly Blueprint: general-bare-metal-provisioning-framework Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/general-bare-metal-provisioning-framework Spec URL: http://etherpad.openstack.org/N8NsHk447X We have already implemented bare-metal provisioning of compute nodes for Tilera TILEmpower 64-core tiled processor systems. Now we (USC/ISI + NTT DOCOMO + VirtualTech Japan Inc.) 
want to propose a general baremetal provisioning framework to support (1) PXE and non-PXE (Tilera) provisioning with a bare-metal DB (Review#1) (2) Architecture-specific provisioning entity (Review#1) (3) Fault tolerance of bare-metal nodes (Review#2) (4) Openflow related stuff (Review#3) http://wiki.openstack.org/GeneralBareMetalProvisioningFramework http://etherpad.openstack.org/FolsomBareMetalCloud Project: cinder Series: grizzly Blueprint: generic-iscsi-copy-vol-image Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/generic-iscsi-copy-vol-image Spec URL: None Implements a generic version of copy_volume_to_image and copy_image_to_volume for iSCSI drivers. Project: nova Series: grizzly Blueprint: get-password Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/get-password Spec URL: None Some guests need a password in order to be used. We need a secure way to generate an encrypted password and let the user retrieve it securely. Although we can do this using the console and an init script[1], it would be much nicer to have support in the API for such a thing. The high-level goal is: nova get-password (returns the password for the VM). The steps involved are: a) Add a post location to nova-api-metadata that can receive the encrypted password (should be write-once) b) Add an extension to the API allowing get_password and reset_password (reset simply clears the value) c) Allow an alternative method for xenapi (password could be encrypted and written by nova or the guest agent) d) Work with cloud-init for it to support generating an encrypted password and posting it e) Work with the hyper-v team to make sure their cloud-init support includes it f) Add code to python-novaclient for decrypting the password (a decryption sketch appears below, after the glance entries) [1] https://gist.github.com/4008762 Project: glance Series: grizzly Blueprint: glance-api-v2-image-sharing Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/glance-api-v2-image-sharing Spec URL: None This is a placeholder blueprint to cover the work to be done in Grizzly to expose a to-be-determined image sharing API. Project: glance Series: grizzly Blueprint: glance-common-image-properties Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/glance-common-image-properties Spec URL: http://wiki.openstack.org/glance-common-image-properties In order to make images more easily searchable in different openstack installations, it would be useful to add some common properties to Glance images that identify operating system characteristics. Project: glance Series: grizzly Blueprint: glance-domain-logic-layer Design: Drafting Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/glance-domain-logic-layer Spec URL: None There is a lot of logic that lives in the db layer and the v1/v2 API layers that should really be handled in a single layer: - policy checking - notifications - etc. Project: glance Series: grizzly Blueprint: glance-simple-db-parity Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/glance-simple-db-parity Spec URL: None The 'simple' database driver is a subset of the 'sqlalchemy' driver, but it should really match 100%. Let's use this opportunity to beef up the db testing at the same time. 
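Relating to the get-password entry above: a hedged sketch of the client-side step (f) — decrypting a password that the guest posted to the metadata service, assuming it was RSA-encrypted with the instance keypair's public key and base64-encoded. It shells out to the openssl CLI to keep the sketch dependency-free; the exact encoding and padding are assumptions.

    import base64
    import subprocess

    def decrypt_password(encrypted_b64, private_key_path):
        """Decrypt a base64-encoded, RSA-encrypted password with 'openssl rsautl'."""
        ciphertext = base64.b64decode(encrypted_b64)
        proc = subprocess.run(
            ['openssl', 'rsautl', '-decrypt', '-inkey', private_key_path],
            input=ciphertext, stdout=subprocess.PIPE, check=True)
        return proc.stdout.decode('utf-8')

    # Hypothetical usage: password = decrypt_password(api_response, 'id_rsa.pem')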
Project: cinder Series: grizzly Blueprint: glusterfs-support Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/glusterfs-support Spec URL: None Use GlusterFS as a volume backend. Currently Cinder allows use of an NFS export to host volume data. This blueprint aims to enable support for GlusterFS to be used the same way that NFS is used. Like the NFS driver, it supports basic volume operations, but not snapshots or clones. This means introducing a new Cinder GlusterFS driver, and Nova support for mounting it. Since the semantics of using Gluster are similar to NFS, the current plan is to have a base "Remote FS" driver class that both the NFS and GlusterFS drivers can use to share some common code. http://www.gluster.org/ Project: nova Series: grizzly Blueprint: grizzly-hyper-v-nova-compute Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/grizzly-hyper-v-nova-compute Spec URL: None The Folsom release saw the reintroduction of a Hyper-V compute driver. This blueprint is related to the new features under development targeting the Grizzly release. Project: ceilometer Series: grizzly Blueprint: hbase-storage-backend Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/hbase-storage-backend Spec URL: http://wiki.openstack.org/Ceilometer/blueprints/hbase-storage-backend Add the HBase storage backend to Ceilometer, in addition to the current backend options: MongoDB, SQLAlchemy. With all the power of HDFS and HBase, the HBase storage backend will make Ceilometer more adaptable to providers' existing architectures, and it will also provide extensive data analysis possibilities through MapReduce frameworks such as Hive. Project: neutron Series: grizzly Blueprint: high-available-quantum-queues-in-rabbitmq Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/high-available-quantum-queues-in-rabbitmq Spec URL: None Quantum now supports RabbitMQ, which can easily be started in active/passive mode with Pacemaker + DRBD. But it would be interesting to integrate the active/active feature and to declare the queues with an x-ha-policy entry. It would be nice to add a config entry to be able to declare the queues in that way. The code is inspired by Nova: https://review.openstack.org/#/c/13665/ (by Eugene Kirpichov) Code here: https://review.openstack.org/#/c/13760/ RabbitMQ HA page: http://www.rabbitmq.com/ha.html Project: nova Series: grizzly Blueprint: host-api-prep-for-cells Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/host-api-prep-for-cells Spec URL: None Moves logic from nova/api/openstack/compute/contrib/hosts.py into nova/compute/api.py. This is in preparation for cells, which provides its own compute api and has to proxy some calls to child cells. Project: cinder Series: grizzly Blueprint: hp3par-volume-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/hp3par-volume-driver Spec URL: None Add a cinder volume driver to support the HP 3PAR array. This is an iSCSI driver. It should support: * Volume Creation * Volume Deletion * Snapshot Creation * Snapshot Deletion * Create Volume from Snapshot * Volume Attach * Volume Detach. Once Volume Types are ironed out, the driver should use the volume type metadata for volume creation. 
3PAR arrays have the ability to create volumes on different Common Provisioning Groups (CPGs). A CPG can specify the RAID level for volumes created on it. Cinder volume types should be able to be mapped to different CPG types to support the creation of volumes with different RAID levels. Project: cinder Series: grizzly Blueprint: huawei-volume-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/huawei-volume-driver Spec URL: None A volume driver will be provided to support HUAWEI storage. This is an iSCSI driver. The following operations will be supported on OceanStor T series V100 and Dorado series arrays: --Volume Creation --Volume Deletion --Volume Attach --Volume Detach The following operations will be supported on OceanStor T series V100 and Dorado 5100 arrays: --Snapshot Creation --Snapshot Deletion --Create Volume from Snapshot Project: nova Series: grizzly Blueprint: hyper-v-compute-resize Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/hyper-v-compute-resize Spec URL: None The Nova compute Hyper-V driver is currently lacking the resize feature due to the complexity of managing non-native filesystems on Windows (e.g. ext3 / ext4). A first implementation will handle migration and virtual disk resize support. A subsequent patch targeting Havana will implement guest OS resize through worker VMs spawned from lightweight Linux images (e.g. OpenWRT) to resize attached ext2/3/4 volumes, while NTFS volumes can be mounted as local loopback devices on the hypervisor for resizing (Windows Server 2012). Implementation details ------------------------------- Resize / cold migration is implemented by copying the local disks to a remote SMB share, identified by the configuration option HYPERV.instances_path_share or, if empty, by an administrative share with a remote path corresponding to the configuration option instances_path. The source instance directory is renamed by adding a suffix "_revert" and preserved until the migration is confirmed or reverted. In the former case the directory will be deleted, and in the latter renamed to the original name. The VM corresponding to the instance is deleted on the source host and recreated on the target. Any mapped volume is disconnected on the source and reattached to the new VM on the target host. In case of resize operations, the local VHD file is resized according to the new flavor limits. Due to VHD limitations, an attempt to resize a disk to a smaller size will result in an exception. In the case of differencing disks (CoW), should the base disk be missing in the target host's cache, it will be downloaded and reconnected to the copied differencing disk. Same-host migrations are supported by using a temporary directory with suffix "_tmp" during disk file copy. (A condensed sketch of this flow appears below, after the hyper-v-config-drive-v2 entry.) Project: nova Series: grizzly Blueprint: hyper-v-config-drive-v2 Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/hyper-v-config-drive-v2 Spec URL: None Support for config drive v2 configuration on Hyper-V, based on https://blueprints.launchpad.net/nova/+spec/config-drive-v2 Based on the creation of an ISO image converted to a raw VHD for compliance with the current cloud-init specs. 
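Relating to the hyper-v-compute-resize entry above: a condensed, hedged sketch of the disk-copy and revert-directory handling it describes. Helper names (copy_file, rename, makedirs, remove_tree) and the instances path are assumptions standing in for the real Hyper-V utility classes; this is not the actual driver code.

    def instance_dir(instance_name, instances_path='C:/Instances'):
        # Assumed local instances path; the real value comes from configuration.
        return '%s/%s' % (instances_path, instance_name)

    def migrate_disk_files(instance_name, disk_paths, dest_share, same_host,
                           copy_file, rename, makedirs):
        """Copy local disks to the target share, keeping a local '_revert' backup."""
        src_dir = instance_dir(instance_name)
        dest_dir = '%s/%s' % (dest_share, instance_name)
        if same_host:
            dest_dir += '_tmp'      # same-host migrations copy via a temp directory
        makedirs(dest_dir)
        for path in disk_paths:
            copy_file(path, dest_dir)
        # Preserve the source directory until the migration is confirmed or reverted.
        rename(src_dir, src_dir + '_revert')
        return dest_dir

    def confirm_migration(instance_name, remove_tree):
        remove_tree(instance_dir(instance_name) + '_revert')   # confirmed: drop backup

    def revert_migration(instance_name, rename):
        revert_dir = instance_dir(instance_name) + '_revert'
        rename(revert_dir, instance_dir(instance_name))        # revert: restore original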
Project: nova Series: grizzly Blueprint: hyper-v-testing-serialization-improvements Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/hyper-v-testing-serialization-improvements Spec URL: None The current implementation of the Hyper-V tests uses serialized stubs in pickled and gzipped format, as documented here: https://github.com/openstack/nova/blob/master/nova/tests/hyperv/README.rst The serialized binary format generates management issues in Git and concerns related to the opacity of the blobs, and needs to be changed to JSON, as discussed in the following Nova meeting: http://eavesdrop.openstack.org/meetings/nova/2012/nova.2012-11-29-21.01.html Project: horizon Series: grizzly Blueprint: iconify-buttons Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/iconify-buttons Spec URL: None Bootstrap includes a good set of icons, and we can both slim down our table header space usage and make things more visually intuitive by using icons on our action buttons instead of text. The classes are already on the buttons; it mostly just involves writing a little bit of CSS. Project: horizon Series: grizzly Blueprint: image-upload Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/image-upload Spec URL: None Ability to upload an image from a file on disk via the UI. Strong preference goes to a solution that does not involve proxying the file through the Horizon server (since allowing arbitrary upload of potentially very large files is dangerous). Project: cinder Series: grizzly Blueprint: implement-lvm-thin-provisioning Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/implement-lvm-thin-provisioning Spec URL: None As of LVM2 version 2.02.89 the ability to do thin provisioning was made available in LVM; this provides some cool new features but also addresses some problems with things like terrible LVM snapshot performance. Currently the version of LVM in Ubuntu 12.04 does NOT support LVM thin; however, an experimental PPA from Brightbox, which is a backport from Quantal, has been proposed to Canonical to be pulled in. For some users the experimental PPA is a better option than dealing with some of the current issues in the standard LVM2 version of Precise (including the dd hangs on secure delete). For Precise: Prereqs: LVM version: 2.02.95(2) (2012-03-06) Library version: 1.02.74 (2012-03-06) Driver version: 4.22.0 To get these on Precise we need an experimental PPA from Brightbox: sudo add-apt-repository ppa:brightbox/experimental; sudo apt-get install lvm2 Uses the pool_size config option to determine how large of a thin pool to create. Defaults to '0', which will use the entire VG. 
The change is introduced as a new driver that basically just inherits from the existing LVM driver and would be used by adding the following driver selection to your cinder.conf file: volume_driver=cinder.volume.drivers.lvm.ThinLVMVolumeDriver (see the lvcreate sketch below, after the iscsi-multipath entry). Project: keystone Series: grizzly Blueprint: implement-v3-core-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/implement-v3-core-api Spec URL: None Initial implementation and tests around https://blueprints.launchpad.net/keystone/+spec/draft-v3-blueprint Project: glance Series: grizzly Blueprint: importing-rootwarp Design: Obsolete Lifecycle: Complete Impl: Good progress Link: https://blueprints.launchpad.net/glance/+spec/importing-rootwarp Spec URL: None Allow Glance to execute system commands from the functional code: 1. Import Oslo processutils to Glance. 2. Add the necessary command execution wrapper functions to utils.py within Glance. Project: horizon Series: grizzly Blueprint: improve-quantum-summary-table Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/improve-quantum-summary-table Spec URL: None Improves the summary table and detail info view for each network resource. In the Folsom implementation, some useful fields are not displayed. Project: nova Series: grizzly Blueprint: instance-actions Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/instance-actions Spec URL: http://wiki.openstack.org/NovaInstanceActions Create a new instance_actions table, and an API extension to access it. This would provide a mechanism for better error reporting, and provide users insight into what has been done with their instance. Project: cinder Series: grizzly Blueprint: instance-attached-field Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/instance-attached-field Spec URL: None Add a field to display what instance a volume is being attached to when in the "attaching" state. Attaching can take some time, so it would be nice to see what it will be attached to once it completes. Project: heat Series: grizzly Blueprint: instance-update-stack Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/instance-update-stack Spec URL: None Currently any update to an Instance resource passed into UpdateStack will result in the instance being replaced. Implement the instance handle_update hook so our update behavior is closer to that defined for AWS instances; in particular we should allow instance metadata to be updated such that instance reconfiguration via cfn-hup is possible. Project: cinder Series: grizzly Blueprint: iscsi-chap Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/iscsi-chap Spec URL: http://wiki.openstack.org/IscsiChapSupport Add CHAP support to the basic volume driver. Verify that CHAP support works. The spec is on this page: http://wiki.openstack.org/IscsiChapSupport. Project: nova Series: grizzly Blueprint: iscsi-multipath Design: New Lifecycle: Not started Impl: Unknown Link: https://blueprints.launchpad.net/nova/+spec/iscsi-multipath Spec URL: None Use the iSCSI and multipath device directly instead of copying the base image from the iSCSI target service. If the base image is served as an iSCSI/IET remote target, this function will speed up the VM boot process the first time a base image is booted on a nova compute node. 
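Relating to the implement-lvm-thin-provisioning entries above: a hedged sketch of the underlying lvcreate calls a thin-LVM driver would issue — create a thin pool sized by pool_size (or the whole VG when it is 0), then carve thin volumes out of it. The lvcreate flags are standard LVM2, but the wrapper functions and names are illustrative only; the real driver runs these through rootwrap.

    import subprocess

    def _run(*cmd):
        # The real driver goes through rootwrap; plain subprocess keeps the sketch small.
        subprocess.check_call(cmd)

    def create_thin_pool(vg_name, pool_name='cinder-pool', size_gb=0):
        """Create a thin pool; size_gb=0 mirrors pool_size=0 (use the entire VG)."""
        if size_gb:
            _run('lvcreate', '-T', '-L', '%dG' % size_gb,
                 '%s/%s' % (vg_name, pool_name))
        else:
            _run('lvcreate', '-T', '-l', '100%FREE',
                 '%s/%s' % (vg_name, pool_name))

    def create_thin_volume(vg_name, pool_name, lv_name, size_gb):
        """Carve a thinly provisioned logical volume out of the pool."""
        _run('lvcreate', '-T', '%s/%s' % (vg_name, pool_name),
             '-V', '%dG' % size_gb, '-n', lv_name)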
Project: keystone Series: grizzly Blueprint: keystone-ipv6-support Design: Discussion Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/keystone-ipv6-support Spec URL: None Originally listed as https://bugs.launchpad.net/keystone/+bug/856887, this is a blueprint to add IPv6 support to Keystone. Project: neutron Series: grizzly Blueprint: lbaas-namespace-agent Design: Pending Approval Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/lbaas-namespace-agent Spec URL: None This agent will utilize network namespaces and HAProxy to provide an open source LBaaS implementation. Project: neutron Series: grizzly Blueprint: lbaas-plugin-api-crud Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/lbaas-plugin-api-crud Spec URL: None Work items for LBaaS Python APIs / CRUD Operations: - Python plugin API (one-to-one mapping of WS API) - SQLAlchemy data models - CRUD operations (this should enable use of the API with what is effectively a "null" driver) Project: neutron Series: grizzly Blueprint: lbaas-restapi-tenant Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/lbaas-restapi-tenant Spec URL: http://wiki.openstack.org/Quantum/LBaaS/API_1.0 This BP describes the tenant LBaaS REST API. It specifies the object model, API definitions, and service operations. Project: nova Series: grizzly Blueprint: libvirt-aoe Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/libvirt-aoe Spec URL: None Add support for attaching block storage to hosts via AoE (ATA over Ethernet) SANs. This blueprint covers the nova changes required to perform the attach/detach of the AoE / Coraid storage to a KVM VM. The nova and cinder specification URL is: https://blueprints.launchpad.net/cinder/+spec/coraid-volume-driver The initiator driver is supported on all Linux hosts (released under the GPL); the ATA over Ethernet (AoE) Linux driver for all 3.x and 2.6 kernels is available here: http://support.coraid.com/support/linux/ and ftp://ftp.alyseo.com/pub/partners/Coraid/Drivers/Linux/ Project: nova Series: grizzly Blueprint: libvirt-custom-hardware Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/libvirt-custom-hardware Spec URL: http://wiki.openstack.org/LibvirtCustomHardware Currently the libvirt driver mostly hardcodes the drivers it uses for disk/nic devices in guests according to the libvirt hypervisor in use. There is a crude global option "libvirt_use_virtio_for_bridges" to force use of virtio for NICs. This is not satisfactory since, to have broad guest OS support, the choice of drivers needs to be per-VM. This blueprint will introduce 2 new metadata options for disk images in glance, which will be used by the libvirt driver to override its default choice of NIC/disk driver when spawning VMs. Project: nova Series: grizzly Blueprint: libvirt-fibre-channel Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/libvirt-fibre-channel Spec URL: None Currently block storage can be attached to hosts via iSCSI. Add support for attaching block storage to hosts via Fibre Channel SANs as well. This blueprint covers the nova changes required to perform the attach/detach of the fibre channel storage to a KVM VM. 
The nova and cinder specification URL is http://wiki.openstack.org/Cinder/FibreChannelSupport Project: nova Series: grizzly Blueprint: libvirt-live-snapshots Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/libvirt-live-snapshots Spec URL: None The current implementation of snapshots via the libvirt driver operates completely externally to libvirtd. This is accomplished by suspending (virDomainManagedSave) the instance, then manipulating the underlying backing files via qemu-img or similar tools. The limitation of this approach is that the instance being snapshotted must be shut down (qemu/kvm process stopped), as operating live has the possibility of corrupting the backing file. There was no other option at the time of implementation, keeping in mind the goal remains to always have instance_dir/disk be the active backing root. With Qemu 1.3 and Libvirt 1.0, functionality was introduced that allows us to execute snapshots of running instances. There are several new block management API calls, such as virDomainBlockRebase, virDomainBlockCommit, virDomainBlockPull and so on. Using these new methods and the associated Qemu functionality, we can perform snapshots without changing the instance's power state (running or stopped). We cannot expect to have the latest versions of Qemu and Libvirt available in all deployments, so the current snapshot approach will also be preserved. Users who do satisfy the dependencies will be able to enable the new live snapshot functionality via a configuration option. If this option is set to True, we will additionally validate that the appropriate Qemu/Libvirt are available to us and fall back to the legacy snapshot method accordingly. Live snapshots will be disabled by default. Project: nova Series: grizzly Blueprint: libvirt-spice Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/libvirt-spice Spec URL: None Nova has long had support for VNC consoles to guests. The VNC protocol is fairly limited, lacking support for multiple monitors, bi-directional audio, reliable cut+paste, video streaming and more. SPICE is a new protocol which aims to address all the limitations in VNC and to provide good remote desktop support. As such, Nova should support SPICE in parallel with VNC. The work will cover four areas of OpenStack: SPICE enablement in the Nova libvirt driver and Nova RPC API, support for new commands in python-novaclient, integration into the Horizon dashboard UI, and integration into devstack. spice-html5 along with a websockets proxy will provide an equivalent to noVNC. Project: nova Series: grizzly Blueprint: libvirt-vif-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/libvirt-vif-driver Spec URL: http://wiki.openstack.org/LibvirtVIFDrivers Currently a great burden is placed on the nova sysadmin to correctly configure libvirt VIF driver choices. All of this can & should be done automatically based on information about the type of network Nova is connecting to. The Nova Network driver can trivially provide sufficient data already. The Quantum server can now provide the 'vif_type' data, and the Nova Quantum plugin can fill out most of the rest of the data, until the Quantum server is able to directly return it. The end result will be a single GenericVifDriver impl for libvirt which will work out of the box for all in-tree Quantum / Nova Network drivers. 
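As a rough sketch of the fallback behaviour described in the libvirt-live-snapshots entry above (the option handling and version checks below are assumptions for illustration, not the actual nova code):

# Sketch: only attempt a live snapshot when the operator enables it AND the
# hypervisor stack is new enough; otherwise fall back to the legacy path.
MIN_QEMU = (1, 3, 0)
MIN_LIBVIRT = (1, 0, 0)

def snapshot(instance, use_live_snapshots, qemu_version, libvirt_version):
    if (use_live_snapshots
            and qemu_version >= MIN_QEMU
            and libvirt_version >= MIN_LIBVIRT):
        return live_snapshot(instance)   # would use virDomainBlockRebase & co.
    return cold_snapshot(instance)       # legacy managed-save based path

def live_snapshot(instance):
    return 'live snapshot of %s' % instance

def cold_snapshot(instance):
    return 'cold snapshot of %s' % instance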
The vif_driver config param will remain to cope with the (hopefully unlikely) case where an out-of-tree Quantum plugin doesn't work with this generic driver. Project: nova Series: grizzly Blueprint: lintstack Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/lintstack Spec URL: None Leverage Nova's git history to detect and remove a significant number of pylint false positives, making pylint a useful gating function for gerrit. Project: cinder Series: grizzly Blueprint: lio-iscsi-support Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/lio-iscsi-support Spec URL: None Currently Cinder most often uses tgtd to create iSCSI targets for volumes. This blueprint aims to enable use of LIO, a more modern alternative, by interfacing with python-rtslib. A new iSCSI TargetAdmin class will be created for this. LIO: http://www.linux-iscsi.org/ This came out of the mailing list discussion about the lio-support-via-targetd blueprint, as it is a more straightforward method to support LIO before implementing a targetd driver. Project: cinder Series: grizzly Blueprint: list-bootable-volumes Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/list-bootable-volumes Spec URL: None For ease of UI design - Purely a volume created from glance? - An API call to set the flag for a volume? Project: ceilometer Series: grizzly Blueprint: listener-user-public-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/listener-user-public-api Spec URL: None The notification listener should use the public API, which requires an oslo change first. Project: nova Series: grizzly Blueprint: live-migration-scheduling Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/live-migration-scheduling Spec URL: https://docs.google.com/document/d/1AiMLo2GEqQFNOWMsNATdHhlK5aq7q_vpVYBssybTH60/edit Currently the live-migration operation requires us to specify a destination host for the VM. It would be useful to have the ability to use the scheduler to choose the destination host. Project: neutron Series: grizzly Blueprint: make-string-localizable Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/make-string-localizable Spec URL: None Currently many strings in Quantum are not defined with gettext and are not localizable. The main goal of this blueprint is to make user-visible strings localizable. In order to spread the task and reduce the difficulty of code review, I will split the commit into multiple isolated patches. Each module and each plugin will be a separate patch. Project: nova Series: grizzly Blueprint: memcached-service-heartbeat Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/memcached-service-heartbeat Spec URL: None Today the heartbeat information of Nova services/nodes is maintained in the DB, while each service updates the corresponding record in the Service table periodically (by default -- every 10 seconds), specifying the timestamp of the last update. This mechanism is highly inefficient and does not scale. E.g., maintaining the heartbeat information for 1,000 nodes/services would require 100 DB updates per second (just for the heartbeat). A much more lightweight heartbeat mechanism can be implemented using Memcached. 
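For the memcached-service-heartbeat entry above, a minimal sketch of the idea using the python-memcached client; the key format and TTL value are assumptions chosen for illustration.

# Sketch: replace per-heartbeat DB UPDATEs with memcached keys that expire.
# A service is considered "up" while its key is still present.
import memcache

HEARTBEAT_TTL = 30  # seconds; assumed value, roughly three missed 10s heartbeats

mc = memcache.Client(['127.0.0.1:11211'])

def report_state(service_host, service_binary):
    key = 'heartbeat:%s:%s' % (service_host, service_binary)
    mc.set(key, 'alive', time=HEARTBEAT_TTL)   # cheap in-memory write, no DB

def service_is_up(service_host, service_binary):
    key = 'heartbeat:%s:%s' % (service_host, service_binary)
    return mc.get(key) is not None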
Project: neutron Series: grizzly Blueprint: metadata-non-routed Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/metadata-non-routed Spec URL: None This is an extension to mark's original metadata for overlapping IPs patch. The idea is to run the metadata proxy in the dhcp namespace, and inject routes to the VMs via DHCP to have them send traffic to 169.254.169.254 via the DHCP server address. Project: neutron Series: grizzly Blueprint: metadata-overlapping-networks Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/metadata-overlapping-networks Spec URL: https://docs.google.com/document/d/1wixS-CrHe37Fv4my9MxUVeQKDb3mUJJCwPnireQ1gn8/edit When an OpenStack instance has multiple networks using the same IP address space, the metadata service does not function as expected. Project: ceilometer Series: grizzly Blueprint: meters-discovery Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/meters-discovery Spec URL: None Implement /meters to make discovery "nicer" from the client. The point of this API is to make discovery easier, especially for a casual user, so you don't have to dump all the raw samples out just to see what is there. Instead, "ceilometer meter-list" will GET /v1/meters (or /{proj|user|source}/{id}/meters) and this will just return a description (name, type, resource, user, etc.) of the available meters, not each sample point. After this you will probably go and look at the samples that you are actually interested in. It is a kind of dynamic version of doc/source/measurements.rst Project: heat Series: grizzly Blueprint: metsrv-remove Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/metsrv-remove Spec URL: https://github.com/heat-api/heat/wiki/Cloudwatch-Architecture-rework Work is underway to remove the (unauthenticated) heat-metadata server, so that all metadata, waitcondition and metric interaction with the in-instance agents (cfn-hup, cfn-signal and cfn-push-stats) happens via the (authenticated) cloudformation and cloudwatch APIs. This is part of the originally discussed cloudwatch architecture rework. Project: neutron Series: grizzly Blueprint: midonet-quantum-plugin Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/midonet-quantum-plugin Spec URL: http://wiki.openstack.org/Spec-QuantumMidoNetPlugin Quantum plugin to enable MidoNet, Midokura's L2, L3 and L4 virtual networking solution, in Quantum. Project: horizon Series: grizzly Blueprint: migrate-instance Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/migrate-instance Spec URL: None In the syspanel, I would expect the ability to migrate a single server. Steps: 0) log in as admin 1) go to syspanel 2) go to instances and find the instance you want to migrate 3) click migrate --- The novaclient library exposes the API: $ nova help migrate usage: nova migrate Migrate a server. Positional arguments: Name or ID of server. Project: nova Series: grizzly Blueprint: migration-testing-with-data Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/migration-testing-with-data Spec URL: None The summit session identified the need to do migration tests with more than an empty database to catch consistency issues. 
Migration tests should insert sample data into the database to make sure that data is not lost or corrupted and that the migrations succeed. Project: ceilometer Series: grizzly Blueprint: move-listener-framework-oslo Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/move-listener-framework-oslo Spec URL: http://wiki.openstack.org/Ceilometer/blueprints/move-listener-framework-oslo Move the listener framework to oslo for reuse by horizon and other projects (requested by horizon). Project: nova Series: grizzly Blueprint: multi-boot-instance-naming Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/multi-boot-instance-naming Spec URL: None Based on this bug https://bugs.launchpad.net/nova/+bug/1054212: When creating more than one instance in the scope of a single API call, Nova should automatically do something to make sure that the host names are unique. Not doing so effectively makes the min/max options useless for anyone who wants to add their VMs into a DNS domain. Project: ceilometer Series: grizzly Blueprint: multi-dimensions Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/multi-dimensions Spec URL: http://wiki.openstack.org/Ceilometer/blueprints/multi-dimensions In order to be able to perform smart and fast aggregation in the API, we need to be able to perform queries that aggregate counters based on additional key/value pairs beyond the simple tenant/user/resource triplet. This spec proposes handling metadata in a new and different way that would allow us to solve this. Project: keystone Series: grizzly Blueprint: multi-factor-authn Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/multi-factor-authn Spec URL: http://etherpad.openstack.org/FolsomMultifactorAuth BigCo has a big OpenStack private cloud. BigCo has an R&D division with resources in a separate tenant from other BigCo resources. To gain access to BigCo R&D resources, an employee needs to provide more than just one set of credentials. R&D employees that only supply one set of credentials can still access BigCo non-R&D resources. The additional sets of credentials needed to access R&D resources could involve a hardware token, push-notification approval, SMS texts with PINs, or even voice approval. As an enterprise using OpenStack, I would expect Keystone to support the notion of a half-token - a token that isn't fully authenticated but could still allow access to some services/resources that only require the credentials I supplied. As an enterprise using OpenStack, I would expect Keystone to support APIs that allow me to configure how many (and what form of) credentials are required to access specific tenants, with the ability to exclude some users from the tenant configuration, so that I can manage MFA at a granular level without breaking service-like users. As an implementer of OpenStack, I would expect the MFA backend in Keystone to be configurable - allowing me to substitute and code against an MFA vendor of my choosing. As a tester of Keystone, I would expect the test scripts to test as much as they can without needing the implementation of a full-fledged MFA vendor. 
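To illustrate the migration-testing-with-data entry that opens this record, here is a hedged sketch of a test that seeds sample rows before a schema migration and verifies them afterwards; the upgrade() helper, table name, and columns are assumptions, and the SQLAlchemy calls use the 0.x-era API that matches this release cycle.

# Sketch only: seed sample data, run the schema migration, then assert the
# data survived. The upgrade() callable and table layout are assumptions.
import sqlalchemy as sa

def test_migration_preserves_data(engine, upgrade):
    meta = sa.MetaData(bind=engine)
    instances = sa.Table('instances', meta, autoload=True)

    # Insert sample data against the pre-migration schema.
    engine.execute(instances.insert().values(uuid='fake-uuid', hostname='vm1'))

    upgrade(engine)  # run the migration under test

    # Reload the (possibly altered) schema and verify nothing was lost.
    meta = sa.MetaData(bind=engine)
    instances = sa.Table('instances', meta, autoload=True)
    row = engine.execute(
        instances.select().where(instances.c.uuid == 'fake-uuid')).fetchone()
    assert row is not None and row.hostname == 'vm1'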
Project: ceilometer Series: grizzly Blueprint: multi-publisher Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/multi-publisher Spec URL: http://wiki.openstack.org/Ceilometer/blueprints/multi-publisher Allow the use of multiple publishers, not only the one targeting ceilometer-collector. Project: nova Series: grizzly Blueprint: multi-tenancy-aggregates Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/multi-tenancy-aggregates Spec URL: None Multi-tenancy isolation with aggregates. The goal is to schedule instances from specific tenants to selected aggregate(s). In some cases it is necessary to isolate instances from specific tenant(s), meaning that they can only be created on a set of hosts. To define the set of hosts we can use "aggregates". The idea is to create a new scheduler filter "AggregateMultiTenancyIsolation" that handles this use-case: if an aggregate has the metadata filter_tenant_id=, all hosts that are in the aggregate can only create instances from that tenant_id. A host can belong to different aggregates, so a host can create instances from different tenants if the different aggregates have defined the metadata filter_tenant_id=. If a host doesn't belong to any aggregate it can create instances from all tenants. Also, if a host belongs to aggregates that don't define the metadata filter_tenant_id it can create instances from all tenants. Using Availability Zones can't solve this problem because a host can only be in one availability zone; the "AggregateInstanceExtraSpecsFilter" filter doesn't help either, because it requires creating new and exclusive flavors for each tenant that needs isolation. Project: cinder Series: grizzly Blueprint: multi-volume-backends Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/multi-volume-backends Spec URL: http://wiki.openstack.org/Cinder/MultiVolumeBackend Allow managing multiple volume backends from a single volume manager. Right now there is a 1-1 mapping of manager to driver. This blueprint aims to provide support for a 1-n manager-driver mapping, whereby certain volume drivers that really don't depend on local host storage can take advantage of this to manage multiple backends without having to run multiple volume managers. The thought is to use the existing configuration sections to distinguish the various drivers to load for a single volume manager. The current limitation of multi backend is that there is one backend per volume_type. A volume_type must be set up, and it must also correspond to the flag set in each [backend]. See the example below. Project: cinder Series: grizzly Blueprint: name-attr-consistency Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/name-attr-consistency Spec URL: None Change the "display_name" attribute to "name" in the API for consistency with other services. Project: heat Series: grizzly Blueprint: native-rest-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/native-rest-api Spec URL: None Currently Heat supports an OpenStack RPC API and an AWS CloudFormation-compatible HTTP/XML-RPC API. Add an OpenStack REST API to allow access to Heat through the standard OpenStack mechanism. 
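For the multi-tenancy-aggregates entry above, a minimal sketch of the described filtering logic; this is not the merged nova filter, and the way aggregate metadata reaches the function is an assumption made to keep the example self-contained.

# Sketch of the AggregateMultiTenancyIsolation behaviour described above:
# a host in an aggregate with filter_tenant_id set only accepts instances
# from that tenant; hosts with no such metadata accept everyone.
def host_passes(host_aggregate_metadata_list, request_tenant_id):
    """host_aggregate_metadata_list: metadata dicts of the host's aggregates."""
    allowed_tenants = set()
    for metadata in host_aggregate_metadata_list:
        if 'filter_tenant_id' in metadata:
            allowed_tenants.add(metadata['filter_tenant_id'])
    if not allowed_tenants:
        return True            # unrestricted host
    return request_tenant_id in allowed_tenants

# Example: a host in one restricted aggregate
print(host_passes([{'filter_tenant_id': 'tenant-a'}], 'tenant-a'))  # True
print(host_passes([{'filter_tenant_id': 'tenant-a'}], 'tenant-b'))  # False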
Old bug: https://bugs.launchpad.net/heat/+bug/1072945 Project: neutron Series: grizzly Blueprint: nec-security-group Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/nec-security-group Spec URL: None Security group support is just a port of the security group support in the OVS plugin. It reuses both the plugin and agent sides of the OVS plugin support, including RPC, and adds some plugin-specific code. Port security extension support is tightly coupled with the security group extension to some degree, so if adding the port security extension is a small change, it will be included in this blueprint. The change is limited to the NEC plugin and does not affect others. * Scope: Same as the scope of the Security Group Extension * Use Cases: Same as the scope of the Security Group Extension (but limited to the iptables based implementation) * Implementation Overview: The implementation is just a port of the security group support in the OVS plugin. It reuses both the plugin and agent sides of the OVS plugin support, including RPC, and adds some plugin-specific code. * Data Model Changes: No data model changes; just add the NEC plugin to the list in the security group DB migration script. * Configuration variables: There may be a plugin-specific configuration option which enables/disables the quantum security group extension. It depends on the Nova VIF plugging implementation. * APIs: No change * Plugin Interface: No change Project: cinder Series: grizzly Blueprint: netapp-cluster-nfs-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/netapp-cluster-nfs-driver Spec URL: None Add support for NFS files stored on clustered ONTAP to be used as virtual block storage. The driver is an interface from OpenStack cinder to a clustered ONTAP storage system, managing NFS files on the NFS exports provided by the cluster storage so that they can be used as virtual block storage. Project: cinder Series: grizzly Blueprint: netapp-direct-volume-drivers Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/netapp-direct-volume-drivers Spec URL: None The current NetApp drivers for iSCSI and NFS require NetApp management software like OnCommand DFM etc. to be installed as a mid-layer interface to do management operations on NetApp storage. The direct drivers provide an alternate mechanism via the NetApp API (ONTAPI) to do storage management operations without the need for any additional management software between OpenStack and NetApp storage. The idea is to implement direct-to-storage drivers achieving the same functionality as the already submitted NetApp drivers. Project: nova Series: grizzly Blueprint: network-adapter-hotplug Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/network-adapter-hotplug Spec URL: None It is useful for users of OpenStack instances to be able to plug/unplug a VIF at any time: 1. Create a VIF which has an IP and MAC. 2. Associate it with the specified instance. We need to add an API to nova to execute the plug/unplug action, and add the option to use this feature in novaclient. Project: oslo Series: grizzly Blueprint: new-policy-language Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/new-policy-language Spec URL: http://wiki.openstack.org/Openstack-Common/Fine-Grained-Policy Add a new policy language with "and" and "or" operators to replace the old list-of-lists syntax. New 'not', '@' and '!' 
operators are also added. This new language will enable us to add more advanced features than the old syntax would have allowed. Backwards-compatibility support for the old list-of-lists syntax is retained. Project: nova Series: grizzly Blueprint: no-db-compute Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/no-db-compute Spec URL: None Make all of the necessary changes so that nova-compute no longer has direct access to the database. Project: nova Series: grizzly Blueprint: no-db-compute-manager Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/no-db-compute-manager Spec URL: None The compute manager should not make any direct database calls, but rely on the conductor. Project: nova Series: grizzly Blueprint: no-db-virt Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/no-db-virt Spec URL: None Remove any and all direct database queries from the nova/virt drivers in preparation for bp no-db-compute. Project: nova Series: grizzly Blueprint: non-blocking-db Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/non-blocking-db Spec URL: None Add eventlet db_pool use for mysql. This adds the use of eventlet's db_pool module so that we can make mysql calls without blocking the whole process. New config options are introduced: sql_dbpool_enable -- enables the use of eventlet's db_pool; sql_min_pool_size -- sets the minimum number of SQL connections. The default for sql_dbpool_enable is False for now, so there are no forced behavior changes for those using mysql. sql_min_pool_size defaults to 1 to match the behavior when not using db_pool. Adds a new test module for our sqlalchemy code, testing this new option as much as is possible without requiring a mysql server to be running. DocImpact Change-Id: I99833f447df05c1beba5a3925b201dfccca72cae Project: keystone Series: grizzly Blueprint: normalize-sql Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/normalize-sql Spec URL: None We currently serialize an object blob as JSON inside the SQL store. This has led to several bugs. The user table at a minimum has user {"password": "" "enabled": true, "email": "
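To illustrate the new-policy-language entry that concludes at the start of this record, a small hedged example contrasting the two syntaxes; the rule names and roles are made up, and the exact grammar (operator precedence, '@' and '!' semantics) is defined by the linked wiki spec.

# Old list-of-lists syntax: the outer list is OR'd, each inner list is AND'd.
old_style = {
    "example:create": [["role:admin"], ["role:member", "project_id:%(project_id)s"]],
}

# Roughly equivalent rule in the new expression language
# (assuming "and" binds tighter than "or", as in most such grammars).
new_style = {
    "example:create": "role:admin or role:member and project_id:%(project_id)s",
}

# The new "not" operator allows negation; an assumed example:
deny_readonly = {"example:delete": "role:admin and not role:readonly"}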
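And for the non-blocking-db entry above, a rough standalone sketch of what routing MySQL access through eventlet's db_pool looks like; the connection parameters are placeholders, and all of the nova/oslo plumbing around sql_dbpool_enable is omitted.

# Sketch: use eventlet's db_pool so blocking MySQLdb calls do not stall the
# whole green-threaded process.
import eventlet
eventlet.monkey_patch()

import MySQLdb
from eventlet import db_pool

# min_size mirrors the sql_min_pool_size default of 1 described above.
pool = db_pool.ConnectionPool(MySQLdb, min_size=1,
                              host='127.0.0.1', user='nova',
                              passwd='secret', db='nova')

conn = pool.get()
try:
    cursor = conn.cursor()
    cursor.execute('SELECT 1')
    print(cursor.fetchall())
finally:
    pool.put(conn)  # return the connection to the pool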