Project: swift Series: grizzly Blueprint: account-quota Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/account-quota Spec URL: None Limit an account to a certain number of bytes via the X-Account-Meta-Quota-Bytes header. The header can be updated only by the reseller account (or a user with the ResellerAdmin role in Keystone). A value of -1 means unlimited; no header means unlimited as well. The blocking logic works like this pseudocode:
    new_size = total_bytes + (client_request_header('content_length') or 0)
    quota = account_header('X-Account-Meta-Quota-Bytes') or -1
    if 0 <= quota < new_size:
        return HTTPRequestEntityTooLarge()
Project: cinder Series: grizzly Blueprint: add-cloning-support-to-cinder Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/add-cloning-support-to-cinder Spec URL: None Add the ability to call clone_volume in Cinder, resulting in a new volume object that is ready to be attached and used and is, of course, a "clone" of the originating volume. The current implementation is an additional option to volume_create. This looks like: 'cinder create --src-volume xxxxx 10' The process for the base LVM case is: 1. cinder.volume.api calls create on the same host as the existing/src volume 2. cinder.volume.manager.create calls driver.create_cloned_volume() 3. cinder.volume.driver.create_cloned_volume a. creates a temporary snapshot of the src volume (so we don't require detach) b. copies the volume from that snapshot, the same as we do for create_volume_from_snapshot c. deletes the temporary snapshot Initially this would be implemented to work only across common storage types (LVM, Ceph, etc.), with the potential of being expanded in the future. In addition, another blueprint should be submitted to add to snapshot capabilities. This would add a method like "volume_revert(volume, snapshot_id)" that would restore a volume to its state at the time the provided snapshot was taken.
Project: horizon Series: grizzly Blueprint: add-security-group-to-instance Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/add-security-group-to-instance Spec URL: None The novaclient CLI supports adding/removing security groups on existing instances. It would be useful for Horizon to support this too.
Project: swift Series: grizzly Blueprint: adjustable-replica-counts Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/adjustable-replica-counts Spec URL: None Example: $ swift-ring-builder account.builder set_replicas 4 $ swift-ring-builder rebalance This is a prerequisite for supporting globally-distributed clusters, as operators of such clusters will probably want at least as many replicas as they have regions. Therefore, adding a region requires adding a replica. Similarly, removing a region lets an operator remove a replica and save some money on disks. In order to not hose clusters with lots of data, swift-ring-builder now allows for setting of fractional replicas. Thus, one can gradually increase the replica count at a rate that does not adversely affect cluster performance. Example: $ swift-ring-builder object.builder set_replicas 3.01 $ swift-ring-builder object.builder rebalance $ swift-ring-builder object.builder set_replicas 3.02 $ swift-ring-builder object.builder rebalance ... Obviously, fractional replicas are nonsensical for a single partition.
A fractional replica count applies to the whole ring, not to any individual partition, and indicates the average number of replicas of each partition. For example, a replica count of 3.2 means that 20% of partitions have 4 replicas and 80% have 3 replicas (see the sketch below). Changes do not take effect until after the ring is rebalanced. Thus, if you mean to go from 3 replicas to 3.01 but you accidentally type 2.01, no data is lost.
Project: nova Series: grizzly Blueprint: aggregate-based-availability-zones Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/aggregate-based-availability-zones Spec URL: None Allow for setting a compute node's availability zone via the API by implementing Availability Zones internally as General Host Aggregates. This is a continuation of work started in Folsom (https://blueprints.launchpad.net/nova/+spec/general-host-aggregates).
Project: cinder Series: grizzly Blueprint: api-pagination Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/api-pagination Spec URL: None Add markers and pagination in line with the other OpenStack projects.
Project: neutron Series: grizzly Blueprint: api-sec-grp-id-quantum-test Design: Obsolete Lifecycle: Complete Impl: Unknown Link: https://blueprints.launchpad.net/neutron/+spec/api-sec-grp-id-quantum-test Spec URL: None Test to check: - create quantum security group - delete quantum security group by checking the GET/DELETE for the API call v1.1/{tenant_id}/security-groups/{security_group_id}
Project: neutron Series: grizzly Blueprint: api-sec-grp-quantum-test Design: Obsolete Lifecycle: Complete Impl: Unknown Link: https://blueprints.launchpad.net/neutron/+spec/api-sec-grp-quantum-test Spec URL: None API test to check the following operations: List Security Groups, Create Security Group, to test the GET/POST API call v1.1/{tenant_id}/security-gr
Project: neutron Series: grizzly Blueprint: api-sec-grp-rules-quantum-test Design: Obsolete Lifecycle: Complete Impl: Unknown Link: https://blueprints.launchpad.net/neutron/+spec/api-sec-grp-rules-quantum-test Spec URL: None Test to check: - Create security group rule - Delete security group rule by checking the POST/DELETE for v1.1/{tenant_id}/security-group-rules
Project: nova Series: grizzly Blueprint: api-tests-speed Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/api-tests-speed Spec URL: None API tests run slowly because they repeat unnecessary initialisation for every class / test. Limiting both the number of loaded routes and the number of enabled extensions to the bare minimum should improve the situation a lot (6-7x improvement of the first test execution time).
Project: nova Series: grizzly Blueprint: apis-for-nova-manage Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/apis-for-nova-manage Spec URL: None The general direction is that nova-manage should be deprecated for everything except the db-sync command. There are, however, pieces of functionality needed for OpenStack, like IP pool creation, which only exist in nova-manage. That should be moved out into the API by enhancing existing extensions so that nova-manage no longer contains functions which don't exist anywhere else.
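To make the fractional-replica arithmetic from the adjustable-replica-counts entry above concrete, here is a minimal sketch; the function name and the rounding of the partition split are illustrative and not Swift's actual ring-builder code:

    def replica_distribution(replica_count, partition_count):
        """Split partitions between floor(replica_count) and floor+1 copies.

        e.g. replica_count=3.2 over 1024 partitions -> roughly 80% of the
        partitions keep 3 copies and 20% get 4, for a ring-wide average of 3.2.
        """
        base = int(replica_count)                 # 3 for a count of 3.2
        extra_fraction = replica_count - base     # 0.2 for a count of 3.2
        extra = int(round(extra_fraction * partition_count))
        return partition_count - extra, extra

    plain, extra = replica_distribution(3.2, 1024)
    print(plain, "partitions with 3 replicas,", extra, "with 4")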
Project: horizon Series: grizzly Blueprint: app-project-proper-separation Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/app-project-proper-separation Spec URL: None Horizon is a framework for building dashboards (and a Django app). It should be completely agnostic to what project is built on top of it. By comparison, OpenStack Dashboard should contain 100% of the OpenStack Dashboard-related code. The main piece that needs to be moved is the entire "dashboards" directory in the Horizon module.
Project: neutron Series: grizzly Blueprint: argparse-based-cfg Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/argparse-based-cfg Spec URL: None Update to the latest openstack-common code that supports argparse-based cfg.
Project: horizon Series: grizzly Blueprint: associate-ip-one-click Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/associate-ip-one-click Spec URL: None Up to the limit of available floating IPs (either in the pool or in the quota), a user should just be able to "one-click" assign a floating IP to an instance. This *should* be a configurable switch, though, since in some settings it may be desirable to let a user select the pool and/or specific IP that should be assigned.
Project: nova Series: grizzly Blueprint: auto-cpu-pinning Design: Discussion Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/auto-cpu-pinning Spec URL: None A tool for automatically pinning each running virtual CPU to a physical one in the most efficient way, balancing load across sockets/cores and maximizing cache sharing/minimizing cache misses. Ideally it would be able to run on demand, as a periodic job, or be triggered by events on the host (VM spawn/destroy).
Project: heat Series: grizzly Blueprint: autoscale-update-stack Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/autoscale-update-stack Spec URL: None Implement UpdateStack support for AutoScalingGroup - currently any attempt to update results in replacement, so we should align better with the AWS UpdateStack behavior for this resource. At a minimum it should be possible to update the properties which affect the scaling (MinSize, MaxSize, Cooldown) without replacing the group (and thus all the instances).
Project: heat Series: grizzly Blueprint: aws-cloudformation-init Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/aws-cloudformation-init Spec URL: http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html#aws-resource-init-commands It looks like we currently have support for files, packages, services, and sources. Remaining are configsets, groups, commands, users, and return value.
Project: nova Series: grizzly Blueprint: backing-file-options Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/backing-file-options Spec URL: None When a qcow2 image has a backing file and a child is being based on that backing file, ensure that it picks up certain properties from that backing file (right now just cluster_size, preallocation state and encryption mode).
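As a rough illustration of the backing-file-options idea above (this is not the actual Nova code; the helper names and the exact set of copied options are assumptions), the child image could inherit cluster_size by parsing qemu-img info output and passing it back to qemu-img create:

    import re
    import subprocess

    def backing_file_options(backing_path):
        # Pull cluster_size out of `qemu-img info`; other properties such as
        # preallocation or encryption could be handled the same way.
        info = subprocess.check_output(['qemu-img', 'info', backing_path]).decode()
        match = re.search(r'cluster_size:\s*(\d+)', info)
        return {'cluster_size': match.group(1)} if match else {}

    def create_child(backing_path, child_path, size):
        opts = backing_file_options(backing_path)
        opts['backing_file'] = backing_path
        opt_str = ','.join('%s=%s' % item for item in opts.items())
        subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                               '-o', opt_str, child_path, str(size)])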
Project: neutron Series: grizzly Blueprint: brocade-quantum-plugin Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/brocade-quantum-plugin Spec URL: http://wiki.openstack.org/brocade-quantum-plugin Plugin for orchestration of a Brocade VCS cluster of switches for L2 networks.
Project: swift Series: grizzly Blueprint: bulk-midleware Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/bulk-midleware Spec URL: None Adds bulk delete functionality. Adds the ability to upload a tarball and have its contents stored as individual objects.
Project: cinder Series: grizzly Blueprint: cinder-apiv2 Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/cinder-apiv2 Spec URL: http://wiki.openstack.org/CinderAPIv2 After talking with the rest of the Cinder team, there is a general consensus to follow how Glance handles API versioning. The changes to the cinder API require us to split off into another version due to interface and response changes. The purpose of this blueprint is to record the API improvements discussed at the Grizzly design summit to create the v2.0 cinder API. The etherpad from the original discussion is available at: https://etherpad.openstack.org/grizzly-cinder-api2-0
General Requirements
====================
1. The current v1.0 cinder API will still be made available for backwards compatibility, but marked and documented as deprecated.
2. The v2.0 API will start with the current v1.0 API, updated with the new features.
3. The v2.0 API will have a version prefix of /v2/
Project: cinder Series: grizzly Blueprint: cinder-common-rootwrap Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/cinder-common-rootwrap Spec URL: None Rootwrap is moving to openstack-common. Once this is completed, Cinder should make use of the openstack.common version of rootwrap.
Project: cinder Series: grizzly Blueprint: cinder-hosts-extension Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/cinder-hosts-extension Spec URL: None Currently Cinder has no service management, not even things like cinder-manage service list. I started looking at a service extension, but we should make this consistent with what Nova has and have a hosts extension. This is an admin extension to the OpenStack API implementing methods similar to those which Nova offers. At the very least, having a status report of Cinder services and what nodes they're running on would be very helpful. This could be expanded to cover a number of things in the future as well: check iSCSI targets? Verify connections?
Project: neutron Series: grizzly Blueprint: cisco-plugin-cleanup Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/cisco-plugin-cleanup Spec URL: None Scope: Removal of unused code from the Cisco plugin. Use Cases: Quantum with the Cisco plugin. Implementation Overview: There is a bit of old unused code in some of the Cisco plugin classes. This needs to be removed to make the code more readable and easier to understand/debug. Support for the ucs plugin also needs to be removed. Data Model Changes: n/a Configuration variables: This will remove all configuration values dealing with the ucs/vm-fex plugin.
APIs: n/a Plugin Interface: n/a Required Plugin support: n/a Dependencies: n/a CLI Requirements: n/a Horizon Requirements: n/a Usage Example: n/a Test Cases: Remove test cases no longer required.
Project: neutron Series: grizzly Blueprint: cisco-plugin-enhancements Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/cisco-plugin-enhancements Spec URL: None Scope: A few enhancements to the Cisco Nexus plugin to support multiple switches and some intelligence in the plugin when trunking VLANs on those switches. Use Cases: Quantum with the Cisco plugin (nexus subplugin). Implementation Overview: The plugin communicates with the Nova API when an instance is networked during creation and grabs the host that the instance is running on. It cross-references that host with a topology config that tells the plugin what port/switch that host is connected to, and trunks the VLAN on that switch/port. Data Model Changes: Two fields added to model NexusPortBinding: switch_ip = Column(String(255)) instance_id = Column(String(255)) Configuration variables: nexus.ini config changed to:
[SWITCH]
# IP address of the switch
[[172.18.112.37]]
# Hostname of the node
[[[asomya-controller.cisco.com]]]
# Port this node is connected to on the nexus switch
ports=1/1
# Hostname of the node
[[[asomya-compute.cisco.com]]]
# Port this node is connected to on the nexus switch
ports=1/2
# Port number where SSH will be running on the Nexus switch, e.g. 22 (default)
[[[ssh_port]]]
ssh_port=22
credentials.ini should include keystone credentials and endpoint:
# Provide credentials and endpoint
# for keystone here
[keystone]
auth_url=http://172.18.112.47:35357/v2.0
username=admin
password=abcd
APIs: n/a Plugin Interface: n/a Required Plugin support: The cisco plugin configuration should include the aforementioned values. Dependencies: nova-api, keystone CLI Requirements: n/a Horizon Requirements: n/a Usage Example: n/a Test Cases: n/a
Project: nova Series: grizzly Blueprint: compute-driver-events Design: Pending Approval Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/compute-driver-events Spec URL: http://wiki.openstack.org/ComputeDriverEvents This provides infrastructure for compute drivers to emit asynchronous events to report on important changes to the state of virtual machine instances. The compute manager will be adapted to make use of these events, as an alternative to running a periodic task to reconcile events. The libvirt driver is enhanced to emit such events. The result is that Nova will immediately see when any virtual instance is stopped, instead of suffering (up to) a 10-minute delay (see the sketch below).
Project: swift Series: grizzly Blueprint: config-eventlet-debug Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/config-eventlet-debug Spec URL: None Add a config option to turn on/off eventlet debug messages.
Project: swift Series: grizzly Blueprint: configurable-constraints Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/configurable-constraints Spec URL: None Allow cluster constraints (maximum object size, name limits, etc.) to be settable via config.
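To illustrate the kind of mechanism the compute-driver-events entry above describes, here is a minimal sketch built on the libvirt-python lifecycle-event API; emit_instance_event is a hypothetical stand-in for whatever the Nova compute manager actually consumes:

    import libvirt

    def emit_instance_event(uuid, name):
        # Hypothetical hook: in Nova this would hand the event to the compute manager.
        print("instance %s lifecycle event: %s" % (uuid, name))

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # Translate libvirt lifecycle events into driver-level events.
        if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
            emit_instance_event(dom.UUIDString(), "STOPPED")
        elif event == libvirt.VIR_DOMAIN_EVENT_STARTED:
            emit_instance_event(dom.UUIDString(), "STARTED")

    libvirt.virEventRegisterDefaultImpl()   # must run before opening the connection
    conn = libvirt.openReadOnly("qemu:///system")
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                lifecycle_cb, None)
    while True:                             # event loop; a driver would run this in a dedicated thread
        libvirt.virEventRunDefaultImpl()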
Project: cinder Series: grizzly Blueprint: coraid-volume-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/coraid-volume-driver Spec URL: None A volume driver will be provided to support CORAID hardware storage appliances and AoE (using a software initiator released under GPL). The following operations will be supported: --Volume Creation --Volume Deletion --Volume Attach --Volume Detach --Snapshot Creation --Snapshot Deletion --Create Volume from Snapshot --Volume Stats Volume types and EtherCloud automation features will be added to provide a fully automated provisioning workflow and a scale-out SAN: http://www.coraid.com/products/management_automation The AoE initiator is supported on all Linux hosts (released under GPL); the ATA over Ethernet (AoE) Linux driver for all 3.x and 2.6 kernels is available here: http://support.coraid.com/support/linux/ and ftp://ftp.alyseo.com/pub/partners/Coraid/Drivers/Linux/ The driver will only work when operating on EtherCloud ESM, VSX and SRX (Coraid hardware): http://www.coraid.com/products/scale_out_san Note: Linux software targets (vblade, kvblade, ggaoed...) are not supported.
Project: swift Series: grizzly Blueprint: cors-actual-requests Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/cors-actual-requests Spec URL: None Properly handle the "actual requests" made by a CORS client after the preflight request has been made.
Project: swift Series: grizzly Blueprint: cors-support Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/cors-support Spec URL: None Functionality to support CORS.
Project: nova Series: grizzly Blueprint: coverage-extension Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/coverage-extension Spec URL: None To better understand how Tempest and other external test tools exercise Nova, we should have a way to enable coverage reporting from within Nova services by an external program for test runs. Once available in Nova, Tempest and the devstack gate will be enhanced to make this a nightly runnable report.
Project: swift Series: grizzly Blueprint: cross-tenant-acls Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/cross-tenant-acls Spec URL: None Need the flexibility to support ":"-style (cross-tenant) ACL entries, to improve usability.
Project: swift Series: grizzly Blueprint: custom-log-handlers Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/custom-log-handlers Spec URL: None Support setting custom log handlers. Settable in a config file, this will allow integration with external log tools.
Project: nova Series: grizzly Blueprint: db-api-cleanup Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/db-api-cleanup Spec URL: None The database access layer has grown organically and could use some housekeeping. For both nova/db/api and nova/db/sqlalchemy/api, let's clean up the API itself:
* identify and remove unused methods
* consolidate duplicate methods when possible
* ensure SQLAlchemy objects are not leaking out of the API
* ensure related methods are grouped together and named consistently
Project: nova Series: grizzly Blueprint: db-archiving Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/db-archiving Spec URL: http://etherpad.openstack.org/DatabaseArchiving The outcome of the talk at the Grizzly summit was a general agreement that: * table bloat with deleted records is bad for performance * some deployers may want to keep deleted records around for various reasons * some processes may rely on recently-deleted records There seemed to be several ways to deal with this, some short-term and some long-term: 1. an event/cron job that moves deleted=1 records to a shadow table 2. an on_delete trigger that moves records to a shadow table 3. an amqp message to broadcast deleted records The current plan is to move forward with (1).
Project: swift Series: grizzly Blueprint: db-audit-speed-limit Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/db-audit-speed-limit Spec URL: None Add a config parameter to limit how fast the DB auditors work.
Project: nova Series: grizzly Blueprint: db-unique-keys Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/db-unique-keys Spec URL: None * change soft delete to `deleted`=`id` instead of `deleted`=1. * Add unique indexes on (`col`, `deleted`) for critical tables.
Project: nova Series: grizzly Blueprint: default-rules-for-default-security-group Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/default-rules-for-default-security-group Spec URL: None Currently, no rules are added when the default security group is created. Thus instances can only be accessed by instances from the same group, as long as you don't modify the default security group or use another one. Nova should provide a hook mechanism to add customized rules when creating default security groups, so that we don't have to remind users to modify the default security group the first time they create instances. HP Cloud, which is built on OpenStack, now permits instances to be SSHed to or pinged in the default security group. This should be the case here as well.
Project: nova Series: grizzly Blueprint: delete-nova-volume Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/delete-nova-volume Spec URL: None Nova-volume was deprecated in Folsom. We need it gone in Grizzly!
Project: swift Series: grizzly Blueprint: deterministic-ring-serialization Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/deterministic-ring-serialization Spec URL: None A ring with the same data in it should produce the same serialized output.
Project: nova Series: grizzly Blueprint: direct-file-copy Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/direct-file-copy Spec URL: None Glance v2 allows an administrator to enable direct_url metadata to be delivered to the glance client. Under the right circumstances this information can be used to more efficiently get the image. Nova-compute could benefit from this by invoking a copy when it knows it has access to the same file system as glance. A configuration option would enable this behavior.
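A minimal sketch of the direct-file-copy idea above, assuming the image exposes a file:// direct_url when glance-api sets show_image_direct_url; the download_via_http fallback and the allow_direct_copy option name are hypothetical:

    import shutil
    try:
        from urllib.parse import urlparse   # Python 3
    except ImportError:
        from urlparse import urlparse       # Python 2 (Grizzly-era)

    def fetch_image(image, dest_path, allow_direct_copy=True):
        # Copy straight from Glance's backing store when it is a locally visible
        # file; otherwise fall back to the normal HTTP download.
        direct_url = getattr(image, 'direct_url', None)
        if allow_direct_copy and direct_url:
            parsed = urlparse(direct_url)
            if parsed.scheme == 'file':
                shutil.copyfile(parsed.path, dest_path)
                return
        download_via_http(image, dest_path)  # hypothetical fallback helper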
Project: swift Series: grizzly Blueprint: dispersion-report-options Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/dispersion-report-options Spec URL: None Add two optional flags that let you limit swift-dispersion-report to only reporting on containers OR objects.
Project: swift Series: grizzly Blueprint: drive-audit-log-rotation Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/drive-audit-log-rotation Spec URL: None Handle log rotation in swift-drive-audit.
Project: cinder Series: grizzly Blueprint: driver-cleanup Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/driver-cleanup Spec URL: None Clean up the volume drivers and have only a single driver per file. List of files needing to be cleaned up: - driver.py (VolumeDriver, ISCSIDriver, FakeISCSIDriver, RBDDriver, SheepDogDriver, LoggingVolumeDriver (probably needs to be deleted)) - netapp.py (NetAppISCSIDriver, NetAppCmodeISCSIDriver) - san.py (SanISCSIDriver, SolarisISCSIDriver, HpSanISCSIDriver) It might also be good to extract the LVMDriver from the base VolumeDriver. Planning on putting all the drivers under cinder/volume/drivers/* Look at options to get away from using module names for drivers. Look at how glance is doing it, maybe?
Project: cinder Series: grizzly Blueprint: emc-volume-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/emc-volume-driver Spec URL: http://wiki.openstack.org/Cinder/EMCVolumeDriver A volume driver will be provided to support EMC storage on the backend. It uses EMC's SMI-S software to communicate with VNX or VMAX/VMAXe arrays.
Project: nova Series: grizzly Blueprint: extra-specs-in-nova-client Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/extra-specs-in-nova-client Spec URL: None Add support to python-novaclient so that it can list/set/unset extra_specs (see the sketch below).
Project: cinder Series: grizzly Blueprint: fibre-channel-block-storage Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/fibre-channel-block-storage Spec URL: http://wiki.openstack.org/Cinder/FibreChannelSupport Currently block storage can be attached to hosts via iSCSI. Adding support for block storage attaching to hosts via Fibre Channel SANs as well.
Project: horizon Series: grizzly Blueprint: fix-legacy-dashboard-names Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/fix-legacy-dashboard-names Spec URL: None The names "nova" and "syspanel" in the code exist only for legacy reasons and confuse new contributors. They should be renamed to reflect their proper "project" and "admin" names respectively.
Project: horizon Series: grizzly Blueprint: flavor-extra-specs Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/flavor-extra-specs Spec URL: None Nova supports "extra specs" on flavors which allow for more intelligent scheduling and other interesting use cases. Horizon should provide an interface to that data.
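As a rough illustration of the extra-specs-in-nova-client entry above, client-side usage could look like the following; the flavor key helpers shown here are an assumption about the eventual python-novaclient interface, and the credentials and spec values are placeholders:

    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'admin', 'http://keystone:5000/v2.0/')

    flavor = nova.flavors.find(name='m1.small')
    flavor.set_keys({'storage': 'ssd'})   # set an extra spec
    print(flavor.get_keys())              # list extra specs
    flavor.unset_keys(['storage'])        # unset it again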
Project: swift Series: grizzly Blueprint: future-at-risk-tools Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/future-at-risk-tools Spec URL: None When a drive fails, all ring partitions that were on that drive immediately have only 2 copies left in the cluster (assuming 3 replicas here). As the replicators on the servers containing those other copies get to it, they will make an extra handoff copy to get back to 3 copies in the cluster. Replication cycles can take a while though, so we'd like to have some tools to make this happen faster. 1) A tool that will list the ring partitions for a given device, or a list of common ring partitions for a set of devices (for multi-device failures). 2) A tool to immediately start extra replication of a list of partitions from 1).
Project: nova Series: grizzly Blueprint: general-bare-metal-provisioning-framework Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/general-bare-metal-provisioning-framework Spec URL: http://etherpad.openstack.org/N8NsHk447X We have already implemented bare-metal provisioning of compute nodes for Tilera TILEmpower 64-core tiled processor systems. Now we (USC/ISI + NTT DOCOMO + VirtualTech Japan Inc.) want to propose a general bare-metal provisioning framework to support (1) PXE and non-PXE (Tilera) provisioning with a bare-metal DB (Review#1) (2) Architecture-specific provisioning entity (Review#1) (3) Fault tolerance of bare-metal nodes (Review#2) (4) OpenFlow related stuff (Review#3) http://wiki.openstack.org/GeneralBareMetalProvisioningFramework http://etherpad.openstack.org/FolsomBareMetalCloud
Project: cinder Series: grizzly Blueprint: generic-iscsi-copy-vol-image Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/generic-iscsi-copy-vol-image Spec URL: None Implements a generic version of copy_volume_to_image and copy_image_to_volume for iSCSI drivers.
Project: nova Series: grizzly Blueprint: get-password Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/get-password Spec URL: None Some guests need a password in order to be used. We need a secure way to generate an encrypted password and let the user retrieve it securely. Although we can do this using the console and an init script [1], it would be much nicer to have support in the API for such a thing. The high-level goal is: nova get-password (returns the password for the VM). The steps involved are: a) Add a post location to nova-api-metadata that the guest can send an encrypted password to (should be write-once) b) Add an extension to the API allowing get_password and reset_password (reset simply clears the value) c) Allow an alternative method for xenapi (the password could be encrypted and written by nova or the guest agent) d) Work with cloud-init so that it supports generating an encrypted password and posting it e) Work with the hyper-v team to make sure their cloud-init support includes it f) Add code to python-novaclient for decrypting the password (a rough sketch of this flow appears below) [1] https://gist.github.com/4008762
Project: glance Series: grizzly Blueprint: glance-api-v2-image-sharing Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/glance-api-v2-image-sharing Spec URL: None This is a placeholder blueprint to cover the work to be done in Grizzly to expose a to-be-determined image sharing API.
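A minimal sketch of the get-password flow from the entry above, assuming the guest posts an RSA-encrypted, base64-encoded password to a write-once metadata location and the user decrypts it locally with the keypair's private key; the URL and helper names are illustrative, not the final API:

    import base64
    import subprocess

    import requests

    METADATA_PASSWORD_URL = 'http://169.254.169.254/openstack/latest/password'  # illustrative

    def post_encrypted_password(encrypted_password):
        # Guest side (e.g. cloud-init): push the encrypted password to the
        # write-once metadata location.
        requests.post(METADATA_PASSWORD_URL, data=encrypted_password)

    def get_password(encrypted_b64, private_key_path):
        # User side (e.g. python-novaclient): decrypt with the private half of
        # the keypair the instance was booted with.
        ciphertext = base64.b64decode(encrypted_b64)
        proc = subprocess.Popen(['openssl', 'rsautl', '-decrypt',
                                 '-inkey', private_key_path],
                                stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        out, _ = proc.communicate(ciphertext)
        return out.decode()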
Project: glance Series: grizzly Blueprint: glance-common-image-properties Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/glance-common-image-properties Spec URL: http://wiki.openstack.org/glance-common-image-properties In order to make images more easily searchable in different OpenStack installations, it would be useful to add some common properties to Glance images that identify operating system characteristics.
Project: glance Series: grizzly Blueprint: glance-domain-logic-layer Design: Drafting Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/glance-domain-logic-layer Spec URL: None There is a lot of logic that lives in the db layer and the v1/v2 API layers that should really be handled in a single layer: policy checking, notifications, etc.
Project: glance Series: grizzly Blueprint: glance-simple-db-parity Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/glance-simple-db-parity Spec URL: None The 'simple' database driver is a subset of the 'sqlalchemy' driver, but it should really match 100%. Let's use this opportunity to beef up the db testing at the same time.
Project: cinder Series: grizzly Blueprint: glusterfs-support Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/glusterfs-support Spec URL: None Use GlusterFS as a volume backend. Currently Cinder allows use of an NFS export to host volume data. This blueprint aims to enable support for GlusterFS to be used the same way that NFS is used. Like the NFS driver, it supports basic volume operations, but not snapshots or clones. This means introducing a new Cinder GlusterFS driver, and Nova support for mounting it. Since the semantics of using Gluster are similar to NFS, the current plan is to have a base "Remote FS" driver class that both the NFS and GlusterFS drivers can use to share some common code. http://www.gluster.org/
Project: nova Series: grizzly Blueprint: grizzly-hyper-v-nova-compute Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/grizzly-hyper-v-nova-compute Spec URL: None The Folsom release saw the reintroduction of a Hyper-V compute driver. This blueprint is related to the new features under development targeting the Grizzly release.
Project: swift Series: grizzly Blueprint: healthcheck-failure-flag-file Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/healthcheck-failure-flag-file Spec URL: None Allow the existence of a file on disk to cause the healthcheck middleware to return a failure.
Project: neutron Series: grizzly Blueprint: high-available-quantum-queues-in-rabbitmq Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/high-available-quantum-queues-in-rabbitmq Spec URL: None Quantum now supports RabbitMQ, which can easily be started in active/passive mode with Pacemaker + DRBD. It would be interesting to integrate the active/active feature and to declare the queues with an x-ha-policy entry. It would be nice to add a config entry to be able to declare the queues in that way.
The code is inspired by Nova: https://review.openstack.org/#/c/13665/ (by Eugene Kirpichov) Code here: https://review.openstack.org/#/c/13760/ RabbitMQ HA page: http://www.rabbitmq.com/ha.html
Project: nova Series: grizzly Blueprint: host-api-prep-for-cells Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/host-api-prep-for-cells Spec URL: None Moves logic from nova/api/openstack/compute/contrib/hosts.py into nova/compute/api.py. This is in preparation for cells, which provides its own compute API and has to proxy some calls to child cells.
Project: cinder Series: grizzly Blueprint: hp3par-volume-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/hp3par-volume-driver Spec URL: None Add a cinder volume driver to support the HP 3PAR array. This is an iSCSI driver. It should support * Volume Creation * Volume Deletion * Snapshot Creation * Snapshot Deletion * Create Volume from Snapshot * Volume Attach * Volume Detach. Once Volume Types are ironed out, the driver should use the volume type metadata for volume creation. 3PAR arrays have the ability to create volumes on different Common Provisioning Groups (CPGs). CPGs can specify the RAID level for volumes created on that CPG. Cinder Volume Types should be able to be mapped to different CPG types to support the creation of volumes with different RAID levels.
Project: cinder Series: grizzly Blueprint: huawei-volume-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/huawei-volume-driver Spec URL: None A volume driver will be provided to support HUAWEI storage. This is an iSCSI driver. The following operations will be supported on OceanStor T series V100 and Dorado series arrays: --Volume Creation --Volume Deletion --Volume Attach --Volume Detach The following operations will be supported on OceanStor T series V100 and Dorado 5100 arrays: --Snapshot Creation --Snapshot Deletion --Create Volume from Snapshot
Project: nova Series: grizzly Blueprint: hyper-v-compute-resize Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/hyper-v-compute-resize Spec URL: None The Nova Compute Hyper-V driver is currently lacking the resize feature due to the complexity related to managing non-native filesystems on Windows (e.g. ext3 / ext4). A first implementation will handle migration and virtual disk resize support. A subsequent patch targeting Havana will implement guest OS resize through worker VMs spawned from lightweight Linux images (e.g. OpenWRT) to resize attached ext2/3/4 volumes, while NTFS volumes can be mounted as local loopback devices on the hypervisor for resizing (Windows Server 2012).
Implementation details
-------------------------------
Resize / cold migration is implemented by copying the local disks to a remote SMB share, identified by the configuration option HYPERV.instances_path_share or, if empty, by an administrative share with a remote path corresponding to the configuration option instances_path. The source instance directory is renamed by adding a suffix "_revert" and preserved until the migration is confirmed or reverted. In the former case the directory will be deleted, and in the latter renamed to the original name. The VM corresponding to the instance is deleted on the source host and recreated on the target.
Any mapped volume is disconnected on the source and reattached to the new VM on the target host. In the case of resize operations, the local VHD file is resized according to the new flavor limits. Due to VHD limitations, an attempt to resize a disk to a smaller size will result in an exception. In the case of differencing disks (CoW), should the base disk be missing from the target host's cache, it will be downloaded and reconnected to the copied differencing disk. Same-host migrations are supported by using a temporary directory with the suffix "_tmp" during disk file copy.
Project: nova Series: grizzly Blueprint: hyper-v-config-drive-v2 Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/hyper-v-config-drive-v2 Spec URL: None Support for config drive v2 configuration on Hyper-V, based on https://blueprints.launchpad.net/nova/+spec/config-drive-v2 Based on the creation of an ISO image converted to a raw VHD for compliance with the current cloud-init specs.
Project: nova Series: grizzly Blueprint: hyper-v-testing-serialization-improvements Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/hyper-v-testing-serialization-improvements Spec URL: None The current implementation of the Hyper-V tests uses serialized stubs in pickled and gzipped format, as documented here: https://github.com/openstack/nova/blob/master/nova/tests/hyperv/README.rst The serialized binary format generates management issues in Git and concerns related to the opacity of the blobs, and needs to be changed to JSON, as discussed in the following Nova meeting: http://eavesdrop.openstack.org/meetings/nova/2012/nova.2012-11-29-21.01.html
Project: horizon Series: grizzly Blueprint: iconify-buttons Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/iconify-buttons Spec URL: None Bootstrap includes a good set of icons, and we can both slim down our table header space usage and make things more visually intuitive by using icons on our action buttons instead of text. The classes are already on the buttons; it mostly just involves writing a little bit of CSS.
Project: horizon Series: grizzly Blueprint: image-upload Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/image-upload Spec URL: None Ability to upload an image from a file on disk via the UI. Strong preference goes to a solution that does not involve proxying the file through the Horizon server (since allowing arbitrary upload of potentially very large files is dangerous).
Project: cinder Series: grizzly Blueprint: implement-lvm-thin-provisioning Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/implement-lvm-thin-provisioning Spec URL: None As of LVM2 version 2.02.89 the ability to do thin provisioning was made available in LVM. This provides some cool new features but also addresses some problems with things like terrible LVM snapshot performance. Currently the version of LVM in Ubuntu 12.04 does NOT support LVM thin; however, an experimental PPA from Brightbox, which is a backport from Quantal, has been proposed to Canonical to be pulled in. For some users the experimental PPA is a better option than dealing with some of the current issues in the standard LVM2 version of Precise (including the dd hangs on secure delete).
For Precise, the prereqs are:
LVM version: 2.02.95(2) (2012-03-06)
Library version: 1.02.74 (2012-03-06)
Driver version: 4.22.0
To get these on Precise we need an experimental PPA from Brightbox:
sudo add-apt-repository ppa:brightbox/experimental
sudo apt-get install lvm2
The driver uses the pool_size config option to determine how large a thin pool to create. It defaults to '0', which will use the entire VG. The change is introduced as a new driver that basically just inherits from the existing LVM driver and would be used by adding the following driver selection to your cinder.conf file: volume_driver=cinder.volume.drivers.lvm.ThinLVMVolumeDriver (a sketch of the underlying LVM commands appears below).
Project: glance Series: grizzly Blueprint: importing-rootwarp Design: Obsolete Lifecycle: Complete Impl: Good progress Link: https://blueprints.launchpad.net/glance/+spec/importing-rootwarp Spec URL: None Allow Glance to execute system commands from the functional code: 1. Import Oslo processutils into Glance. 2. Add the necessary command execution wrapper functions to utils.py within Glance.
Project: horizon Series: grizzly Blueprint: improve-quantum-summary-table Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/improve-quantum-summary-table Spec URL: None Improves the summary table and detail info view for each network resource. In the Folsom implementation, some useful fields are not displayed.
Project: nova Series: grizzly Blueprint: instance-actions Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/instance-actions Spec URL: http://wiki.openstack.org/NovaInstanceActions Create a new instance_actions table, and an API extension to access it. This would provide a mechanism for better error reporting, and provide users insight into what has been done with their instance.
Project: cinder Series: grizzly Blueprint: instance-attached-field Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/instance-attached-field Spec URL: None Add a field to display what instance a volume is being attached to when in the "attaching" state. Attaching can take some time, so it would be nice to see what it will be attached to once it completes.
Project: heat Series: grizzly Blueprint: instance-update-stack Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/instance-update-stack Spec URL: None Currently any update to an Instance resource passed into UpdateStack will result in the instance being replaced. Implement the instance handle_update hook so our update behavior is closer to that defined for AWS instances; in particular we should allow instance metadata to be updated so that instance reconfiguration via cfn-hup is possible.
Project: cinder Series: grizzly Blueprint: iscsi-chap Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/iscsi-chap Spec URL: http://wiki.openstack.org/IscsiChapSupport Add CHAP support to the basic volume driver. Verify that CHAP support works. The spec is on this page: http://wiki.openstack.org/IscsiChapSupport.
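To illustrate the LVM primitives the ThinLVMVolumeDriver entry above builds on, here is a minimal sketch of thin-pool and thin-volume creation via subprocess, roughly the way a Cinder driver shells out; the volume group, pool and volume names are illustrative and this is not the actual driver code:

    import subprocess

    def create_thin_pool(vg_name, pool_name, size_gb):
        # Carve a thin pool out of the volume group (requires LVM2 >= 2.02.89).
        subprocess.check_call(['lvcreate', '-L', '%dG' % size_gb, '-T',
                               '%s/%s' % (vg_name, pool_name)])

    def create_thin_volume(vg_name, pool_name, lv_name, size_gb):
        # Thin volumes only consume pool space as data is actually written.
        subprocess.check_call(['lvcreate', '-V', '%dG' % size_gb, '-T',
                               '%s/%s' % (vg_name, pool_name), '-n', lv_name])

    create_thin_pool('cinder-volumes', 'cinder-thin-pool', 100)
    create_thin_volume('cinder-volumes', 'cinder-thin-pool', 'volume-demo', 10)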
Project: nova Series: grizzly Blueprint: iscsi-multipath Design: New Lifecycle: Not started Impl: Unknown Link: https://blueprints.launchpad.net/nova/+spec/iscsi-multipath Spec URL: None Use iSCSI and multipath devices directly instead of copying the base image from the iSCSI target service. If the base image is served as an iSCSI/IET remote target, this speeds up the VM boot process the first time a base image is booted on a nova-compute host.
Project: neutron Series: grizzly Blueprint: lbaas-namespace-agent Design: Pending Approval Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/lbaas-namespace-agent Spec URL: None This agent will utilize network namespaces and HAProxy to provide an open source LBaaS implementation.
Project: neutron Series: grizzly Blueprint: lbaas-plugin-api-crud Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/lbaas-plugin-api-crud Spec URL: None Work items for LBaaS Python APIs / CRUD operations: - Python plugin API (one-to-one mapping of the WS API) - SQLAlchemy data models - CRUD operations (this should enable use of the API with what is effectively a "null" driver)
Project: neutron Series: grizzly Blueprint: lbaas-restapi-tenant Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/lbaas-restapi-tenant Spec URL: http://wiki.openstack.org/Quantum/LBaaS/API_1.0 This BP describes the tenant LBaaS REST API. It specifies the object model, API definitions, and service operations.
Project: nova Series: grizzly Blueprint: libvirt-aoe Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/libvirt-aoe Spec URL: None Adding support for block storage attaching to hosts via AoE (ATA over Ethernet) SANs. This blueprint will be for the nova changes required to perform the attach/detach of the AoE / Coraid storage to a KVM VM. The nova and cinder specification URL is: https://blueprints.launchpad.net/cinder/+spec/coraid-volume-driver The AoE initiator driver is supported on all Linux hosts (released under GPL); the ATA over Ethernet (AoE) Linux driver for all 3.x and 2.6 kernels is available here: http://support.coraid.com/support/linux/ and ftp://ftp.alyseo.com/pub/partners/Coraid/Drivers/Linux/
Project: nova Series: grizzly Blueprint: libvirt-custom-hardware Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/libvirt-custom-hardware Spec URL: http://wiki.openstack.org/LibvirtCustomHardware Currently the libvirt driver mostly hardcodes the drivers it uses for disk/NIC devices in guests according to the libvirt hypervisor in use. There is a crude global option "libvirt_use_virtio_for_bridges" to force use of virtio for NICs. This is not satisfactory, since to have broad guest OS support the choice of drivers needs to be per-VM. This blueprint will introduce 2 new metadata options for disk images in glance, which will be used by the libvirt driver to override its default choice of NIC/disk driver when spawning VMs.
Project: nova Series: grizzly Blueprint: libvirt-fibre-channel Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/libvirt-fibre-channel Spec URL: None Currently block storage can be attached to hosts via iSCSI. Adding support for block storage attaching to hosts via Fibre Channel SANs as well.
This blueprint will be for the nova changes required to perform the attach/detach of the fibre channel storage to a KVM VM. The nova and cinder specification URL is http://wiki.openstack.org/Cinder/FibreChannelSupport
Project: nova Series: grizzly Blueprint: libvirt-live-snapshots Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/libvirt-live-snapshots Spec URL: None The current implementation of snapshots via the libvirt driver operates completely externally to libvirtd. This is accomplished by suspending (virDomainManagedSave) the instance, then manipulating the underlying backing files via qemu-img or similar tools. The limitation of this approach is that the instance being snapshotted must be shut down (qemu/kvm process stopped), as operating live has the possibility of corrupting the backing file. There was no other option at the time of implementation, keeping in mind the goal remains to always have instance_dir/disk be the active backing root. With Qemu 1.3 and Libvirt 1.0, functionality was introduced to allow us to execute snapshots of running instances. There are several new block management API calls, such as virDomainBlockRebase, virDomainBlockCommit, virDomainBlockPull and so on. Using these new methods and the associated Qemu functionality, we can perform snapshots without changing the instance's power state (running or stopped). We cannot expect to have the latest versions of Qemu and Libvirt available in all deployments. Thus, the current snapshot approach will also be preserved. Users who do satisfy the dependencies will be able to enable the new live snapshot functionality via a configuration option. If this option is set to True, we will additionally validate that the appropriate Qemu/Libvirt are available to us and fall back to the legacy snapshot method accordingly. Live snapshots will be disabled by default.
Project: nova Series: grizzly Blueprint: libvirt-spice Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/libvirt-spice Spec URL: None Nova has long had support for VNC consoles to guests. The VNC protocol is fairly limited, lacking support for multiple monitors, bi-directional audio, reliable cut+paste, video streaming and more. SPICE is a newer protocol which aims to address all the limitations in VNC, to provide good remote desktop support. As such, Nova should support SPICE in parallel with VNC. The work will cover four areas of OpenStack: SPICE enablement in the Nova libvirt driver and Nova RPC API, support for new commands in python-novaclient, integration into the Horizon dashboard UI and integration into devstack. spice-html5 along with a websockets proxy will provide an equivalent to noVNC.
Project: nova Series: grizzly Blueprint: libvirt-vif-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/libvirt-vif-driver Spec URL: http://wiki.openstack.org/LibvirtVIFDrivers Currently a great burden is placed on the nova sysadmin to correctly configure libvirt VIF driver choices. All of this can & should be done automatically, based on information about the type of network Nova is connecting to. The Nova Network driver can trivially provide sufficient data already. The Quantum server can now provide the 'vif_type' data, and the Nova Quantum plugin can fill out most of the rest of the data, until the Quantum server is able to directly return it.
The end result will be a single GenericVifDriver impl for libvirt which will work out of the box for all in-tree Quantum / Nova Network drivers. The vif_driver config param will remain to cope with the (hopefully unlikely) case where an out-of-tree Quantum plugin doesn't work with this generic driver.
Project: nova Series: grizzly Blueprint: lintstack Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/lintstack Spec URL: None Leverage Nova's git history to detect and remove pylint false positives, to make it a useful gating function for gerrit.
Project: cinder Series: grizzly Blueprint: lio-iscsi-support Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/lio-iscsi-support Spec URL: None Currently Cinder most often uses tgtd to create iSCSI targets for volumes. This blueprint aims to enable use of LIO, a more modern alternative, by interfacing with python-rtslib. A new iSCSI TargetAdmin class will be created for this. LIO: http://www.linux-iscsi.org/ This came out of the mailing list discussion about the lio-support-via-targetd blueprint, as it is a more straightforward method to support LIO before implementing a targetd driver.
Project: cinder Series: grizzly Blueprint: list-bootable-volumes Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/list-bootable-volumes Spec URL: None For ease of UI design - Purely a volume created from Glance? - An API call to set the flag for a volume?
Project: nova Series: grizzly Blueprint: live-migration-scheduling Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/live-migration-scheduling Spec URL: https://docs.google.com/document/d/1AiMLo2GEqQFNOWMsNATdHhlK5aq7q_vpVYBssybTH60/edit Currently the live-migration operation requires us to specify the destination host for the VM. It would be useful to have the ability to utilize the scheduler for choosing the destination host.
Project: neutron Series: grizzly Blueprint: make-string-localizable Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/make-string-localizable Spec URL: None Currently many strings in Quantum are not defined with gettext and are not localizable. So the main goal of this blueprint is to make user-visible strings localizable. In order to spread the task and reduce the difficulty of code review, I will split the commit into multiple isolated patches. Each module and each plugin will be a separate patch.
Project: swift Series: grizzly Blueprint: memcache-ring-compat Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/memcache-ring-compat Spec URL: None MemcacheRing has a few named parameters that are different from other memcache libraries. Change MemcacheRing to be compatible.
Project: nova Series: grizzly Blueprint: memcached-service-heartbeat Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/memcached-service-heartbeat Spec URL: None Today the heartbeat information of Nova services/nodes is maintained in the DB, with each service periodically updating the corresponding record in the Service table (by default every 10 seconds), specifying the timestamp of the last update. This mechanism is highly inefficient and does not scale.
E.g., maintaining the heartbeat information for 1,000 nodes/services would require 100 DB updates per second (just for the heartbeat). A much more lightweight heartbeat mechanism can be implemented using Memcached.
Project: neutron Series: grizzly Blueprint: metadata-non-routed Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/metadata-non-routed Spec URL: None This is an extension to Mark's original metadata-for-overlapping-IPs patch. The idea is to run the metadata proxy in the dhcp namespace, and inject routes to the VMs via DHCP to have them send traffic to 169.254.169.254 via the DHCP server address.
Project: neutron Series: grizzly Blueprint: metadata-overlapping-networks Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/metadata-overlapping-networks Spec URL: https://docs.google.com/document/d/1wixS-CrHe37Fv4my9MxUVeQKDb3mUJJCwPnireQ1gn8/edit When an OpenStack instance has multiple networks using the same IP address space, the metadata service does not function as expected.
Project: heat Series: grizzly Blueprint: metsrv-remove Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/metsrv-remove Spec URL: https://github.com/heat-api/heat/wiki/Cloudwatch-Architecture-rework Work is underway to remove the (unauthenticated) heat-metadata server, so that all metadata, waitcondition and metric interaction with the in-instance agents (cfn-hup, cfn-signal and cfn-push-stats) happens via the (authenticated) CloudFormation and CloudWatch APIs; this is part of the originally discussed CloudWatch architecture rework.
Project: neutron Series: grizzly Blueprint: midonet-quantum-plugin Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/midonet-quantum-plugin Spec URL: http://wiki.openstack.org/Spec-QuantumMidoNetPlugin Quantum plugin to enable MidoNet, Midokura's L2, L3 and L4 virtual networking solution, in Quantum.
Project: horizon Series: grizzly Blueprint: migrate-instance Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/migrate-instance Spec URL: None In the syspanel, I would expect the ability to migrate a single server. Steps: 0) login as admin 1) go to syspanel 2) go to instances and find the instance you want to migrate 3) click migrate --- The novaclient library exposes the API: $ nova help migrate usage: nova migrate <server> Migrate a server. Positional arguments: <server> Name or ID of server.
Project: nova Series: grizzly Blueprint: migration-testing-with-data Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/migration-testing-with-data Spec URL: None The summit session identified the need to do migration tests with more than an empty database to catch consistency issues. Migration tests should insert sample data into the database to make sure that data is not lost or corrupted, and that the migrations succeed.
Project: nova Series: grizzly Blueprint: multi-boot-instance-naming Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/multi-boot-instance-naming Spec URL: None Based on this bug https://bugs.launchpad.net/nova/+bug/1054212: When creating more than one instance in the scope of a single API call, Nova should automatically do something to make sure that the host names are unique.
Not doing so effectively makes the min/max options useless for anyone who wants to add their VMs into a DNS domain. Project: swift Series: grizzly Blueprint: multi-core-bench Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/multi-core-bench Spec URL: None Allow swift-bench to be run across multiple servers and cores. Project: swift Series: grizzly Blueprint: multi-range-gets Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/multi-range-gets Spec URL: None Support the full HTTP spec for Range requests. Currently, Swift only supports one range per request. The spec allows for multiple ranges in a single request. Project: nova Series: grizzly Blueprint: multi-tenancy-aggregates Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/multi-tenancy-aggregates Spec URL: None Multi-tenancy isolation with aggregates. The goal is to schedule instances from specific tenants to selected aggregate(s). In some cases it is necessary to isolate instances from specific tenant(s). This means that they can only be created on a set of hosts. To define the set of hosts we can use "aggregates". The idea is to create a new scheduler filter "AggregateMultiTenancyIsolation" that handles this use-case (see the sketch below): if an aggregate has the metadata filter_tenant_id=<tenant_id>, all hosts that are in the aggregate can only create instances from that tenant. A host can belong to different aggregates, so a host can create instances from different tenants if the different aggregates have defined the metadata filter_tenant_id. If a host doesn't belong to any aggregate it can create instances from all tenants. Also, if a host belongs to aggregates that don't define the metadata filter_tenant_id it can create instances from all tenants. Using Availability Zones can't solve this problem because a host can only be in one availability zone; the filter "AggregateInstanceExtraSpecsFilter" doesn't help either because it requires creating new and exclusive flavors for each tenant that needs isolation. Project: cinder Series: grizzly Blueprint: multi-volume-backends Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/multi-volume-backends Spec URL: http://wiki.openstack.org/Cinder/MultiVolumeBackend Allow managing multiple volume backends from a single volume manager. Right now there's a 1-1 mapping of manager-driver. This blueprint aims to provide support for 1-n manager-drivers, whereby certain volume drivers that really don't depend on local host storage can take advantage of this to manage multiple backends without having to run multiple volume managers. The thought is to use the existing configuration sections to distinguish the various drivers to load for a single volume manager. A current limitation of multi-backend is that there is one backend per volume_type. A volume_type must be set up and it must also correspond to the flag set in each [backend]. See example below. Project: cinder Series: grizzly Blueprint: name-attr-consistency Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/name-attr-consistency Spec URL: None Change the "display_name" attribute to "name" in the API for consistency with other services.
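As a rough illustration of the AggregateMultiTenancyIsolation check described in the multi-tenancy-aggregates blueprint above, the core decision might look like the following Python sketch; the helper name and data layout (a list of aggregate-metadata dicts per host) are assumptions for illustration, not Nova's actual filter code:

def host_passes(host_aggregate_metadata, tenant_id):
    # host_aggregate_metadata: metadata dicts of the aggregates the host is in
    restricted_to = [md['filter_tenant_id']
                     for md in host_aggregate_metadata
                     if 'filter_tenant_id' in md]
    if not restricted_to:
        # Host is in no aggregate that defines filter_tenant_id,
        # so it accepts instances from any tenant.
        return True
    return tenant_id in restricted_to

# Example: a host only in an aggregate reserved for tenant 'abc'
print(host_passes([{'filter_tenant_id': 'abc'}], 'abc'))    # True
print(host_passes([{'filter_tenant_id': 'abc'}], 'other'))  # False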
Project: heat Series: grizzly Blueprint: native-rest-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/native-rest-api Spec URL: None Currently Heat supports an OpenStack RPC API and an AWS CloudFormation-compatible HTTP/XML-RPC API. Add an OpenStack REST API to allow access to Heat through the standard OpenStack mechanism. Old bug: https://bugs.launchpad.net/heat/+bug/1072945 Project: neutron Series: grizzly Blueprint: nec-security-group Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/nec-security-group Spec URL: None Security group support is just a port of the security group support in the OVS plugin. It reuses both the plugin and agent sides of the OVS plugin support, including RPC, and adds some plugin-specific code. Port security extension support is tightly coupled with the security group extension to some degree, so if adding the port security extension is a small change it will be included in this blueprint. The change is limited to the NEC plugin and does not affect others. * Scope: Same as the scope of the Security Group Extension * Use Cases: Same as those of the Security Group Extension (but limited to the iptables based implementation) * Implementation Overview: The implementation is just a port of the security group support in the OVS plugin. It reuses both the plugin and agent sides of the OVS plugin support, including RPC, and adds some plugin-specific code. * Data Model Changes: No data model changes; just add the NEC plugin to the list in the security group DB migration script. * Configuration variables: There may be a plugin-specific configuration option which enables/disables the Quantum security group extension. It depends on the Nova VIF plugging implementation. * API's: No change * Plugin Interface: no change Project: cinder Series: grizzly Blueprint: netapp-cluster-nfs-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/netapp-cluster-nfs-driver Spec URL: None Add support for NFS files stored on clustered ONTAP to be used as virtual block storage. The driver is an interface from OpenStack Cinder to a clustered ONTAP storage system, managing NFS files on the NFS exports provided by the cluster storage so they can be used as virtual block storage. Project: cinder Series: grizzly Blueprint: netapp-direct-volume-drivers Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/netapp-direct-volume-drivers Spec URL: None The current NetApp drivers for iSCSI and NFS require NetApp management software like OnCommand DFM etc. to be installed as a middle-layer interface to do management operations on NetApp storage. The direct drivers provide an alternate mechanism via the NetApp API (ONTAPI) to do storage management operations without the need for any additional management software between OpenStack and NetApp storage. The idea is to implement direct-to-storage drivers achieving the same functionality as the already submitted NetApp drivers. Project: nova Series: grizzly Blueprint: network-adapter-hotplug Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/network-adapter-hotplug Spec URL: None It is useful for users of OpenStack instances to be able to plug/unplug a VIF at any time. 1. Create a VIF which has an IP and MAC. 2. Associate it with the specified instance. We need to add an API to Nova to execute the plug/unplug action
and add the option to use this feature in novaclient. Project: nova Series: grizzly Blueprint: no-db-compute Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/no-db-compute Spec URL: None Make all of the necessary changes so that nova-compute no longer has direct access to the database. Project: nova Series: grizzly Blueprint: no-db-compute-manager Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/no-db-compute-manager Spec URL: None The compute manager should not have any direct database calls, but rely on the conductor. Project: nova Series: grizzly Blueprint: no-db-virt Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/no-db-virt Spec URL: None Remove any and all direct database queries from the nova/virt drivers in preparation for bp no-db-compute. Project: nova Series: grizzly Blueprint: non-blocking-db Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/non-blocking-db Spec URL: None Add eventlet db_pool use for MySQL. This adds the use of eventlet's db_pool module so that we can make MySQL calls without blocking the whole process. New config options are introduced: sql_dbpool_enable -- enables the use of eventlet's db_pool; sql_min_pool_size -- sets the minimum number of SQL connections. The default for sql_dbpool_enable is False for now, so there are no forced behavior changes for those using MySQL. sql_min_pool_size defaults to 1 to match behavior when not using db_pool. Adds a new test module for our sqlalchemy code, testing this new option as much as is possible without requiring a MySQL server to be running. DocImpact Change-Id: I99833f447df05c1beba5a3925b201dfccca72cae Project: nova Series: grizzly Blueprint: nova-api-samples Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/nova-api-samples Spec URL: https://etherpad.openstack.org/api-samples Create tests to obtain reliable and attested samples of Nova API requests and responses using both the XML and JSON interfaces. These samples will be used on the API site api.openstack.org and in the manuals. Project: nova Series: grizzly Blueprint: nova-common-rootwrap Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/nova-common-rootwrap Spec URL: None Rootwrap is moving to openstack-common. Once this is completed, Nova should make use of the openstack.common version of rootwrap. Project: nova Series: grizzly Blueprint: nova-compute-cells Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/nova-compute-cells Spec URL: http://wiki.openstack.org/blueprint-nova-compute-cells This blueprint introduces the new nova-cells service. The aims of the service are: * to allow additional scaling and (geographic) distribution without complicated database or message queue clustering * to separate cell scheduling from host scheduling Project: horizon Series: grizzly Blueprint: nova-net-quantum-abstraction Design: Drafting Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/nova-net-quantum-abstraction Spec URL: None There are a number of places where API calls need to be redirected and/or outright altered depending on which network service is being used. Providing a switchable layer would prevent a lot of ugly hacks propagating through the codebase.
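To illustrate the kind of switchable layer the nova-net-quantum-abstraction blueprint above describes, here is a minimal, hedged sketch; the function and backend names are made up for illustration and are not Horizon's actual API:

def _nova_floating_ip_list(request):
    return 'floating IPs via nova-network'

def _quantum_floating_ip_list(request):
    return 'floating IPs via quantum'

_BACKENDS = {
    'nova': _nova_floating_ip_list,
    'quantum': _quantum_floating_ip_list,
}

def floating_ip_list(request, backend='quantum'):
    # Callers use a single entry point; the backend difference is hidden here.
    return _BACKENDS[backend](request)

print(floating_ip_list(request=None, backend='nova'))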
Project: nova Series: grizzly Blueprint: nova-quantum-security-group-proxy Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/nova-quantum-security-group-proxy Spec URL: None Quantum now has native security group support. This blueprint is for implementing a way in Nova to proxy security group calls directly to Quantum. Nova currently has a security group handler that we use to proxy the calls to Quantum. The problem with this, though, is that we are still going through the Nova database, so if Quantum is unable to complete the security group request the handler needs to delete the entry it just added in the Nova database and raise. This leads to transactional issues. Project: nova Series: grizzly Blueprint: nova-rootwrap-options Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/nova-rootwrap-options Spec URL: None Use the new ability to support options to provide a log file that will audit all commands called as root and the matching filter. Also solve wishlist bug 1013147 (provide a search path for executables). Project: nova Series: grizzly Blueprint: nova-securitygroups-expansion Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/nova-securitygroups-expansion Spec URL: None We intend on creating a new SecurityGroupsAPI that does not interact with the Nova database. A slight modification to the current instantiation of the SecurityGroupsAPI will need to be made so it uses a flag instead of the built-in API. Project: nova Series: grizzly Blueprint: nova-v2-api-audit Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/nova-v2-api-audit Spec URL: http://wiki.openstack.org/NovaV2APIAudit Go through the existing Nova API with a fine-toothed comb, figure out the various return usage today, and expose more of the inconsistencies we find. The output of this is additional unit tests and bugs to fix in the v3 API. Described as part of the v3 API. Project: neutron Series: grizzly Blueprint: nvp-api-client-loadbalance-request Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/nvp-api-client-loadbalance-request Spec URL: None The current version of the nvp_api client does not load balance requests across controllers. Instead, it just sends all the requests to one controller and, if there is a controller failure, it fails over to use another controller. This blueprint implements the ability to utilize all controllers at once. Project: neutron Series: grizzly Blueprint: nvp-l3-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/nvp-l3-api Spec URL: http://wiki.openstack.org/Quantum/Spec-NVPPlugin-L3-API This blueprint is about providing support for the L3 APIs in the NVP plugin. At the moment the NVP plugin does not support the L3 API extension. The change should be limited to the Nicira NVP plugin code; no other components should be affected. Project: neutron Series: grizzly Blueprint: nvp-nwgw-api Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/nvp-nwgw-api Spec URL: None The aim of this blueprint is to provide support for "Layer-2 Gateways", an NVP-specific feature.
This support will be provided through an extension available for the NVP plugin only, in a way similar to several extensions supported by the Cisco plugin. The extension will configure Layer-2 connections between Quantum networks and external networks. Although similar in some respects, this is different from the provider networks feature. For this reason a different extension is being proposed instead of re-using provider networks. More details to appear in the specification. Project: neutron Series: grizzly Blueprint: nvp-port-security-extension Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/nvp-port-security-extension Spec URL: None Implement an API extension to prevent spoofing. Project: neutron Series: grizzly Blueprint: nvp-provider-net Design: Pending Approval Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/nvp-provider-net Spec URL: http://wiki.openstack.org/Quantum/Spec-NVPPlugin-Provider-Nets This blueprint is merely about adding support for the provider networks extension to the NVP plugin. The changes, even if non-trivial, are limited to the NVP plugin. As the NVP plugin supports a different set of network types from those specified in the standard extension, this extension needs to be slightly changed, or adapted in a way such that the plugin can declare its allowed network types. More details on the wiki page. Project: neutron Series: grizzly Blueprint: nvp-qos-extension Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/nvp-qos-extension Spec URL: None https://docs.google.com/document/d/1jBz3j9bXF-OnAzNU0UDt00UfG5bKEXzTu84zR0CoInQ/edit Project: nova Series: grizzly Blueprint: optimize-nova-network Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/optimize-nova-network Spec URL: None There are a bunch of potential optimizations to nova-network as outlined in this thread: http://lists.openstack.org/pipermail/openstack-dev/2013-January/004404.html Project: horizon Series: grizzly Blueprint: organised-images-display Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/organised-images-display Spec URL: None We'd like to highlight or separate public images belonging to our official tenant, to gently push users towards using images that our cloud officially supports (i.e., the images we maintain as admins). It would also help to highlight or separate a user's own images so they're easier to find. Project: nova Series: grizzly Blueprint: pass-rxtx-factor-to-quantum Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/pass-rxtx-factor-to-quantum Spec URL: None Add a small change in order to pass rxtx_factor to Quantum for the QoS extension (https://blueprints.launchpad.net/quantum/+spec/nvp-qos-extension) Project: neutron Series: grizzly Blueprint: plumgrid-quantum-plugin Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/plumgrid-quantum-plugin Spec URL: http://wiki.openstack.org/plumgrid-quantum The PLUMgrid plugin supports the Quantum Core V2 APIs over an infrastructure running the PLUMgrid Network Virtualization Platform. The plugin will interact directly with the Hypervisor layer to provide all the networking functionality requested by the Quantum APIs.
It will be based on a controller-mode implementation where all resource state will be controlled and handled by the plugin, but all operations will be performed by the controller. This controller is referred to as NOS throughout the code. Project: neutron Series: grizzly Blueprint: port-security-api-base-class Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/port-security-api-base-class Spec URL: None https://docs.google.com/document/d/18trYtq3wb0eJK2CapktN415FRIVasr7UkTpWn9mLq5M/edit Project: nova Series: grizzly Blueprint: powervm-compute-enhancements Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/powervm-compute-enhancements Spec URL: None PowerVM is the virtualization solution for AIX, IBM i and Linux environments on IBM POWER machines. This blueprint is to continue to advance the PowerVM driver, refining and adding new functionality. This blueprint includes snapshot of instances for the PowerVM driver on IBM POWER hardware. Project: nova Series: grizzly Blueprint: powervm-compute-resize-migration Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/powervm-compute-resize-migration Spec URL: None This blueprint is to continue to advance the PowerVM driver, refining and adding new functionality. The scope of this blueprint includes support for resizing instances and the migration of instances on IBM POWER machines. Project: nova Series: grizzly Blueprint: preallocated-images Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/preallocated-images Spec URL: None Allocate storage for VM images up front for performance improvements and space availability guarantees. This is implemented by calling fallocate where available to allocate file system blocks efficiently when the VM is initially provisioned. This will give immediate feedback if enough space isn't available. Also it should significantly improve performance on writes to new blocks, and may even improve I/O performance to prewritten blocks due to reduced fragmentation. A new config option is added: preallocate_images={none, space} With preallocate_images=space, fallocate is called on the VM images. References used when implementing this: commands and perf tests for using fallocate on vm images: http://kashyapc.wordpress.com/2011/12/02/little-more-disk-io-perf-improvement-with-fallocateing-a-qcow2-disk/ == Future work == For performance reasons it helps to enable the preallocation=metadata option in qcow images. It remains to be seen if this helps when done on the backing image, or is only significant for the instance images, in which case they need to be copies and can't have a backing file. So we may in future also support a preallocate_images=performance option that will do additional processing at VM startup to enable more performant I/O at run time. Reference info on preallocation=metadata: preallocation=metadata significant performance advantages http://www.redhat.com/archives/libvir-list/2010-October/msg00946.html preallocation=metadata incompatible with backing_file http://www.gossamer-threads.com/lists/openstack/dev/10592 preallocation=metadata commands and perf tests http://itscblog.tamu.edu/improve-disk-io-performance-in-kvm/ qemu notes on qcow2 performance http://wiki.qemu.org/Qcow2/PerformanceRoadmap preallocation=metadata on base image may improve perf?
I can't see how this is effective, and in my testing it's not. http://comments.gmane.org/gmane.comp.emulators.kvm.devel/95270 Reason for the qcow -> raw conversion in base: https://github.com/openstack/nova/commit/ff9d353b2f and http://pad.lv/932180 Performance notes for various image types. Tests were done by writing in a VM backed by a local file system, using: dd if=/dev/zero of=file bs=1M count=1k conv=notrunc,fdatasync oflag=append I didn't see gradual degradation as some have seen on NFS at least, but did see quite different performance depending on the formats used: disk performance outside VM = 120MB/s; raw in $instance_dir/ = 105MB/s; qcow copy with preallocation=metadata in $instance_dir/ = 100MB/s; qcow CoW with fallocate full size in $instance_dir/ = 55MB/s (note: perf a bit more stable than without fallocate; I didn't test with a full host disk where improvements would be more noticeable); qcow CoW in $instance_dir/ = 52MB/s; qcow CoW in $instance_dir/ backed by qcow with preallocation=metadata in base = 52MB/s. Another thing to consider in future is having allocation supported as a flavor type rather than as a global setting. Perhaps something along the lines of "Instance disk I/O control" at: https://review.openstack.org/#/c/22105 Project: swift Series: grizzly Blueprint: proxy-affinity Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/proxy-affinity Spec URL: None Proxy servers can choose more local replicas for reads. In a single region, this may mean that a replica in the same server or same rack (e.g. zone) is chosen. For multi-region, this means that reads are sent to the local region first. Project: heat Series: grizzly Blueprint: python-heatclient Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/python-heatclient Spec URL: None Implement a python-heatclient in a separate repository that should consume the new OpenStack-style REST API. Project: neutron Series: grizzly Blueprint: quantum-db-upgrades Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-db-upgrades Spec URL: https://docs.google.com/document/d/1YzKDf9IzfsmlhPvtNLv9d-luap25YlihBTqtcnEjrQo/edit The goal of this blueprint is to handle DB upgrades just like all the other OpenStack projects. As the plugin mechanism is a Quantum peculiarity, we need to look at plugin-specific upgrade paths. Project: horizon Series: grizzly Blueprint: quantum-floating-ip Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/quantum-floating-ip Spec URL: None Support the Quantum floating IP feature by calling the Quantum API directly. The major difference from Nova floating IPs is that a Quantum floating IP is associated with a VIF rather than an instance. Project: neutron Series: grizzly Blueprint: quantum-floodlight-bigswitch-l3 Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-floodlight-bigswitch-l3 Spec URL: None Scope: Adding support for the L3 extension in the RESTProxy BigSwitch/FloodLight Quantum Plugin. Use Cases: Same as those relevant to the L3 extension (CRUD routers/floating IPs). Implementation Overview: In the spirit of the RESTProxy plugin, L3 extension calls will be processed (CRUD of logical resources) and the changes will be proxied to a backend controller.
Data Model Changes: None Configuration variables: A configuration variable specific to the RESTProxy plugin is being added to identify that particular Quantum server ID. API's: No new APIs Plugin Interface: Not applicable Required Plugin support: Applicable to the RESTProxy BigSwitch/FloodLight Quantum Plugin Dependencies: Tests have a dependency on jsonschema CLI Requirements: None Horizon Requirements: Not applicable Usage Example: Consistent with the L3 extension Test Cases: Same as those for L3; in addition a JSON schema validation test will be added. Project: neutron Series: grizzly Blueprint: quantum-gate Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-gate Spec URL: None This BP tracks the issues that need to be resolved in order to have a Quantum gate for the commit process. Project: heat Series: grizzly Blueprint: quantum-integration Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/quantum-integration Spec URL: None Add Quantum resource types to Heat. This is limited to the Quantum features in the Folsom release. Project: neutron Series: grizzly Blueprint: quantum-l3-routes Design: Discussion Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-l3-routes Spec URL: https://docs.google.com/document/d/1wDQJ00PbLY-7O-BuVLDRatP0BD57PnBZzSWFQ9ljGCk/edit Adding a new attribute "routes" to Router, for configuring the routing table in the l3-agent. Project: horizon Series: grizzly Blueprint: quantum-l3-support Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/quantum-l3-support Spec URL: None Adding the Quantum L3 router feature. Project: horizon Series: grizzly Blueprint: quantum-lbaas Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/quantum-lbaas Spec URL: http://wiki.openstack.org/Quantum/LBaaS/UI Support the load balancing advanced service in Horizon. The Quantum code is under development and will be completed in G3. Project: horizon Series: grizzly Blueprint: quantum-network-topology Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/quantum-network-topology Spec URL: None Add a network topology graphical view. Nachi demonstrated a very attractive prototype at the Grizzly summit and it's a good starting point. Project: neutron Series: grizzly Blueprint: quantum-plugin-hyper-v Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-plugin-hyper-v Spec URL: http://www.cloudbase.it/downloads/Quantum_plugin_for_HyperV_specs.pdf Quantum driver to support Hyper-V. Plugin documentation link: http://www.cloudbase.it/quantum-hyper-v-plugin/ Project: neutron Series: grizzly Blueprint: quantum-scheduler Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-scheduler Spec URL: https://docs.google.com/document/d/1TJlW0_tMpeENA_ia38fvRu7ioKRt9fsWXBjivwd1mMw/edit Scheduler support for Quantum - Utilize resources: support scalability of the Quantum agents, availability zones, and working across multiple hosts - Agent management: monitoring agents, managing agents' capabilities - High availability. Implementation plan:
1. Move service.py to openstack-common https://github.com/openstack/nova/blob/master/nova/service.py 2. Rewrite agents using service.py 3. Add a host attribute for subnet and router. 4. Update each agent to work with the scheduler. Project: neutron Series: grizzly Blueprint: quantum-security-groups Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-security-groups Spec URL: None So far with OpenStack, security groups were implemented by Nova using iptables + libvirt nwfilters (also based on iptables). With Quantum, we want to have plugins implement security groups, as packet filtering is highly specific to the type of networking technology being used (e.g., iptables based filtering is not compatible with SR-IOV NICs). From the Folsom Summit: - Dave's slide: http://www.slideshare.net/delapsley1/20120417-osdesignsummitsecuritygroupsdlapsleyfinal - Nova has a flag to enable the default group or not; should we have that as well? - Need to add an option for an Amazon compatibility mode for the default rule. Some want default deny vs. allowing network ingress if no rules are defined (the Amazon way). Note: This blueprint may be broken into multiple blueprints - basic extension (already complete?) - implementations for various plugins. Project: neutron Series: grizzly Blueprint: quantum-security-groups-iptables-lb Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-security-groups-iptables-lb Spec URL: https://docs.google.com/presentation/d/1nXzNXKIfCfotdav5BzkceDiOfDypEkvtTfVXCGdq6rY/edit#slide=id.g33084527_0_0 Scope: This bp implements the iptables version of the Quantum SecurityGroup extension. This bp targets the LinuxBridge plugin. Use Cases: See https://blueprints.launchpad.net/quantum/+spec/quantum-security-groups Implementation Overview: See https://docs.google.com/presentation/d/1nXzNXKIfCfotdav5BzkceDiOfDypEkvtTfVXCGdq6rY/edit#slide=id.g33084527_0_60 Data Model Changes: N/A Configuration variables: firewall_driver (the package name of the driver that implements the firewall function) API's: the RPC API update_port will be notified when a security group or security group rule is updated; firewall.py: https://github.com/nttmcl/quantum/commit/4987b0ade5e130a38a397c40a81a9ddcfee1bf7a Plugin Interface: See https://blueprints.launchpad.net/quantum/+spec/quantum-security-groups Required Plugin support: The L2 agent should call the firewall module before plugging the port, updating the port, or unplugging the port. Dependencies: See https://blueprints.launchpad.net/quantum/+spec/quantum-security-groups CLI Requirements: N/A Horizon Requirements: N/A Usage Example: See https://blueprints.launchpad.net/quantum/+spec/quantum-security-groups Test Cases: See https://docs.google.com/presentation/d/1nXzNXKIfCfotdav5BzkceDiOfDypEkvtTfVXCGdq6rY/edit#slide=id.g33084527_0_60 Project: neutron Series: grizzly Blueprint: quantum-security-groups-iptables-ovs Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-security-groups-iptables-ovs Spec URL: https://docs.google.com/presentation/d/1nXzNXKIfCfotdav5BzkceDiOfDypEkvtTfVXCGdq6rY/edit#slide=id.g2900e35a_0_ Scope: This bp implements the iptables version of the Quantum SecurityGroup extension. This bp targets the OVS plugin.
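As a rough, hedged illustration of what the iptables-based drivers in the two security group blueprints above do, the sketch below turns a simplified security group rule into iptables arguments; the rule fields and the chain name are assumptions for illustration, not the actual Quantum agent code:

def rule_to_iptables_args(chain, rule):
    # Build the argument list for a single ACCEPT rule in the given chain.
    args = ['-A', chain]
    if rule.get('protocol'):
        args += ['-p', rule['protocol']]
        if rule.get('port_range_min'):
            args += ['--dport', '%s:%s' % (rule['port_range_min'],
                                           rule.get('port_range_max',
                                                    rule['port_range_min']))]
    if rule.get('remote_ip_prefix'):
        args += ['-s', rule['remote_ip_prefix']]
    return args + ['-j', 'ACCEPT']

example = {'protocol': 'tcp', 'port_range_min': 22, 'port_range_max': 22,
           'remote_ip_prefix': '10.0.0.0/24'}
print('iptables ' + ' '.join(rule_to_iptables_args('sg-chain', example)))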
Project: neutron Series: grizzly Blueprint: quantum-service-framework Design: Discussion Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-service-framework Spec URL: http://wiki.openstack.org/Quantum/ServiceIntegration The spec provides a high-level overview of what needs to be done to create a flexible service framework allowing advanced services to be added to Quantum. Project: neutron Series: grizzly Blueprint: quantum-service-type Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-service-type Spec URL: https://docs.google.com/document/d/1g0qMog_8BusLRc6cEUiVgc4S3G9_tDSaHFaHXfTctpw/edit This blueprint is about defining a framework of APIs and supporting methods for inserting L4/L7 services into Quantum logical topologies. Slides from the summit: Deck 1 - http://www.slideshare.net/salv_orlando/advanced-network-services-insertions-framework Deck 2 - [provide link to Sasha's slides] Etherpad Discussion: https://etherpad.openstack.org/grizzly-quantum-svc-insertion High-level discussion on the topic available here: http://wiki.openstack.org/Quantum/ServiceInsertion This is mostly DB/API-level work. We will have an extension for accessing and managing service types (with a 'default' service type which could be specified in the configuration file). The service type manager will also expose methods to be used by advanced service plugins to increase/decrease a reference count on the service type itself. NOTE: An alternative being considered here is whether a plugin can register a foreign key to the service type. The code will be provided by a dummy plugin (with its own API), to validate how service types can be specified in requests. The code will also extend the 'service plugins' work to allow for assigning short names to plugins. UPDATE 2012-12-31: Patch set 6 is ready for review. There are a few bits which still should be addressed, however. Go to gerrit for more details. Project: neutron Series: grizzly Blueprint: quantum-v2-api-xml Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-v2-api-xml Spec URL: None This feature will look at adding XML support for the Quantum v2 API, which currently only supports JSON. There is an existing branch under review (https://review.openstack.org/#/c/10856/), but it was deemed incomplete. This work would need to include: - XML support for all core APIs, including the ability to express "null" values. - XML support for all existing v2 API extensions (providernets and quantum-router come to mind) - Full test coverage on par with the JSON tests, without causing code duplication Project: horizon Series: grizzly Blueprint: quantum-vnic-ordering Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/quantum-vnic-ordering Spec URL: None VNIC ordering is important (someone wants to connect eth0 to his management network and eth1 to the service network), but we cannot specify an ordering of vNICs now. Project: nova Series: grizzly Blueprint: quota-instance-resource Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/quota-instance-resource Spec URL: http://wiki.openstack.org/InstanceResourceQuota libvirt can use tc and cgroups to implement resource quotas for things such as CPU, blkio, and network traffic. All these features are essential for a public cloud.
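As a hedged illustration of the kind of per-instance CPU limits the quota-instance-resource blueprint above is about, the sketch below renders libvirt's <cputune> element from flavor-style settings; the extra-spec key names used here are examples of such settings, not necessarily the exact keys the implementation chose:

def cputune_xml(extra_specs):
    # Map example extra-spec keys onto libvirt <cputune> child elements.
    mapping = [('quota:cpu_shares', 'shares'),
               ('quota:cpu_period', 'period'),
               ('quota:cpu_quota', 'quota')]
    elems = ['  <%s>%s</%s>' % (tag, extra_specs[key], tag)
             for key, tag in mapping if key in extra_specs]
    if not elems:
        return ''
    return '<cputune>\n%s\n</cputune>' % '\n'.join(elems)

print(cputune_xml({'quota:cpu_shares': 512, 'quota:cpu_period': 100000}))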
Project: nova Series: grizzly Blueprint: rebuild-for-ha Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/rebuild-for-ha Spec URL: http://wiki.openstack.org/Evacuate The rebuild-for-HA feature can boot instances which went down due to a host failure on other hosts while keeping their original identity. Project: swift Series: grizzly Blueprint: recon-repl-oldest-newest Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/recon-repl-oldest-newest Spec URL: None This adds support for reporting the oldest replication pass completion as well as the most recent. This is quite useful for finding those odd replicators that have hung up for some reason and need intervention. Project: swift Series: grizzly Blueprint: recon-top-full Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/recon-top-full Spec URL: None Add a --top flag to swift-recon -d to show the top full drives. Project: swift Series: grizzly Blueprint: region-tier Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/region-tier Spec URL: None Add a region tier above zones. This allows for the existing "unique-as-possible" placement strategy to continue to work across a distributed cluster and ensures that data is as protected from failures as possible. Project: neutron Series: grizzly Blueprint: remove-v1-code-cisco-plugin Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/remove-v1-code-cisco-plugin Spec URL: None Perform clean up of v1 code from the Cisco plugin (follows https://blueprints.launchpad.net/quantum/+spec/remove-v1-related-code). Project: swift Series: grizzly Blueprint: remove-webob Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/remove-webob Spec URL: None Remove webob as a dependency. Project: heat Series: grizzly Blueprint: resource-type-internetgateway Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/resource-type-internetgateway Spec URL: None AWS::EC2::InternetGateway http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-internet-gateway.html Creates a new Internet gateway in your AWS account. After creating the Internet gateway, you then attach it to a VPC. The VPC creation wizard in the AWS Management Console automatically adds an Internet gateway to your VPC (depending on which scenario you select). However, you might have an existing VPC with only a virtual private gateway, and you might want to add an Internet gateway. When you add an Internet gateway to your VPC, your goal is to have a subnet that contains public instances (instances with public IP addresses, such as web servers or a NAT instance). Parameters Tags Maps to a Quantum Router gateway: quantum router-create --tenant_id [DEMO_TENANT_ID] router[x] quantum router-gateway-set [ROUTER_ID] [EXTERNAL_NETWORK_ID] # for each subnet in the network quantum router-interface-add [ROUTER_ID] [SUBNET_ID] Comments Heat will have to be configured for policy to know what EXTERNAL_NETWORK_ID to use. An InternetGateway is assigned to a VPC, so every Quantum subnet in a net must be assigned to the router that has the gateway set on it.
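The same router/gateway mapping shown with the quantum CLI just above can also be driven programmatically. The following is only a sketch using python-neutronclient-style calls with placeholder IDs and credentials; the grizzly-era library was named python-quantumclient, so treat the import path as an assumption:

from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='demo',
                        auth_url='http://127.0.0.1:5000/v2.0/')
# Create the router that plays the role of the InternetGateway.
router = neutron.create_router({'router': {'name': 'router1'}})['router']
# Set its gateway to the external network (EXTERNAL_NETWORK_ID is a placeholder).
neutron.add_gateway_router(router['id'], {'network_id': 'EXTERNAL_NETWORK_ID'})
# Attach each subnet of the VPC's network to the router (SUBNET_ID is a placeholder).
neutron.add_interface_router(router['id'], {'subnet_id': 'SUBNET_ID'})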
Project: heat Series: grizzly Blueprint: resource-type-networkinterface Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/resource-type-networkinterface Spec URL: None AWS::EC2::NetworkInterface http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-network-interface.html Describes a network interface in an Elastic Compute Cloud (EC2) instance for AWS CloudFormation. This is provided in a list in the NetworkInterfaces property of AWS::EC2::Instance. Parameters Description GroupSet PrivateIpAddress SourceDestCheck SubnetId Tags Maps to a Quantum Port: quantum port-create --fixed-ip subnet_id=[SubnetId],ip_address=[PrivateIpAddress] net[x] Comments During nova boot of the associated AWS::EC2::Instance, we need to specify the equivalent of --nic port-id=[port-id] Project: heat Series: grizzly Blueprint: resource-type-routetable Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/resource-type-routetable Spec URL: None AWS::EC2::RouteTable http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-route-table.html Creates a new route table within a VPC. After you create a new route table, you can add routes and associate the table with a subnet. Your VPC has an implicit router (represented by the R enclosed in a circle in the diagrams in this guide). Your VPC automatically comes with a modifiable main route table. You can create other route tables in your VPC (for the limit on the number you can create, see Appendix B: Limits). You can replace the main route table with a custom table you've created (if you want a different table to be the default table each new subnet is associated with). Parameters VpcId Tags Maybe maps to a Quantum Router, except there is no concept of a Route: quantum router-create --tenant_id [DEMO_TENANT_ID] router1 Comments According to the route table documentation: http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html a VPC comes with a default main route table. AWS::EC2::RouteTable is for specifying additional route tables to associate with subnets. Every subnet must be associated with a route table. Whereas in Quantum, a router is not associated with a network, only subnets. This also means that a network does not have a default router assigned. Project: heat Series: grizzly Blueprint: resource-type-srta Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/resource-type-srta Spec URL: None AWS::EC2::SubnetRouteTableAssociation http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-subnet-route-table-assoc.html Associates a subnet with a route table. Each subnet must be associated with a route table, which controls the routing for the subnet. If you don't explicitly associate a subnet with a particular table, the subnet uses the main route table. Parameters SubnetId RouteTableId Maps to a Quantum Router interface: quantum router-interface-add [RouteTableId] [SubnetId] Project: heat Series: grizzly Blueprint: resource-type-subnet Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/resource-type-subnet Spec URL: None AWS::EC2::Subnet http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-subnet.html Creates a subnet in an existing VPC. You can create a VPC that spans multiple Availability Zones.
After creating a VPC, you can add one or more subnets in each Availability Zone. Each subnet must reside entirely within one Availability Zone and cannot span Availability Zones. Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones. Parameters VpcId CidrBlock AvailabilityZone Tags Maps to a Quantum Subnet: quantum subnet-create --tenant-id [SERVICE_TENANT_ID] net[x] [CidrBlock] Comments Need to map from VpcId to net[x]. Project: heat Series: grizzly Blueprint: resource-type-vpc Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/resource-type-vpc Spec URL: None AWS::EC2::VPC http://docs.amazonwebservices.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-vpc.html Creates a Virtual Private Cloud (VPC) with the CIDR block that you specify. A VPC is the first object you create when using Amazon Virtual Private Cloud. When creating the VPC, you simply provide the set of IP addresses you want the VPC to cover. You specify this set of addresses in the form of a Classless Inter-Domain Routing (CIDR) block. For example, 10.0.0.0/16. Parameters CidrBlock InstanceTenancy Tags Maps to a Quantum Network: quantum net-create --tenant-id [DEMO_TENANT_ID] net[x] Comments VPC seems to map closely to a Quantum Network, however: The VPC's CIDR appears to be used for validation only; subnets must be within the CIDR. It's not clear whether InstanceTenancy is supportable or relevant; possibly related to the Network attribute shared? The UUID of the net could be used as the VpcId for subsequent operations. It may be necessary to map between the VpcId and the assigned net[x] name. Implementation In Heat, a VPC resource encapsulates a Quantum Network and a Quantum Router. A VPC Subnet encapsulates a Quantum Subnet, and when associated with the VPC, the Subnet also gets an implicit association with the Router in the VPC. Project: neutron Series: grizzly Blueprint: restproxy-plugin Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/restproxy-plugin Spec URL: http://wiki.openstack.org/Quantum/RestProxyPlugin This is a generic Quantum plugin that translates Quantum function calls into authenticated REST requests to a set of redundant external network controllers. Project: cinder Series: grizzly Blueprint: retain-glance-metadata-for-billing Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/retain-glance-metadata-for-billing Spec URL: http://wiki.openstack.org/RetainGlanceMetadata When generating a volume from an image, it is required that the Glance metadata also be retained in the Cinder database, so that instances which are booted from the volume have access to the metadata. Specifically, this is for billable Glance images so that the charging metadata is retained in the Cinder volume (and any clones thereafter). Project: neutron Series: grizzly Blueprint: routed-service-insertion Design: Drafting Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/routed-service-insertion Spec URL: http://wiki.openstack.org/Quantum/ServiceInsertion Scope: Provide routed service insertion so tenants have the ability to plug their services into an L3 router in their networks.
Use cases: Routed service insertion is critical to some advanced services, a firewall for example, because we need to explicitly specify where a firewall rule should be applied, such as deny telnet service from any source to any destination. Routed insertion is also a preferred service insertion mode for multi-service appliances because multiple services, which are NOT independent of each other, need to be running on the same appliance. Implementation overview: Extend the L3 router and advanced service resources to support routed service insertion. The L3 resource will be extended to support a "service_type_id" attribute for a service type, which defines a list of advanced services to be inserted into an L3 router. The advanced service (LBaaS specifically, as it's the only advanced service extension available today) will be extended to support a "router_id" attribute, which is the ID of the L3 router into which the service will be plugged. Data model changes: Add RouterSvcType to bind a router and a service type id, and add RoutedSvcInsertion to bind Vip, Pool, and HealthMonitor with a router id. APIs: Add an optional "service_type_id" to the create L3 router API. Add an optional "router_id" to the create Vip, Pool, and HealthMonitor APIs. Dependencies: Depends on the service-type and lbaas extensions. Test Cases: 1. create L3 router with and without service_type_id. 2. update L3 router with and without service_type_id. 3. delete L3 router. 4. create Vip/Pool/HealthMonitor with and without router_id. 5. update Vip/Pool/HealthMonitor with and without router_id. 6. delete Vip/Pool/HealthMonitor. Project: neutron Series: grizzly Blueprint: rpc-for-l3-agent Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/rpc-for-l3-agent Spec URL: https://docs.google.com/document/d/1rWp41OnCwivj2sMazeNkD3RTf1rL_GzV2f9Fud5FNjY/edit At the end of Folsom we did not have time to make the L3 agent use RPC; as a result, it uses polling, which is expensive if there are a lot of routers and router interfaces in a deployment. We should fix this to use RPC, similar to how an L2 plugin agent uses RPC (see the sketch below). Project: neutron Series: grizzly Blueprint: ryu-plugin-update-for-ryu Design: Drafting Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/ryu-plugin-update-for-ryu Spec URL: None Update the plugin for the Ryu update. Project: neutron Series: grizzly Blueprint: ryu-remove-nova Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/ryu-remove-nova Spec URL: None Remove the nova files under plugins/ryu/nova. Project: neutron Series: grizzly Blueprint: ryu-remove-ryu-specific-interface-driver Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/ryu-remove-ryu-specific-interface-driver Spec URL: None With the recent Ryu update, there is no need for a Ryu-specific interface driver; the OVS plugin's driver works. So let's remove the Ryu-specific interface driver (and the Nova VIF driver).
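For the rpc-for-l3-agent blueprint above, the difference between polling and push can be sketched roughly as follows; the class and method names here are hypothetical and only illustrate the notification-driven model, not the actual Quantum RPC code:

class L3AgentNotificationHandler(object):
    # Instead of periodically fetching every router, the agent resyncs
    # only the routers named in a notification pushed by the plugin.
    def __init__(self, resync_router):
        self.resync_router = resync_router

    def routers_updated(self, context, router_ids):
        for router_id in router_ids:
            self.resync_router(router_id)

handler = L3AgentNotificationHandler(lambda rid: print('resync %s' % rid))
handler.routers_updated(context=None, router_ids=['router-1', 'router-2'])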
Project: neutron Series: grizzly Blueprint: ryu-tunnel-support Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/ryu-tunnel-support Spec URL: None Add tunnel support to the Ryu plugin. Project: cinder Series: grizzly Blueprint: scality-volume-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/scality-volume-driver Spec URL: None The goal of this blueprint is to add a new driver that manages volumes on the Scality SOFS filesystem. This gives OpenStack users the option of storing their data on a high-capacity, replicated, highly available Scality Ring object storage cluster. Scality SOFS: http://www.scality.com/connectors/ SOFS is a network filesystem mounted with FUSE, with most options given in a configuration file. Given a mount point and a SOFS configuration file as driver options (in cinder.conf), the Scality volume driver in Cinder mounts SOFS, and then creates or deletes volumes as regular (sparse) files on SOFS. Similarly, the Scality volume driver in Nova mounts SOFS and lets the hypervisor access the volumes. Project: nova Series: grizzly Blueprint: scality-volume-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/scality-volume-driver Spec URL: None Provide Nova support for the Scality volume driver. Project: nova Series: grizzly Blueprint: scope-config-opts Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/scope-config-opts Spec URL: https://etherpad.openstack.org/grizzly-nova-config-options Many config options within Nova are declared globally, i.e. in nova.config. Often, config options can be declared and used within the one module. This is ideal because it is clear the option is only intended for use by that module and its declaration is easily found. Even where multiple modules use a given option, one of those modules is often a natural place to declare it and the other modules can use CONF.import_opt() to explicitly declare their dependency on the option. Scoping options in this way will also help us figure out groups to put options in, which will, in turn, help users make sense of the config file. Project: horizon Series: grizzly Blueprint: security-group-rules Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/security-group-rules Spec URL: None The security group dialog closes every time the user adds or removes a rule. This dialog should be streamlined so that users can add and remove multiple rules without exiting the dialog. Project: neutron Series: grizzly Blueprint: security-groups-nvp Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/security-groups-nvp Spec URL: None Implement Quantum security groups in the NVP plugin. Project: glance Series: grizzly Blueprint: separate-client Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/separate-client Spec URL: None The bin/glance CLI tool should be removed from Glance after the Folsom release. Project: nova Series: grizzly Blueprint: shared-dhcp-ip Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/shared-dhcp-ip Spec URL: None In multihost mode, the compute node needs an IP address on the guest network in order to serve DHCP.
There is no reason that each compute node needs a different IP address, so it would be preferable if each compute node could share an IP address for DHCP and that traffic would be isolated to the node itself. Project: nova Series: grizzly Blueprint: show-availability-zone Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/show-availability-zone Spec URL: None Today, there is no way to get all availability_zones in a region, and the availability_zone of an instance is not shown in the instance detail information. By adding this, users can see all availability_zones in a region, and easily choose one to launch an instance in. Project: nova Series: grizzly Blueprint: snapshot-task-states Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/snapshot-task-states Spec URL: http://wiki.openstack.org/nova-image-task-states Two new snapshot-related task states that offer more transparency of instance state during the snapshot action. Project: nova Series: grizzly Blueprint: snapshots-for-everyone Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/snapshots-for-everyone Spec URL: None The current snapshot logic suspends instances anyway during snapshot creation, so there is no reason to restrict snapshotting of raw and LVM-backed instances. Project: heat Series: grizzly Blueprint: stack-rollback Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/stack-rollback Spec URL: None The AWS API has a "DisableRollback" flag, which we do not currently implement http://docs.amazonwebservices.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html "Boolean to enable or disable rollback on stack creation failures." Currently it is necessary to manually delete stacks on creation failure. I guess we'll need to understand how the AWS rollback mechanism works and implement something similar which is enabled when DisableRollback=False. Project: heat Series: grizzly Blueprint: static-inst-group Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/static-inst-group Spec URL: None Make a resource like the Autoscaling group, but a static group. Project: swift Series: grizzly Blueprint: static-large-object Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/static-large-object Spec URL: None Support large objects with a well-defined (explicit) manifest of segments. Project: swift Series: grizzly Blueprint: storage-quotas Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/storage-quotas Spec URL: http://wiki.openstack.org/SwiftQuotas In some deployment scenarios, such as a private cloud, the provider may want to limit the tenants (accounts) to a maximum allowable amount of storage via a quota.
Project: neutron Series: grizzly Blueprint: support-pagination-in-api-v2 Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/support-pagination-in-api-v2 Spec URL: https://docs.google.com/document/d/1xZQDCOmGXlMvn8BLC2_HG_VNrRCPdIkZ_KW0lorfVQM/edit Analysis and Design: https://docs.google.com/document/d/1xZQDCOmGXlMvn8BLC2_HG_VNrRCPdIkZ_KW0lorfVQM/edit Project: horizon Series: grizzly Blueprint: swift-folder-prefix Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/swift-folder-prefix Spec URL: None Swift originally documented a method for implementing pseudo-hierarchical folders inside containers by using a directory marker object. The currently published 1.0 API removes this method in favor of a combination of prefix and delimiter arguments in the query string. Project: swift Series: grizzly Blueprint: swift-init-kill-wait-flag Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/swift-init-kill-wait-flag Spec URL: None You can now give swift-init a -k N (or --kill-wait N) option to override the default 15 second wait for a process to die after sending it the die signal. This is useful for boxes that are awfully slow for whatever reason. Project: swift Series: grizzly Blueprint: swift-proxy-caching Design: New Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/swift/+spec/swift-proxy-caching Spec URL: None The object servers impose relatively high overhead for small reads. Caching at the proxy node can alleviate this load, if the proxies are spec'd with a large amount of memory. Intended workloads: large quantities of small reads with few writes, e.g. CDN. Design: A memcache server is used to do the actual caching. For each Swift object, one cache object is stored, composed of the cache time, an array of headers, and the actual object payload. A WSGI filter sits before the proxy server, which handles the caching. The WSGI filter adds 'If-None-Match' and 'If-Modified-Since' HTTP headers if: - The original request didn't specify these. - The object was found in cache. For GET and HEAD requests: If the returned answer is 304 Not Modified, the response is replaced with the cached object. If the response is less than 500, the cache is invalidated. If the response status is 200 and the object is deemed cacheable, it is added to the cache. For other requests: If the response code is less than 500, the cache is invalidated. An object is considered cacheable if: - Its size does not exceed a configured maximum - The request does not contain a Range header Further points of interest - We'll have to handle Range transfers correctly - Header changes are not reflected in the etag, so we might be serving stale data - Documentation should explain where the cache should be in the chain (e.g. after auth) Ideas for further improvement: - Allow storage of objects larger than the maximum memcache size. - Allow write-through instead of write-around (i.e., cache PUT operations). - Allow write-back (probably a bad idea). - Leverage various Cache-Control flags to avoid contacting the object servers for cached objects.
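A much-simplified sketch of the caching filter idea from the swift-proxy-caching blueprint above: a plain dict stands in for memcached, only whole-object GETs are cached, and conditional revalidation, Range handling and invalidation on writes are omitted; all names are illustrative, this is not the proposed filter itself:

MAX_CACHEABLE = 1024 * 1024  # configured maximum object size, in bytes

class SimpleObjectCache(object):
    def __init__(self, app):
        self.app = app
        self.cache = {}  # path -> (status, headers, body)

    def __call__(self, environ, start_response):
        path = environ.get('PATH_INFO', '')
        is_get = environ.get('REQUEST_METHOD') == 'GET'
        if is_get and path in self.cache:
            # Serve the cached object without contacting the object servers.
            status, headers, body = self.cache[path]
            start_response(status, headers)
            return [body]
        captured = {}
        def capture(status, headers, exc_info=None):
            captured['status'], captured['headers'] = status, headers
            return start_response(status, headers, exc_info)
        body = b''.join(self.app(environ, capture))
        if (is_get and captured['status'].startswith('200')
                and 'HTTP_RANGE' not in environ
                and len(body) <= MAX_CACHEABLE):
            # Only small, non-Range 200 responses are considered cacheable.
            self.cache[path] = (captured['status'], captured['headers'], body)
        return [body]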
Project: heat Series: grizzly Blueprint: swift-resource-type Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/swift-resource-type Spec URL: None There should be a Swift container resource type which has similar behaviour to the S3Bucket, but exposing Swift semantics in the same way that the native quantum resources do. Project: horizon Series: grizzly Blueprint: system-info-panel Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/system-info-panel Spec URL: None As part of the goal of making the admin dashboard more useful and meaningful, we can combine the read-only information such as default quotas, API capabilities, etc. into a "System Information" panel. Project: nova Series: grizzly Blueprint: tenant-networks Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/tenant-networks Spec URL: None The os-networks extension is intended to be admin only, and in that capacity, does things only an admin should be allowed to do, like list all known networks. I'd like to propose renaming that extension to os-admin-networks and supplying a new extension focused on tenants to take over the os-networks namespace. Alternatively, I think it would be acceptable to supply the tenant-only extension under the name os-tenant-networks and leave the existing networks extension alone. This blueprint also strongly asserts the need for things that are "extensions" to stop requiring changes in novaclient. Namely, network "extension" changes recently went into novaclient that essentially make assumptions about the existence of extensions in the API. Project: horizon Series: grizzly Blueprint: theme-support Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/theme-support Spec URL: None In many organizations, horizon should look different from standard horizon. This includes CSS, graphics, and also templates. Preferably, the selected theme should be loaded if the corresponding files are available; if not, the default files should be used. Project: swift Series: grizzly Blueprint: time-to-first-byte-timing Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/time-to-first-byte-timing Spec URL: None Emit statsd timing information on GET requests with the timing data for the first byte of the body. Project: nova Series: grizzly Blueprint: trusted-filter-cache Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/trusted-filter-cache Spec URL: None Currently, each time the trusted filter is called to check host_pass, it polls the OAT service to get the trust level for the host. This solution does not scale well. With a cache for the host trust level, the trusted filter does not need to consult the OAT service while the cache is still valid, which improves scalability.
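Since the trusted-filter-cache entry only describes the caching idea, here is a minimal, hypothetical sketch of a trust-level cache with a time-to-live. The attestation callable, the TTL value, and the "trusted"/"untrusted" levels are placeholders for illustration, not Nova's actual OAT client or filter code.

# Hypothetical sketch of caching host trust levels with a TTL; illustrative only.
import time

class TrustLevelCache(object):
    def __init__(self, query_oat, ttl=60.0):
        self.query_oat = query_oat   # callable(host) -> "trusted" / "untrusted" (placeholder)
        self.ttl = ttl               # seconds a cached entry stays valid
        self._cache = {}             # host -> (trust_level, fetched_at)

    def get_trust_level(self, host):
        entry = self._cache.get(host)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]          # still fresh: no call to the attestation service
        level = self.query_oat(host) # cache miss or expired: ask OAT once and remember
        self._cache[host] = (level, time.time())
        return level

# Example usage: the filter would then simply compare the cached level to the request.
# cache = TrustLevelCache(query_oat=my_oat_client.host_trust_level)   # hypothetical client
# host_passes = cache.get_trust_level("compute-01") == "trusted"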
Project: swift Series: grizzly Blueprint: underscore-tempauth Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/underscore-tempauth Spec URL: None Allow underscores in tempauth users and accounts. Project: horizon Series: grizzly Blueprint: unify-config Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/unify-config Spec URL: None Horizon's config is currently made up of a lot of random settings which may or may not be present and which are getting ever more complicated. Using the settings in a safe way in the code is getting progressively harder. By offering a unified config module in horizon (much like "from django.conf import settings") which has a default conf containing everything, plus the user's conf overlaid on top of that, we could standardize access, defaults, and safety all in one go. It'd even be more DRY! Project: swift Series: grizzly Blueprint: update-proxy-logging Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/swift/+spec/update-proxy-logging Spec URL: None Some client requests may create a bunch of internal requests and have a response with the body of one of these internal requests. Update proxy-logging to better handle this sort of activity. Project: heat Series: grizzly Blueprint: update-rollback Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/update-rollback Spec URL: None Now that we implement rollback on stack create (ref bp stack-rollback), we also need to add support for rolling back failed stack updates, so the stack ends up back in its original state after an attempted update fails. Project: cinder Series: grizzly Blueprint: update-snap-metadata Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/update-snap-metadata Spec URL: None Add capability to update metadata for snapshots. Project: cinder Series: grizzly Blueprint: update-vol-metadata Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/update-vol-metadata Spec URL: None Add capability to update metadata for volumes. Project: nova Series: grizzly Blueprint: version-rpc-messages Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/version-rpc-messages Spec URL: None The summit session on upgrade topics confirmed the findings of the session on trusted RPC with respect to the need for versioned RPC wireline messages so that future improvements can be made without requiring cluster-atomic upgrades. Project: neutron Series: grizzly Blueprint: vif-plugging-improvements Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/vif-plugging-improvements Spec URL: https://docs.google.com/presentation/d/1vD2bc2WyqQzOLODjFrfLgg661WU0nteY7NEaVS4pv5g/edit High level description: Improve the VIF plugins. The following link describes the problem: https://docs.google.com/presentation/d/1vD2bc2WyqQzOLODjFrfLgg661WU0nteY7NEaVS4pv5g/edit The solution will enable Nova to retrieve the underlying network implementation from the Quantum plugin. Below are the required Quantum and Nova changes.
Quantum changes: - provide an API whereby Nova can retrieve the underlying network implementation - each plugin will need to provide this. Nova changes: - In the long run http://wiki.openstack.org/VifPlugging - for the first phase a generic Quantum VIF plugin will be created. - Nova will learn the networking implementation and build the networking configuration accordingly. This saves the configuration of drivers and management on the Nova side. - Old configuration variables will be kept for backward compatibility (still need to understand how these can be deprecated). APIs: GET /network-implementation-details/ Each plugin will be responsible for filling in the relevant details. We may need to add a port-implementation API as well; this is pending input from the lists. Configuration variables: None Algorithm: Quantum - return the specific networking implementation. Nova - ensure that the networking implementation is supported :) Data Model Changes: None Plugin Interface: No changes Required Plugin support: Yes, each plugin will need to inherit the extended API and ensure that the correct networking implementation is returned. Project: nova Series: grizzly Blueprint: virt-disk-api-refactoring Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/virt-disk-api-refactoring Spec URL: http://wiki.openstack.org/VirtDiskApiRefactor The current nova.virt.disk API contains code for file injection which assumes that the disk image can be mapped into the host filesystem. As a previous CVE has demonstrated, exposing the guest filesystem in the host is risky. By introducing a proper VFS abstraction, we can make use of the libguestfs API directly, instead of via its FUSE module. This isolates file injection from the host OS. Project: nova Series: grizzly Blueprint: vmware-compute-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/vmware-compute-driver Spec URL: http://www.slideshare.net/opencompute/vmware-nova-compute-driver Enhancing the VMware Compute Driver: • Launch OVF disk image • VNC console • Attach and detach iSCSI volume • Guest info • Host ops • VLAN • Quantum • Cold migration • Live migration • VirtualCenter support Project: cinder Series: grizzly Blueprint: vol-api-consistency Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/vol-api-consistency Spec URL: None Update /volumes and /volumes/details to match the behavior of other OpenStack projects. Project: cinder Series: grizzly Blueprint: vol-type-to-uuid Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/vol-type-to-uuid Spec URL: None Change the ID of volume types to be a UUID and change all references to volume type in the API to use the UUID rather than the volume type name. Project: cinder Series: grizzly Blueprint: volume-backups Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/volume-backups Spec URL: http://wiki.openstack.org/VolumeBackups This blueprint adds support for backing up user volumes to Swift. This backup service will allow the user to create, restore and delete backups as well as listing backups and showing the details of a specific backup. The term backup as used in this blueprint refers to a copy of the original volume which is stored on Swift. This backup is independent of the original volume and may be used for archival and disaster recovery purposes.
This is distinct from a snapshot of the volume, which may be generated using techniques such as copy-on-write and may have dependencies on the original volume. Project: horizon Series: grizzly Blueprint: volume-encryption-field Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/volume-encryption-field Spec URL: None This blueprint is related to the new Nova feature proposed in https://blueprints.launchpad.net/nova/+spec/encrypt-cinder-volumes Changes have been made in Nova to allow for Cinder volumes to be encrypted when attached to an instance. When a volume is created, a metadata tag is created to indicate the encryption (or lack thereof) for that volume. This blueprint is to expose that functionality to users through Horizon. When creating a volume, a new form field will be added to allow the user to select encryption for the volume. The encryption field can be disabled/hidden via a setting flag. When displaying detailed information for a volume, a new section will be added to display all metadata for a volume. Project: cinder Series: grizzly Blueprint: volume-rpc-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/volume-rpc-api Spec URL: None Cinder's internal volume RPC API should be versioned for the sake of better support for upgrading. Project: cinder Series: grizzly Blueprint: volume-type-scheduler Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/volume-type-scheduler Spec URL: None To create a basic volume-type aware scheduler and all required infrastructure for reporting volume capabilities from nova-volume nodes (pretty much reuse the existing infrastructure). It should be able to find the nodes best suited for hosting a volume of a particular type (with some special properties); a small capability-matching sketch follows at the end of this block. Project: cinder Series: grizzly Blueprint: volume-usage-metering Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/volume-usage-metering Spec URL: http://wiki.openstack.org/Extend%20volume%20notifications%20to%20include%20usage%20statistics Nova administrators want data on nova volume usage (number of reads, bytes read, number of writes, bytes written) for billing, chargeback, or monitoring purposes. Project: heat Series: grizzly Blueprint: vpc-resources Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/vpc-resources Spec URL: https://etherpad.openstack.org/grizzly-heat-quantum Implement as many of the Amazon VPC resource types as the current feature set of Quantum allows. Project: nova Series: grizzly Blueprint: xenapi-config-drive Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/xenapi-config-drive Spec URL: None Look at the libvirt driver support for config drive, and mirror that ability in the XenAPI driver. Project: cinder Series: grizzly Blueprint: xenapi-storage-manager-nfs Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/xenapi-storage-manager-nfs Spec URL: None nova-volume was able to use the XenAPI storage manager. We need to add this back, with the knowledge that one SR can only be attached to one pool (usually one host). This blueprint looks at adding support for the NFS SR.
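Because the volume-type-scheduler entry above stays at the idea level, the sketch below shows one hypothetical way of matching a volume type's extra specs against capabilities reported by volume nodes. The spec keys, host names, and capability dictionaries are invented for illustration and do not mirror Cinder's actual scheduler code.

# Hypothetical capability matching for a volume-type aware scheduler; illustrative only.
def host_matches(volume_type_extra_specs, host_capabilities):
    """Return True if every extra spec requested by the volume type is satisfied."""
    for key, wanted in volume_type_extra_specs.items():
        if host_capabilities.get(key) != wanted:
            return False
    return True

def pick_hosts(extra_specs, all_capabilities):
    """Filter the reported capabilities down to hosts that can serve this type."""
    return [host for host, caps in all_capabilities.items()
            if host_matches(extra_specs, caps)]

# Example with made-up backends and specs:
reported = {
    "volume-node-1": {"storage_protocol": "iSCSI", "qos": "gold"},
    "volume-node-2": {"storage_protocol": "NFS", "qos": "silver"},
}
print(pick_hosts({"storage_protocol": "iSCSI"}, reported))   # ['volume-node-1']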
Project: cinder Series: grizzly Blueprint: xenapinfs-glance-integration Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/xenapinfs-glance-integration Spec URL: None Make the XenAPINFS volume driver able to create a volume from a Glance image. Project: cinder Series: grizzly Blueprint: xenapinfs-snapshots Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/xenapinfs-snapshots Spec URL: None Support snapshots in XenAPINFS. Project: nova Series: grizzly Blueprint: xenserver-bittorrent-images Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/xenserver-bittorrent-images Spec URL: None The proposal here is to use BitTorrent to speed up builds in a cluster. The main advantages are that this should speed up builds even when the image is not cached and considerably reduce our need to continually add more Glance-API nodes within a cluster. The ultimate goal would be for BitTorrent to supplant Glance-API entirely, using BitTorrent on the compute nodes (of which we have plenty :-) and Swift (which is already built to scale) without having the bottleneck of a fixed number of Glance-API servers in between. Project: heat Series: grizzly Blueprint: yaml-templates Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/yaml-templates Spec URL: http://wiki.openstack.org/Heat/YAMLTemplates Many users (and devs) complain about the difficulty of writing the cfn templates. This is partly because of how fiddly JSON is, so we need to support something easier to write, like YAML. Project: nova Series: grizzly Blueprint: zk-service-heartbeat Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/zk-service-heartbeat Spec URL: http://wiki.openstack.org/NovaZooKeeperHeartbeat Today the heartbeat information of Nova services/nodes is maintained in the DB, while each service periodically updates the corresponding record in the Service table (by default, every 10 seconds), specifying the timestamp of the last update. This mechanism is highly inefficient and does not scale. E.g., maintaining the heartbeat information for 1,000 nodes/services would require 100 DB updates per second (just for the heartbeat). A much more lightweight, scalable and reliable heartbeat mechanism can be implemented using ZooKeeper (which on its own can also be used for other purposes, to further enhance the scalability and resiliency of Nova).
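As a rough illustration of the zk-service-heartbeat idea, here is a minimal sketch that registers a service as an ephemeral ZooKeeper node using the kazoo client: the node exists only while the service's session is alive, so liveness can be checked without any periodic database update. The ensemble address and the path layout are assumptions made for the example, not the blueprint's actual design.

# Hypothetical sketch: service liveness via an ephemeral ZooKeeper node (kazoo client).
from kazoo.client import KazooClient

zk = KazooClient(hosts="zk1.example.com:2181")   # placeholder ensemble address
zk.start()

# The ephemeral node disappears automatically if this service's session dies,
# so "is the node there?" replaces the periodic heartbeat row in the database.
service_path = "/nova/services/compute/host-01"   # assumed path layout
zk.create(service_path, b"alive", ephemeral=True, makepath=True)

# Anyone needing the list of live compute services just reads the children:
live_hosts = zk.get_children("/nova/services/compute")
print(live_hosts)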