Project: cinder Series: havana Blueprint: 3par-qos-support Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/3par-qos-support Spec URL: None Currently, the OpenStack HP 3PAR Fibre Channel (FC) and iSCSI drivers do not support Quality of Service (QoS) extra specs. The QoS settings we would like to add include: • maximum MB/second (maxBWS) • maximum IO/second (maxIOPS) These new extra specs will be scoped keys; the scoping will be qos:maxBWS and qos:maxIOPS. A new key, hp3par:vvs, was also added to allow the admin to predefine QoS settings on a 3PAR virtual volume set; any volume created would be added to that predefined volume set. No additional changes would be made to OpenStack outside the HP 3PAR FC and iSCSI Block Storage (cinder) drivers. Implementation Details: This blueprint adds maxBWS and maxIOPS as extra specs in the existing OpenStack HP 3PAR Fibre Channel and iSCSI Cinder drivers. Both drivers call the existing 3PAR Web Services API to create a volume on the 3PAR storage array. The 3PAR storage arrays set these values on volume sets, not on the actual volume, so the change is to create a volume set with these settings and then create the volume in that volume set. 1. Max IO/s and Max MB/s are not QoS guarantees. 2. These are per-volume maximums which the 3PAR is guaranteed not to exceed. 3. Setting these values does not guarantee that these performance rates will be achievable.

Project: nova Series: havana Blueprint: add-attribute-ip-in-server-search-options Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/add-attribute-ip-in-server-search-options Spec URL: None In the existing implementation of OpenStack, only the admin has the option to search servers based on the ip attribute. This functionality has been extended to all tenants.

Project: ceilometer Series: havana Blueprint: add-event-table Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/add-event-table Spec URL: https://etherpad.openstack.org/Supporting-rich-data-types There are many use cases that require storing the underlying raw event from the source systems. This feature will provide a flexible storage mechanism for this varied data.

Project: cinder Series: havana Blueprint: add-export-import-volumes Design: Approved Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/cinder/+spec/add-export-import-volumes Spec URL: None It would be useful to add the ability to export/import volumes from one cinder node to another, or more precisely, to have the ability to import "non-OpenStack" volumes such as LVs or volumes already on a back-end device into OpenStack/Cinder. Additionally, for enterprise environments the ability to export might be useful. This would look like a delete from Cinder's perspective; however, the volume on the back-end storage device would be left intact.

Project: nova Series: havana Blueprint: add-host-to-pcloud Design: New Lifecycle: Not started Impl: Unknown Link: https://blueprints.launchpad.net/nova/+spec/add-host-to-pcloud Spec URL: None This functionality would allow customers to add a host of a particular flavor to the pcloud.
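A minimal sketch of how a driver might read the scoped extra-spec keys described in the 3par-qos-support entry above (qos:maxIOPS, qos:maxBWS and hp3par:vvs). The helper name and the shape of the return value are illustrative assumptions, not the HP driver's actual code.

    # Hypothetical helper, not the HP 3PAR driver code: pull the scoped QoS
    # keys out of a volume type's extra specs and return them alongside the
    # optional predefined virtual volume set name.
    def parse_3par_extra_specs(extra_specs):
        qos = {}
        if 'qos:maxIOPS' in extra_specs:
            qos['maxIOPS'] = int(extra_specs['qos:maxIOPS'])
        if 'qos:maxBWS' in extra_specs:
            qos['maxBWS'] = int(extra_specs['qos:maxBWS'])
        vvs_name = extra_specs.get('hp3par:vvs')
        return qos, vvs_name

    # parse_3par_extra_specs({'qos:maxIOPS': '5000', 'hp3par:vvs': 'gold_vvs'})
    # -> ({'maxIOPS': 5000}, 'gold_vvs')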
Project: nova Series: havana Blueprint: add-iser-support-to-nova Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/add-iser-support-to-nova Spec URL: None Adding support for the iSER transport protocol. Performance improvements: iSCSI/TCP vs. iSER.

Project: glance Series: havana Blueprint: add-sheepdog-support Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/add-sheepdog-support Spec URL: None This adds support for storing images in a Sheepdog cluster. Sheepdog is an open source distributed storage system.

Project: nova Series: havana Blueprint: add-tilera-to-baremetal Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/add-tilera-to-baremetal Spec URL: None This blueprint adds support for Tilera bare-metal provisioning (http://www.tilera.com/). The general baremetal provisioning framework is described at https://wiki.openstack.org/wiki/GeneralBareMetalProvisioningFramework. The baremetal driver is a hypervisor driver for OpenStack Nova Compute. Within the OpenStack framework, it has the same role as the drivers for other hypervisors (libvirt, xen, etc.). It exposes hardware via OpenStack's API, using pluggable sub-drivers to deliver machine imaging (PXE) and power control (IPMI). With this patch set for the Tilera back-end, provisioning and management of non-PXE Tilera physical hardware is accomplished using common cloud APIs and tools.

Project: nova Series: havana Blueprint: admin-api Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/admin-api Spec URL: None This feature adds another API endpoint to nova that is intended to be exposed internally to the deployment. This is similar in concept to the admin API exposed by Keystone. The driving feature that brought this up most recently is: https://wiki.openstack.org/wiki/Cinder/GuestAssistedSnapshotting A related discussion thread: http://lists.openstack.org/pipermail/openstack-dev/2013-August/013181.html Another example of a feature that would be better off only in an admin API is: https://review.openstack.org/#/c/41265/

Project: nova Series: havana Blueprint: admin-api-for-delete-quota Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/admin-api-for-delete-quota Spec URL: None Add the ability for admins to delete a non-default quota (absolute limit) for a tenant, so that the tenant's quota will revert back to the configured default.

Project: horizon Series: havana Blueprint: admin-domain-crud Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/admin-domain-crud Spec URL: None Keystone V3 introduces the concept of Domain as a high-level container for projects, users and groups. The feature should be manageable in the Admin Dashboard. Additionally, there should be a way to associate projects, users and groups with a Domain through role assignment.

Project: horizon Series: havana Blueprint: admin-group-crud Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/admin-group-crud Spec URL: None Keystone V3 introduces the concept of Group as a container of Users. A group is created in the scope of a Domain. Groups in Keystone should be manageable in Horizon's Admin Dashboard. Additionally, this feature should allow user management within the Group.
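A hedged sketch of what the admin-api-for-delete-quota entry above describes: an admin removing a tenant's quota overrides so the configured defaults apply again. The endpoint path and header used here are assumptions for illustration only; the final API shape is defined by the implementation, not by this sketch.

    import requests

    def revert_tenant_quota_to_default(nova_endpoint, admin_token, tenant_id):
        # Assumed endpoint shape: DELETE on the tenant's quota-set resource.
        resp = requests.delete(
            '%s/os-quota-sets/%s' % (nova_endpoint, tenant_id),
            headers={'X-Auth-Token': admin_token})
        resp.raise_for_status()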
Project: horizon Series: havana Blueprint: admin-password-for-server Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/admin-password-for-server Spec URL: None When creating a server, we can set an administrative password for it, or we can change the password for the specified server using Nova's API. This feature requires adding libvirt_inject_password=True to the nova.conf file.

Project: horizon Series: havana Blueprint: admin-role-crud Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/admin-role-crud Spec URL: None Roles in Keystone should be manageable in Horizon's Admin Dashboard.

Project: ceilometer Series: havana Blueprint: alarm-api Design: Drafting Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/alarm-api Spec URL: None We need to tie down the requirements for managing the state and history of alarms, for example providing: * an API to allow users to define and modify alarm rules * an API to query current alarm state and modify this state for testing purposes * a period for which alarm history is retained and is accessible to the alarm owner (likely to have less stringent data retention requirements than regular metering data) * an administrative API to support across-the-board querying of state transitions for a particular period (useful when assessing the impact of operational issues in the metric pipeline)

Project: ceilometer Series: havana Blueprint: alarm-audit-api Design: Drafting Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/ceilometer/+spec/alarm-audit-api Spec URL: None Provides an API to register and list the audit trail of alarms that have fired.

Project: ceilometer Series: havana Blueprint: alarm-distributed-threshold-evaluation Design: Drafting Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/alarm-distributed-threshold-evaluation Spec URL: None A simple method of detecting threshold breaches for alarms is to do so directly "in-stream" as the metric datapoints are ingested. However, this approach is overly restrictive when it comes to wide-dimension metrics, where a datapoint from a single source is insufficient to perform the threshold evaluation. The in-stream evaluation approach is also less suited to the detection of missing or delayed data conditions. An alternative approach is to use a horizontally scaled array of threshold evaluators, partitioning the set of alarm rules across these workers. Each worker would poll for the aggregated metric corresponding to each rule it has been assigned. The allocation of rules to evaluation workers could take into account both locality (ensuring rules applying to the same metric are handled by the same workers if possible) and fairness (ensuring the workload is evenly balanced across the current population of workers). The polling cycle would also provide a logical point to implement policies such as: * correcting for metric lag * gracefully handling sparse metrics versus detecting missing expected datapoints * selectively excluding chaotic data.
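An illustrative sketch (not the Ceilometer implementation) of the rule-allocation idea in the alarm-distributed-threshold-evaluation entry above: hashing on the metric name keeps rules for the same metric on the same worker (locality) while spreading distinct metrics across the pool (fairness). The function and data shapes are assumptions.

    import hashlib

    def assign_rules(rules, workers):
        """rules: iterable of (rule_id, metric_name); workers: list of worker ids."""
        assignment = dict((w, []) for w in workers)
        for rule_id, metric in rules:
            # Same metric always hashes to the same worker index.
            index = int(hashlib.md5(metric.encode('utf-8')).hexdigest(), 16) % len(workers)
            assignment[workers[index]].append(rule_id)
        return assignment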
Project: ceilometer Series: havana Blueprint: alarm-notifier Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/alarm-notifier Spec URL: None As we develop alerting in Ceilometer, it might be a good idea to provide a simple destination endpoint for alerts to be forwarded as: - events on the oslo RPC bus - emails (SMTP) - SMS - Nagios alerts This should be run by a set of distributed workers running a plugin based on the type of alarm to raise.

Project: ceilometer Series: havana Blueprint: alarm-service-partitioner Design: Drafting Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/ceilometer/+spec/alarm-service-partitioner Spec URL: None We need a mechanism to split and balance the alarm threshold evaluation workload among workers. This should also cover the worker pool management.

Project: ceilometer Series: havana Blueprint: alarming-logical-combination Design: Drafting Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/ceilometer/+spec/alarming-logical-combination Spec URL: None A mechanism to combine the states of multiple basic alarms into overarching meta-alarms could be useful in reducing noise from detailed monitoring. We would need to determine: * whether the meta-alarm threshold evaluation should be based on notification from basic alarms, or on re-evaluation of the underlying conditions * what complexity of logical combination we should support (number of basic alarms; &&, ||, !, subset-of, etc.) * whether an extended concept of simultaneity is required to handle lags in state changes

Project: neutron Series: havana Blueprint: api-core-for-services Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/neutron/+spec/api-core-for-services Spec URL: None This blueprint stems from the discussion at the summit around "multiple services, multiple core APIs". The final goal is to remove the loading of essential extensions for services other than L2 + IPAM from the Extension Manager and move it into core. The Load Balancing API extension, as well as the L3 extension, will be moved into 'core' as part of this change. The API router will also need to be extended to leverage the concept of multiple plugins (this is already somewhat done by the Quantum Manager and the Extension Manager). In theory the mapping is 1:N, where N is the number of 'services'. This means one could either have a distinct plugin per service or the same plugin for multiple services. As part of this work we will provide a plugin-independent API (just like /extensions) whose aim is to list the services currently enabled. It should also be possible to query extensions by service. We expect the bulk of the work to happen inside the api.v2 package and in particular router.py. What is *not* part of this blueprint: 1) When distinct plugins are used, wiring of advanced services onto the base service is outside the scope of this blueprint. For instance, knowing how a 'router in a VM' plugin will attach its interface into the logical switches implemented by the OVS plugin is not something that will be addressed here. The current stance is that it is the service plugin itself which should be aware of the underlying plugin and configure wiring at the data plane accordingly. 2) Similarly, this blueprint does not address the problem of compatibility among plugins. 3) Some services (like LB) might have a model with multiple drivers, each one having specific capabilities.
This blueprint does not address the problem of how these capabilities should be retrieved; this might ultimately be achieved by allowing extensions to be queried by service_type, but since this concept is being revisited for Havana, it is better to keep this out of this blueprint. Full spec to follow.

Project: ceilometer Series: havana Blueprint: api-group-by Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/ceilometer/+spec/api-group-by Spec URL: http://wiki.openstack.org/Ceilometer/blueprints/api-group-by The API needs to provide some sort of GROUP BY operation to solve certain query types.

Project: ceilometer Series: havana Blueprint: api-limit Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/api-limit Spec URL: None The API needs to allow retrieving only the (n) latest samples of a meter by specifying something like '&limit=(n)'.

Project: ceilometer Series: havana Blueprint: api-sample-sorted Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/api-sample-sorted Spec URL: None We must ensure that samples are sorted by timestamp in the API.

Project: ceilometer Series: havana Blueprint: api-v2-improvement Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/ceilometer/+spec/api-v2-improvement Spec URL: https://wiki.openstack.org/wiki/Ceilometer/blueprints/api-v2-improvements The API needs to evolve in order to answer more advanced questions from billing engines such as: - Give me the maximum usage of a resource that lasted more than 1h - Give me the use of a resource over a period of time, listing changes by increment of X volume over a period of Y time - Provide additional statistical functions (deviation, median, variation, distribution, slope, etc.) which could be given as multiple results for a given data set collection - OR operator in filters

Project: glance Series: havana Blueprint: api-v2-property-protection Design: Approved Lifecycle: Started Impl: Slow progress Link: https://blueprints.launchpad.net/glance/+spec/api-v2-property-protection Spec URL: None We need to be able to authorize specific groups of users to create, update, and read different properties of arbitrary entities.

Project: horizon Series: havana Blueprint: api-version-switching Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/api-version-switching Spec URL: None For the Keystone v3 API in particular, we need the ability to use different versions depending on the service endpoint we're talking to. As such we need a mechanism for abstracting out the differences and triggering different versions based on known criteria. Eventually this will tie into version discovery via the API itself.

Project: neutron Series: havana Blueprint: arista-ml2-mechanism-driver Design: Review Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/arista-ml2-mechanism-driver Spec URL: https://docs.google.com/document/d/1efFprzY69h-vaikRE8hoGQuLzOzVNtyVZZLa2GHbXLI/edit?usp=sharing This blueprint specifies Arista’s modular L2 mechanism driver to automate the management of virtual networks along with physical networks using Arista hardware devices (spine and leaf switches). This driver uses the ML2 Mechanism Driver API to interface with the Quantum ML2 plugin. Support for the Arista modular L2 mechanism driver is still work in progress.
An initial version of the code will be submitted soon. For details, see the specifications below.

Project: nova Series: havana Blueprint: async-network-alloc Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/async-network-alloc Spec URL: None Setting up networks and allocating IP addresses has the potential to take an undesirable amount of time, blocking the build of a new instance. We can parallelize some of the work by querying for this information *while* a new instance is being provisioned in the virt driver.

Project: heat Series: havana Blueprint: attributes-schema Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/attributes-schema Spec URL: None From: http://lists.openstack.org/pipermail/openstack-dev/2013-April/007989.html Introduce a schema for attributes (i.e. allowed arguments to Fn::GetAttr).

Project: heat Series: havana Blueprint: auth-token-only Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/auth-token-only Spec URL: None To flush out any remaining issues, I propose the following: - by default, python-heatclient should acquire an auth token and only pass the token, not the password (able to be overridden) - in heat/engine/clients.py, use auth_token if both auth_token and password are specified This should bring some performance improvement by reducing keystone token generation round-trips. Upstream fixes should be provided for any OpenStack clients which still don't handle this case properly.

Project: keystone Series: havana Blueprint: authenticate-role-rationalization Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/authenticate-role-rationalization Spec URL: None The auth/token controllers have different strategies for obtaining the list of user project/domain roles at authentication time - with varied use of the optional project id available in the identity driver authenticate call. Only the v2 authenticate_local uses this feature; the others (external and token) and all of v3 read the roles for the project after authenticating with just the user details. Further, the v2 code builds the role lists by hand (allowing for groups), while the v3 version calls "get_user_roles_for_project" (or "domain"), which does that for you. This mismatch is not only bad for maintenance, but is also wrong in some cases (e.g. if the ONLY role you had on a project was by nature of group membership, authenticating locally would fail). We should rationalize this - and always just authenticate the user and then call "get_user_roles_for_project" (or "domain") within the controller.

Project: neutron Series: havana Blueprint: auto-diassociate-floating-ip Design: Pending Approval Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/neutron/+spec/auto-diassociate-floating-ip Spec URL: None When I delete a VM instance, it should automatically disassociate the floating IP from the external network. This would map to an Amazon VPC use case as well.

Project: nova Series: havana Blueprint: auto-disk-config-disabled Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/auto-disk-config-disabled Spec URL: None It is not possible to resize the disk of all images, but at the moment users can always request that nova resize an image.
The proposal is to allow a new setting for auto_disk_config: * auto_disk_config="Disabled" When disabled, an API request is considered invalid if you request DiskConfig:AUTO for an image that has auto_disk_config=Disabled. This has the side effect that people can see which images can be requested with DiskConfig:AUTO, and which will always have to be DiskConfig:MANUAL, by looking at the image properties.

Project: nova Series: havana Blueprint: backportable-db-migrations Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/backportable-db-migrations Spec URL: None In order to be able to backport db migrations we need a few blank migrations in place during each release.

Project: neutron Series: havana Blueprint: bandwidth-router-label Design: Pending Approval Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/bandwidth-router-label Spec URL: https://wiki.openstack.org/wiki/Neutron/Metering/Bandwidth We need to distinguish network traffic based on its sources and destinations and to classify and tag it with different labels. Typically this is needed to be able to bill only certain traffic and ignore traffic to other destinations.

Project: neutron Series: havana Blueprint: bandwidth-router-measurement Design: Pending Approval Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/bandwidth-router-measurement Spec URL: None The Ceilometer project would like to get meters from Quantum via the notifications system on how much bandwidth the projects are using.

Project: nova Series: havana Blueprint: baremetal-force-node Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/baremetal-force-node Spec URL: None With the baremetal driver, force_hosts is not sufficient since the compute host is actually controlling other physical nodes. We need a way to force which baremetal node is provisioned. The current implementation of force_hosts is done by passing --availability_zone az:host. It seems appropriate to extend this to also support az:host:node. This will require a small change in nova/compute/api.py: _handle_availability_zone and _validate_and_provision_instance to parse host:node. It will probably require much larger changes in the host_manager, filters, and elsewhere to handle the inclusion of nodes. However, this approach requires that the operator knows which baremetal compute host controls the desired baremetal node, and I would like to avoid that. In order to do that, nodes need to be identified by uuid rather than host:number.

Project: nova Series: havana Blueprint: baremetal-havana Design: Obsolete Lifecycle: Complete Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/baremetal-havana Spec URL: None Just an umbrella blueprint for other baremetal-related work.

Project: nova Series: havana Blueprint: base-rpc-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/base-rpc-api Spec URL: None There have been a few times recently where we have wanted to be able to add an rpc method that exists on all services. This blueprint is for implementing that.
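A small sketch of parsing the extended az:host:node syntax proposed in the baremetal-force-node entry above; this illustrates the format only and is not the nova code.

    def parse_availability_zone(value):
        """Split 'az', 'az:host' or 'az:host:node' into its three parts."""
        az, _, rest = value.partition(':')
        host, _, node = rest.partition(':')
        return az or None, host or None, node or None

    # parse_availability_zone('nova:bm-host1:node-07')
    # -> ('nova', 'bm-host1', 'node-07')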
Project: nova Series: havana Blueprint: better-libvirt-network-volume-support Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/better-libvirt-network-volume-support Spec URL: None Currently the libvirt network volume driver only supports a source volume name attribute, but that is too little for some complex RBD or Sheepdog configurations, and not enough for supporting NBD at all, as the libvirt NBD support requires source hostname and port attributes.

Project: cinder Series: havana Blueprint: block-device-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/block-device-driver Spec URL: https://wiki.openstack.org/wiki/BlockDeviceDriver This blueprint proposes the BlockDeviceDriver. This driver is based on VolumeDriver and ISCSIDriver; it has the ability to create volumes on plain block devices. It also contains basic capabilities to attach/detach volumes, copy an image to a volume, copy a volume to an image, and clone a volume.

Project: neutron Series: havana Blueprint: bsn-router-rules Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/bsn-router-rules Spec URL: None The Big Switch controller's Virtual Router implementation supports "routing rules" which are of the form: This extension aims to expose this abstraction via the Big Switch Quantum plugin. These rules are applied at the router level, allowing tenants to control communication between networks at a high level without requiring security policies (e.g. prevent servers in a publicly accessible subnet from communicating with database servers). This extension does not have any relation to the extraroute extension; it controls a fundamentally different aspect of the network traffic. The extraroute extension is for adding routes to a routing table for the router to use to make forwarding decisions. The routing_rules extension is used to apply stateless ACLs to the router to control traffic flow between subnets before the routing table is reached. This is being submitted as a vendor-specific extension due to the presence of the 'nexthops' attribute. It can be used to specify the interfaces used to handle traffic from clients in order to prevent hair-pinning and other network inefficiencies. In other words, it is a next hop for the traffic as it leaves the client, not the next hop once it reaches the router.

Project: heat Series: havana Blueprint: build-heat-graph Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/build-heat-graph Spec URL: None Currently the Heat API only returns resources that have started the "create" process. In order to display all resources that are going to exist in the Resource Topology UI, Heat would need to return a full relationship tree with all resources listed.

Project: neutron Series: havana Blueprint: bulk-api-cisco-plugin Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/bulk-api-cisco-plugin Spec URL: None Calls to create_bulk_network, create_bulk_port, and create_bulk_subnet need to be supported in the Cisco plugin.
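A hedged sketch of the kind of libvirt network-disk XML the better-libvirt-network-volume-support entry above is concerned with: besides the source name, NBD (and some RBD/Sheepdog setups) needs a host name and port on the source element. The helper and all values here are placeholders for illustration.

    # Illustrative template; attribute values are placeholders, not defaults.
    DISK_XML_TEMPLATE = (
        "<disk type='network' device='disk'>"
        "  <driver name='qemu' type='raw'/>"
        "  <source protocol='%(protocol)s' name='%(name)s'>"
        "    <host name='%(host)s' port='%(port)s'/>"
        "  </source>"
        "  <target dev='%(dev)s' bus='virtio'/>"
        "</disk>")

    def network_disk_xml(protocol, name, host, port, dev='vdb'):
        return DISK_XML_TEMPLATE % {'protocol': protocol, 'name': name,
                                    'host': host, 'port': port, 'dev': dev}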
Project: oslo Series: havana Blueprint: cache-backend-abstraction Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/oslo/+spec/cache-backend-abstraction Spec URL: https://github.com/FlaPer87/oslo-incubator/tree/cache/openstack/common/cache Provide an abstract class for managing multiple cache backends that can be used in projects needing a cache system, e.g.: class BaseCache(object): pass class Memcache(BaseCache): pass class Redis(BaseCache): pass def get_cache(...): """Gets from configs the cache backend to use""" It could be imported like: from openstack.common import cache cache.get_cache()

Project: keystone Series: havana Blueprint: cache-token-revocations Design: Superseded Lifecycle: Complete Impl: Slow progress Link: https://blueprints.launchpad.net/keystone/+spec/cache-token-revocations Spec URL: None The token revocation list is one of the most accessed resources in keystone. Current implementations are too slow. 1. Store the token revocation list in a cache (most likely memcache) 2. Flush it on each revocation event and recreate it 3. Lazily create it the first time it is accessed.

Project: keystone Series: havana Blueprint: caching-layer-for-driver-calls Design: Drafting Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/keystone/+spec/caching-layer-for-driver-calls Spec URL: None There should be a configurable caching layer that lives above the various driver calls within Keystone. This should be configurable (using something akin to dogpile.cache) to allow for varying caching backends (in-memory, file-based, redis, memcache, etc.) to alleviate load on the backend storage/mechanism for the differing aspects of keystone. This system will encompass the feature(s) of https://blueprints.launchpad.net/keystone/+spec/cache-token-revocations and extend to also cover Identity, Token Issuance, Trusts, etc. In the absence of explicit configuration of a caching back-end, the behavior will remain the same as current, with no cache. Key things to keep in mind are: cache invalidation, and ensuring the configuration for keystone is extended to expose a reasonable feature set of the caching layer. It will be on the developer to ensure that the decorator is added to new methods (as needed) and that invalidation is handled in a consistent manner. The cache interface within keystone will provide a way to be consistent with the implementation of the cache across the different drivers and subsystems within keystone. The initial discussions via IRC were to provide access to a series of decorators that will decorate the various driver calls, allowing for caching in-line (via dogpile.cache/memoization) based upon the arguments to the various methods/functions. An example implementation is outlined: https://review.openstack.org/#/c/38866/ In conjunction with this caching layer, normalization of the use of kwargs when calling manager methods needs to be completed. The main issue with kwargs is that dogpile.cache only supports the use of args (not kwargs) with the default key generator. If we provide a new generator that can support kwargs, there is a lot more work to ensure that the invalidate() calls (when required) will produce expected results, since the arguments will need to generate the same key in the cache.
The extra level of introspection of the actual function needed to ensure consistency with kwargs support seems like it could have adverse side effects and a performance impact, and could possibly make the code much harder to maintain/follow. The kwargs change has the added benefit of making the drivers/managers where kwargs are used more consistent with the rest of keystone (in some cases, notably token drivers/providers, there are extensive uses of kwargs to methods).

Project: nova Series: havana Blueprint: cancel-an-ongoing-image Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/cancel-an-ongoing-image Spec URL: None During an instance snapshot, the instance can get stuck in a particular task state; the reasons could be image uploading taking a long time, a restart of compute, etc. This leaves customers unable to perform any further action on the instance. This feature would expose an API extension to cleanly cancel an ongoing snapshot.

Project: keystone Series: havana Blueprint: catalog-optional Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/catalog-optional Spec URL: None A service catalog is automatically included in a create token response (POST /v3/auth/tokens). While this behavior should not change, we should provide a method for clients to opt out of the catalog being included so that PKI tokens will be significantly smaller in a more complex deployment. A create token request that opts out of the catalog could simply be accomplished via a query string: POST /v3/auth/tokens?nocatalog (out of scope for this bp...) The catalog should also become independently available on a new endpoint, such as: GET /v3/catalog Related Havana summit etherpad: https://etherpad.openstack.org/havana-endpoint-filtering

Project: horizon Series: havana Blueprint: ceilometer Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/horizon/+spec/ceilometer Spec URL: None The primary place a "stand-alone" view into ceilometer would be useful is on the overview pages (project and admin overview), where we can compute/show the "top consumers" ordered by CPU, disk, network, etc. activity.

Project: ceilometer Series: havana Blueprint: ceilometer-api-extensions Design: Approved Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/ceilometer/+spec/ceilometer-api-extensions Spec URL: https://wiki.openstack.org/wiki/Ceilometer/blueprints/Ceilometer-api-extensions Some proposed extensions to the Ceilometer API along with some supporting rationale.

Project: ceilometer Series: havana Blueprint: ceilometer-quantum-bw-metering Design: Approved Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/ceilometer/+spec/ceilometer-quantum-bw-metering Spec URL: None Meter the bandwidth used by instances, projects, etc., using Quantum.

Project: nova Series: havana Blueprint: cells-cinder-support Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/cells-cinder-support Spec URL: None Allow compute cells to work with a global cinder installation.

Project: nova Series: havana Blueprint: cells-filter-scheduler Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/cells-filter-scheduler Spec URL: None Add filter scheduler capabilities to the cells scheduler.
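An illustrative use of dogpile.cache in the spirit of the caching-layer-for-driver-calls entry above: a region configured from settings, a decorator memoizing a driver call on its positional arguments, and an explicit invalidation after a write. The function names are examples, not Keystone's actual decorators.

    from dogpile.cache import make_region

    # The backend would normally come from keystone configuration; use the
    # in-memory backend here for the sketch.
    region = make_region().configure('dogpile.cache.memory')

    @region.cache_on_arguments()
    def get_user(user_id):
        # stand-in for a real identity driver call
        return {'id': user_id}

    def update_user(user_id, values):
        # after a write, the cached entry must be invalidated explicitly
        get_user.invalidate(user_id)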
Project: nova Series: havana Blueprint: cells-live-migration Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/cells-live-migration Spec URL: None Add live migration support to nova cells. Specifically, support it within a single compute cell, not between cells.

Project: oslo Series: havana Blueprint: cfg-lowercase-groups Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/cfg-lowercase-groups Spec URL: None Nova uses lowercase group names, e.g. [baremetal] [cells] [conductor] ... whereas Quantum uses lots of uppercase group names: [AGENT] [DATABASE] [DRIVER] ... *sigh* This thread brought the issue up: http://lists.openstack.org/pipermail/openstack-dev/2013-March/thread.html#6309 One option is to make group names case insensitive but, even then, we probably also want the sample configuration files for all projects to consistently use lower or upper case. IOW, we should move Quantum to use lowercase group names :) Here's a proposal: - log a deprecation warning if a project registers a group name that is not all lowercase before normalizing it to lowercase - in ConfigOpts getattr/getitem, just normalize the group name to lowercase and log a deprecation warning if it's a valid group - change Quantum's code to register and reference the groups in lower case - change Quantum's sample config files to use lowercase section names That way, Havana will be using all-lowercase but Grizzly config files will still work without any warnings being printed. Other compat issues to think about: - if you're using Grizzly and you update to a newer version of oslo.config, you'll get a bunch of new warnings - if you're using CLI options with a group, it'll break - e.g. if you registered a 'FOO' group and a 'bar' CLI option, then the CLI argument would have been --FOO-bar. I don't know of anyone doing that and I think it's highly unlikely anyone would have done it. I think we should continue using [DEFAULT] as the default section since that's what ConfigParser does. --- In the end, the plan became ... The basic goal here is that we want to support the legacy use of group names like DATABASE in config files. That means: 1) if the API user registers a group called 'database', then we want to support reading values for the group from both database and DATABASE. This is purely about support for legacy group names in config files. 2) when the API user registers a group called 'database', then CONF.database should be how to reference it. When they register a group called 'DATABASE', then CONF.DATABASE should be how to reference it. This ensures we're not making an incompatible API change - i.e. quantum's code will continue working and they only need to do s/CONF.DATABASE/CONF.database/ when they switch to oslo's DB code. 3) the behaviour of [DEFAULT] should be unchanged so that users don't start switching to [default] and breaking existing tools. 4) a deprecation warning to users/admins (i.e. a "you should fix your config file" warning) could *perhaps* be useful ...
but it would need to be a one-time warning at startup per file, not a warning every time a config value is accessed. We can live without this, though - we're never going to remove these case normalization semantics, so we don't need to be too active about getting users to switch to lowercase groups.

Project: oslo Series: havana Blueprint: cfg-reload-config-files Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/cfg-reload-config-files Spec URL: None It's reasonable for services to want to reload their configuration files on e.g. SIGHUP. In order to be able to do so, we need a method in oslo.config that will cause a ConfigOpts instance to reload all configuration files.

Project: horizon Series: havana Blueprint: change-user-passwords Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/change-user-passwords Spec URL: None Users in horizon should be able to change at least their own passwords.

Project: cinder Series: havana Blueprint: cinder-backup-to-ceph Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/cinder-backup-to-ceph Spec URL: None Cinder's volume backup manager currently supports backing up volumes to Swift. Some users may be using an alternative object store such as Ceph and may want to use that for volume backups. This blueprint proposes to add support to the Cinder backup manager for backing up volumes to Ceph. Ensuring that the existing Swift backup service is compatible with the Ceph RADOS Gateway will be dealt with separately from this blueprint. 'Backing up volumes to Ceph' can mean a few different things depending on how it is used, so we will aim to support the following: 1. Backing up of volumes from any driver to a Ceph object store 2. Backing up of volumes where Ceph is used as the Cinder store backend (RBD). This will be divided into two as follows: (a) backup of volumes within the same cluster using a separate backup pool (b) backup of volumes between Ceph clusters The aim is to have (1) and (2) provided by the same Ceph backup service, thus making it compatible with all existing volume drivers and providing extra features to the RBD driver.

Project: cinder Series: havana Blueprint: cinder-fc-zone-manager Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/cinder/+spec/cinder-fc-zone-manager Spec URL: None Fibre Channel block storage support was added in the Grizzly release, but there is no support for automated SAN zoning (FC SANs are either pre-zoned or open-zoned). Pre-zoning introduces management complexity in cloud orchestration, and using no zoning is the least desirable zoning option because it allows devices to have unrestricted access on the fabric and causes RSCN storms. The purpose of this blueprint is to add support for automated FC SAN zone/access control management in Cinder for FC volumes. The proposed FibreChannelZoneManager automates zone lifecycle management (using the zone driver) by integrating the necessary API hooks with the volume manager's attach/detach entry points (for FC volumes when fabric zoning is enabled). Simplified zone management (viz. add, update, remove, and read/get zone operations) is intended to not require zoning administration and acts on the currently active zone set. The Cinder FC Zone Manager etherpad https://etherpad.openstack.org/summit-havana-cinder-fc-zone-manager captures the requirements, use cases, and proposal.
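A hedged sketch of the behaviour the cfg-reload-config-files entry above asks for: a service re-reading its configuration files on SIGHUP. The reload_config_files() call is the method the blueprint proposes, so treat it as an assumption here rather than an existing oslo.config API.

    import signal

    from oslo.config import cfg

    CONF = cfg.CONF

    def _sighup_handler(signum, frame):
        # Proposed oslo.config method per the blueprint (assumption).
        CONF.reload_config_files()

    # Re-read all configuration files whenever the service receives SIGHUP.
    signal.signal(signal.SIGHUP, _sighup_handler)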
Project: cinder Series: havana Blueprint: cinder-nfs-driver-qos Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/cinder/+spec/cinder-nfs-driver-qos Spec URL: None In the nfs_shares_config file, in the definition of the NFS share, a tag could define the QoS capability of the NFS share, like: fast, normal, slow. The user should then be able to select a QoS capability with a volume metadata key such as "qos" or "nfs_qos".

Project: cinder Series: havana Blueprint: cinder-refactor-attach Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/cinder-refactor-attach Spec URL: None Problem: Cinder now has code copied and pasted from Nova's libvirt driver to attach a volume to the cinder node. This is done so cinder can copy volume contents into an image and then upload the image into glance. This currently only works for iSCSI, but we need the same capability for Fibre Channel. It is also needed for other non-iSCSI drivers.

Project: cinder Series: havana Blueprint: cinder-state-machine Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/cinder-state-machine Spec URL: None Cinder needs a state machine to better keep track of events and action transitions. This will allow cinder to start to create a process for a safe shutdown, as well as restarts after unsafe/killed shutdowns. This is related to the Safe Shutdown etherpad from the Havana dev session: https://etherpad.openstack.org/Summit-Havana-Cinder-Safe-Shutdown

Project: nova Series: havana Blueprint: cinder-volume-options Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/cinder-volume-options Spec URL: None The aim of this blueprint is to enhance the way NFS and GlusterFS volumes are handled. Currently, it is possible to set mount options for both Nova and Cinder on a per-node basis in the service config files. This blueprint extends this idea to be able to handle options for each export to be mounted, set in the Cinder configuration and passed to Nova at attach time. This is a useful change for GlusterFS because the backupvolfile-server option enables a mount to succeed even if the first server specified is offline. It also may enable more specific performance tuning or deployment options for NFS/GlusterFS volumes. The configuration on the Cinder side is done on a per-export basis in the nfs_shares_config or glusterfs_shares_config file, i.e.: 192.168.175.166:/testvol -o backupvolfile-server=192.168.175.177 host1:/testvol2 -o backupvolfile-server=host2,rw,other_option This information will then be passed to Nova in the "data" field returned from initialize_connection() when an attach is performed. Nova review: https://review.openstack.org/#/c/29325/ Cinder review: https://review.openstack.org/#/c/29323/

Project: neutron Series: havana Blueprint: cisco-n1k-neutron-client Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/cisco-n1k-neutron-client Spec URL: None This change adds support for the following client attributes: credentials, network profiles, policy profiles. These are in support of the Nexus 1000V Cisco plugin work.
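A minimal sketch of parsing the per-export "<export> -o <options>" lines shown in the cinder-volume-options entry above; this illustrates the format only and is not the driver's parser.

    def parse_share_line(line):
        """Return (export, mount_options) for one shares-config line."""
        line = line.strip()
        if not line or line.startswith('#'):
            return None
        parts = line.split(' -o ', 1)
        export = parts[0].strip()
        options = parts[1].strip() if len(parts) > 1 else None
        return export, options

    # parse_share_line('host1:/testvol2 -o backupvolfile-server=host2,rw')
    # -> ('host1:/testvol2', 'backupvolfile-server=host2,rw')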
Project: neutron Series: havana Blueprint: cisco-plugin-exception-handling Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/cisco-plugin-exception-handling Spec URL: None Scope: Enhancements to the Cisco plugin to improve exception handling. Use Cases: Quantum with the Cisco plugin. Implementation Overview: The Cisco plugin exception handling needs to be more robust. Data Model Changes: n/a Configuration variables: n/a API's: n/a Plugin Interface: n/a Required Plugin support: n/a Dependencies: n/a CLI Requirements: n/a Horizon Requirements: n/a Usage Example: n/a Test Cases: n/a

Project: neutron Series: havana Blueprint: cisco-plugin-n1k-enh-vxlan-support Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/neutron/+spec/cisco-plugin-n1k-enh-vxlan-support Spec URL: None Scope: Adding enhanced VXLAN as a sub-type to the VXLAN network profiles. Use Cases: Quantum with the Cisco plugin. Implementation Overview: Rename the VXLAN type of network profiles to Overlay network profiles. Add a new sub-type column to Overlay network profiles. Support enhanced VXLAN and native VXLAN as Overlay sub-types. Allow the plugin to be flexible enough to support newer sub-types.

Project: neutron Series: havana Blueprint: cisco-plugin-n1k-support Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/cisco-plugin-n1k-support Spec URL: None This blueprint tracks the addition of Nexus 1000V support into the Cisco plugin. The Nexus 1000V will allow for the creation of virtual networks on KVM-based hypervisors.

Project: neutron Series: havana Blueprint: cisco-plugin-svi Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/cisco-plugin-svi Spec URL: None Scope: Adding switched virtual interface support to the Cisco plugin. Use Cases: Quantum with the Cisco plugin. Implementation Overview: Adding support for SVI (switched virtual interface) to the Cisco Nexus plugin to realize VLAN gateways on hardware Nexus switches. Data Model Changes: n/a Configuration variables: n/a API's: n/a Plugin Interface: n/a Required Plugin support: n/a Dependencies: n/a CLI Requirements: n/a Horizon Requirements: n/a Usage Example: n/a Test Cases: n/a

Project: neutron Series: havana Blueprint: cisco-plugin-vpc-support Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/cisco-plugin-vpc-support Spec URL: None Scope: Adding Virtual Port Channel support to the Cisco Nexus plugin. Use Cases: Quantum with the Cisco plugin. Implementation Overview: Support will be added for multihomed hosts connected to more than one ToR switch. Configuration files will be slightly modified to specify all switch interfaces that a host is connected to, and trunking configuration will be performed on all the switches specified. The switches themselves are assumed to be connected manually with a vPC channel. Data Model Changes: n/a Configuration variables: A host can show up as connected to multiple switches.
API's: n/a Plugin Interface: n/a Required Plugin support: n/a Dependencies: n/a CLI Requirements: n/a Horizon Requirements: n/a Usage Example: n/a Test Cases: n/a

Project: neutron Series: havana Blueprint: cisco-single-config Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/cisco-single-config Spec URL: None Scope: Unification of all the various plugin configuration files for the Cisco plugin into a single file. Use Cases: Quantum with the Cisco plugin. Implementation Overview: All the config values contained in the various files in etc/quantum/plugins/cisco will be unified into a single file, etc/quantum/plugins/cisco/cisco_plugin.ini. The plugin needs to be modified to read from a single file instead of multiple files. Data Model Changes: n/a Configuration variables: No config variables change; they are only merged. API's: n/a Plugin Interface: n/a Required Plugin support: n/a Dependencies: n/a CLI Requirements: n/a Horizon Requirements: n/a Usage Example: n/a Test Cases: n/a

Project: cinder Series: havana Blueprint: clone-image-imageid Design: New Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/cinder/+spec/clone-image-imageid Spec URL: None The clone_image method in VolumeDriver has volume and image_location as input parameters. This method also needs image_id as one of its inputs so that clone_image implementations can use it in scenarios where it is necessary to introspect internal stores or a catalogue for a particular image before performing an efficient clone. Hence image_id needs to be added as an additional parameter to the clone_image method.

Project: cinder Series: havana Blueprint: cloudbyte-elastistor-iscsi-driver Design: New Lifecycle: Not started Impl: Unknown Link: https://blueprints.launchpad.net/cinder/+spec/cloudbyte-elastistor-iscsi-driver Spec URL: None This proposal is to introduce new volume driver support for CloudByte ElastiStor. The ElastiStor driver (ElastistoreISCSIDriver) will extend SanISCSIDriver. This driver will implement the following functionality: 1. Volume create/delete 2. Snapshot create/delete 3. Get volume stats The driver uses Python's http library to communicate with the ElastiStor server. We already have the driver code implemented for Grizzly; we need to migrate the code to Havana. CloudByte ElastiStor is a full-featured software-defined storage QoS solution, purpose-built for the cloud and virtualized environments. Software-only ElastiStor makes storage predictable, affordable, and easy, even as you scale to thousands of applications. ElastiStor lets you custom-build storage infrastructure based on your requirements, with support for SATA, SAS, and SSD hardware as well as NFS, CIFS, FC, and iSCSI protocols.

Project: heat Series: havana Blueprint: cloudwatch-update-stack Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/cloudwatch-update-stack Spec URL: None Currently, updating the CloudWatch::Alarm resource type results in replacement of the resource. We should implement UpdateStack support such that properties for this resource can be updated without interruption to the stack.
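A sketch of the driver-interface change described in the clone-image-imageid entry above: clone_image gains an image_id argument so a driver can check its own store or catalogue before attempting an efficient clone. The class, helper names and return shape are illustrative assumptions, not the Cinder code.

    class ExampleVolumeDriver(object):
        def clone_image(self, volume, image_location, image_id):
            # With image_id available, the driver can introspect its own
            # catalogue before deciding whether an efficient clone is possible.
            if image_id and self._have_local_copy(image_id):
                return self._clone_from_local_copy(volume, image_id), True
            return None, False  # fall back to the normal image download path

        def _have_local_copy(self, image_id):
            return False  # placeholder

        def _clone_from_local_copy(self, volume, image_id):
            return {'provider_location': None}  # placeholder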
Project: nova Series: havana Blueprint: cold-migrations-to-conductor Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/nova/+spec/cold-migrations-to-conductor Spec URL: None Regular resize/migrate functions should move to conductor to set out the path for the other migrate-related functions and how they will co-exist, and also to define how conductor will drive these processes without state in the compute nodes.

Project: ceilometer Series: havana Blueprint: collector-stores-events Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/collector-stores-events Spec URL: None We want to start storing the raw events in the Event database tables. Not everyone will want to do this (yet), so it should be optional.

Project: oslo Series: havana Blueprint: common-cli-utils Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/oslo/+spec/common-cli-utils Spec URL: None The nova, keystone, cinder, and glance clients share a lot of common code for their CLI tools. We can move it to oslo. See also: https://blueprints.launchpad.net/oslo/+spec/common-client-library https://blueprints.launchpad.net/oslo/+spec/oslo-cliutils

Project: oslo Series: havana Blueprint: common-client-library Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/common-client-library Spec URL: None Unfortunately, the nova, keystone, cinder, and glance clients are very inconsistent. A lot of code was copied between all these clients instead of being moved to a common library. The code was edited without synchronization between clients, so they have different behaviour: * keystoneclient reissues the authorization request if the token is expired but glanceclient doesn't; * novaclient supports authorization plugins while the others don't; * all client constructors use different parameters (api_key in nova or password in keystone and so on); * keystoneclient authenticates immediately in __init__, while novaclient does it lazily during the first method call; * keystone- and novaclient can manage service catalogs and accept keystone's auth URI while glanceclient allows endpoints only; * keystoneclient can support authorization with an unscoped token but novaclient doesn't; * novaclient uses class composition while keystoneclient uses inheritance: composition allows sharing a common token and service catalog between several clients. It is worth noting that glanceclient still uses httplib instead of the more convenient python-requests. There is python-openstackclient, and it is an awesome tool, but it is a console client, not an API client library. The basic library can be used by python-{nova,keystone,glance}client, which are in turn used by python-openstackclient, horizon, etc. A sample implementation of the basic library was written in June 2012 and is accessible at https://github.com/aababilov/python-openstackclient-base.
Here is an example of how to use the library from oslo-incubator: from openstack.common.apiclient.client import HttpClient http_client = HttpClient(username="...", password="...", tenant_name="...", auth_uri="...") from novaclient.v1_1.client import Client print Client(http_client).servers.list() from keystoneclient.v2_0.client import Client print Client(http_client).tenants.list() This blueprint is a reorganization of the older blueprints https://blueprints.launchpad.net/nova/+spec/basic-client-library and https://blueprints.launchpad.net/openstack-common/+spec/common-http-client

Project: oslo Series: havana Blueprint: common-quota Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/common-quota Spec URL: None There is duplicated quota code in nova and cinder. It can be imported into oslo and used in any other projects. The duplicated code is: (nova|cinder).quota (nova|cinder).exception.*Quota* (nova|cinder).db.sqlalchemy.api.quota_* (nova|cinder).db.sqlalchemy.api.reservation_* To start with, we can easily move the *QuotaDriver, *Resource and QuotaEngine classes to oslo.

Project: oslo Series: havana Blueprint: common-unit-tests Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/common-unit-tests Spec URL: None All projects have unit tests and their own wrappers around testtools, unittest or something else. We should move one of these wrappers to oslo and use it in all projects.

Project: nova Series: havana Blueprint: compute-api-objects Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/nova/+spec/compute-api-objects Spec URL: None An important step in the conversion to the unified-object-model is making compute/api.py::API (able to) return objects instead of DB models or dict representations thereof. This will help drive the conversion to objects for nova-api and nova-scheduler, and enable more of nova-compute to do the same. Implementation of this blueprint will be complete when compute/api.py::API is using objects instead of db query models, and is returning those objects (when the caller wants them) through its public APIs.

Project: heat Series: havana Blueprint: concurrent-resource-scheduling Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/concurrent-resource-scheduling Spec URL: None https://etherpad.openstack.org/heat-concurrent-resource-scheduling

Project: glance Series: havana Blueprint: configurable-formats Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/glance/+spec/configurable-formats Spec URL: None Glance currently supports a specific set of container and disk formats. These sets of formats are rarely the actual set of formats supported by any given deployment. The set of acceptable container and disk formats should be configurable by the deployer of Glance.

Project: neutron Series: havana Blueprint: configurable-ip-allocation Design: New Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/neutron/+spec/configurable-ip-allocation Spec URL: None Currently we have only one IP allocation algorithm. We should be able to allow users to provide a more powerful way to allocate IPs.
Design summit etherpad: https://etherpad.openstack.org/grizzly-quantum-ipallocation

Project: ceilometer Series: havana Blueprint: convert-to-alembic Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/convert-to-alembic Spec URL: None Please read the similar blueprint in nova: https://blueprints.launchpad.net/nova/+spec/convert-to-alembic In addition: 1) SQLAlchemy-migrate has several bugs related to the SQLite backend. The project is no longer maintained and doesn't accept bugfixes, so we have to use monkey patching and other workarounds to solve them in other projects. 2) It has compatibility issues with sqlalchemy >0.8.x (solved by a separate patch). Ceilometer is currently relatively small, so it should be easier to convert it to the Alembic migration engine. Such a conversion will provide a good reference point for other projects that will soon follow.

Project: cinder Series: havana Blueprint: coraid-driver-refactoring-for-havana Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/coraid-driver-refactoring-for-havana Spec URL: None The current Coraid driver looks a bit messy and doesn't implement functionality that is required for Havana drivers. This blueprint aims at refactoring the code and implementing: - Copy Volume To Image - Copy Image To Volume - Clone Volume

Project: ceilometer Series: havana Blueprint: count-api-requests Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/ceilometer/+spec/count-api-requests Spec URL: None Ceilometer should be able to count the number of API requests per type (GET, POST, PUT, DELETE, …) and URL for each API endpoint (nova, cinder, keystone, swift, quantum, ceilometer, …).

Project: oslo Series: havana Blueprint: create-a-unified-correlation-id Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/create-a-unified-correlation-id Spec URL: None Create a correlation_id middleware to generate a correlation_id to be associated with an API request that crosses OpenStack service boundaries. This will enable more effective debugging of requests that span multiple services. For example, a Nova instance create request may touch other services including Glance and Quantum. A single id will simplify the process of tracking down errors. See related email discussion: https://lists.launchpad.net/openstack/msg13082.html

Project: nova Series: havana Blueprint: create-error-notification Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/create-error-notification Spec URL: None Send a notification during instance creation if an individual build attempt fails. This will aid in tracking the flow of instance creation by external notification systems. Specifically, it will enable tracking of errors when multiple scheduling attempts (re-schedules) are required to build an instance.

Project: cinder Series: havana Blueprint: create-raw-disk-driver-in-cinder Design: Superseded Lifecycle: Complete Impl: Good progress Link: https://blueprints.launchpad.net/cinder/+spec/create-raw-disk-driver-in-cinder Spec URL: None LVM is ideal, but there are folks interested in actually using an entire disk or partition directly for cinder-volumes.
Once we separate out some of the iSCSI concerns and provide some utilities that deal with this model directly, we can implement a local-disk driver that can be used not only for local block storage but can also be exported/used via iSCSI. Project: heat Series: havana Blueprint: createstack-onfailure Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/createstack-onfailure Spec URL: None We need to support the different options here. This is normally the first or second question asked about heat. Asked many times at summit and on IRC. We seem to be implementing DO_NOTHING only. See: http://docs.amazonwebservices.com/AWSCloudFormation/latest/APIReference/API_CreateStack.html OnFailure: DO_NOTHING, ROLLBACK, or DELETE shardy: Related - we need to implement rollback before we can implement the ROLLBACK option, ref #154 - I've looked into this but not got around to implementing it yet - should basically just be a special type of update (to the previous template). I've also had several questions around rollback and "atomic stack launch" functionality recently, so will try to look into this in the next few weeks. Since #154 is already assigned to me, I'll take this issue (unless anyone else wants to do it :) Project: nova Series: havana Blueprint: cross-service-request-id Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/cross-service-request-id Spec URL: None Create a unified request identifier associated with an API request that crosses OpenStack service boundaries. This will enable more effective debugging of requests that span multiple services. For example, a Nova instance create request may touch other services including Glance and Quantum. A single id will simplify the process of tracking down errors. See related email discussion: https://lists.launchpad.net/openstack/msg13082.html Proposal: -The first OpenStack service to touch a request will tag it with an "OpenStack request id". This will effectively be a global identifier. Inter-service requests will contain this ID in an HTTP header, "x-openstack-request-id". This value will be a UUID. -Each service that touches the request will carry along this header value and use it in log messages and external notifications. -Each service may have its own internal request identifier (request_id in Nova and Glance) and should also log this value in conjunction with the global identifier. Question - How to guard against users supplying their own request id value? Check for uniqueness? Only accept request ID headers from certain source IPs? Project: nova Series: havana Blueprint: customer-quota-through-admin-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/customer-quota-through-admin-api Spec URL: None As an admin user, I would like to know how much of a resource a customer has used, when that resource counts against a quota. Customers can currently run "nova absolute-limits" to see what their quotas are and how much of each resource they have used (and, therefore, how much more they can use without having their quotas raised). Through the admin API, we should be able to do "nova absolute-limits --tenant xxxxx", which should return the quota and the used limits for the given tenant.
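As a rough illustration of the scoped admin query described in the customer-quota-through-admin-api entry above, the sketch below fetches another tenant's absolute limits; the endpoint, credentials and tenant id are placeholders, and it assumes a Havana-era python-novaclient whose limits.get() accepts a tenant_id argument.

    from novaclient.v1_1 import client

    # Admin credentials and endpoint are placeholders.
    nova = client.Client("admin", "secret", "admin", "http://keystone:5000/v2.0/")

    # Equivalent in spirit to "nova absolute-limits --tenant <id>".
    limits = nova.limits.get(tenant_id="xxxxx")
    for limit in limits.absolute:
        # Pairs such as maxTotalInstances / totalInstancesUsed show quota vs. usage.
        print("%s = %s" % (limit.name, limit.value))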
Project: horizon Series: havana Blueprint: d3 Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/d3 Spec URL: None We should pull in the latest version of d3, and start by reworking all the quota infographics to use it in a reusable way. From there we can figure out how to start implementing it throughout the dashboard. Project: cinder Series: havana Blueprint: db-api-tests Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/cinder/+spec/db-api-tests Spec URL: None Cover the methods from db.api with tests. Project: nova Series: havana Blueprint: db-api-tests Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/db-api-tests Spec URL: None The goal of this bp is to add missing tests for a lot of methods in nova.db.api. At the least, we should add missing tests for methods that have a session parameter that should be removed in bp db-session-cleanup, for example security_groups. Project: nova Series: havana Blueprint: db-api-tests-fix-context Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/db-api-tests-fix-context Spec URL: None Some methods in nova.db.sqlalchemy.api require admin_context, some require a non-admin context. Almost all tests in nova.tests.test_db_api use admin_context, and there are no tests for the context/admin_context difference. These tests should be implemented. Project: nova Series: havana Blueprint: db-api-tests-on-all-backends Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/db-api-tests-on-all-backends Spec URL: None SQL backends differ in many ways, for example in how they handle casting. SQLite currently lets us store anything in a column (with any type), MySQL will try to convert the value to the required type, and PostgreSQL will raise an IntegrityError. To avoid such nasty errors at the db.api layer we should run test_db_api on all backends. Project: cinder Series: havana Blueprint: db-archiving Design: Approved Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/cinder/+spec/db-archiving Spec URL: None In Nova (Grizzly) we added a new feature, DB archiving. The problem: we don't actually delete rows from the db, we only mark them as deleted (we have a special column for it). So the DB keeps growing, which causes performance problems. The solution: create shadow tables and copy "deleted" rows from the main table to the shadow table.
Steps to implement: 1) sync utils for working with shadow tables (create, check methods) 2) add a db migration that will create all shadow tables 3) write tests for shadowing (for example checking that all shadow tables are up-to-date) 4) add utils to work with shadowing (periodic task, manager method) Project: nova Series: havana Blueprint: db-common-migration-and-utils Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/db-common-migration-and-utils Spec URL: None The code from test_migration and db/utils could be useful for a range of projects. The goals are: 1) clean up test_migration to make it common 2) add code to db/utils that changes the type of deleted columns in tables Project: nova Series: havana Blueprint: db-enforce-unique-keys Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/db-enforce-unique-keys Spec URL: None * Add real unique indexes on (`col`, `deleted`), and when deleting any row, set `deleted`=`id` instead of `deleted`=1. * Handle duplicate key errors in a sane way. * In db/sqlalchemy/api.py, replace occurrences of SELECT + INSERT-or-UPDATE with upserts. In mysql, this is "INSERT ... ON DUPLICATE KEY UPDATE". In postgres, it is a bit more involved. * Add missing unique constraints to models (just to show what UCs a model has) Project: nova Series: havana Blueprint: db-improve-archiving Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/db-improve-archiving Spec URL: None I found a lot of problems with shadow tables. 1) Contributors who don't know about bp db-archiving (made in grizzly) forget to update shadow tables in migrations. Examples: https://review.openstack.org/#/c/26588/ https://review.openstack.org/#/c/24994/ and also one new patch (that is not in code) https://review.openstack.org/#/c/28232/3/nova/db/sqlalchemy/migrate_repo/versions/178_make_user_quotas_key_and_value.py To avoid such errors and improve the situation a bit, we should do 2 things: 1) Add tests checking that every table has a shadow table, and that the columns in a table and its shadow table are equal ✓ 2) Add a generic method in sqlalchemy.utils that creates a shadow_table from a table Project: cinder Series: havana Blueprint: db-migration-tests Design: Approved Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/cinder/+spec/db-migration-tests Spec URL: None In Nova (Grizzly) we added a mechanism to test migrations on all backends with real data. The goal of this bp is to: 1) sync the same mechanism from oslo to cinder (it is not yet in oslo) 2) write tests for all migrations Project: cinder Series: havana Blueprint: db-session-cleanup Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/db-session-cleanup Spec URL: None 1. Use common session handling code already implemented in Oslo (DONE) 2. Don't pass session instances to public DB methods (DONE) 3. Use explicit transactions only when necessary (DONE) 4.
Fix incorrect usage of sessions throughout the DB-related code (DONE) Project: nova Series: havana Blueprint: db-session-cleanup Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/db-session-cleanup Spec URL: None Bring the use of sessions and transactions in nova/db/sqlalchemy/api into a consistent state, based on these goals: * use explicit transactions only when necessary * don't pass session objects to public methods An easy way to find methods which need to be updated is: grep -P 'def [^_].*session=' nova/db/sqlalchemy/api.py Also, annotate methods that pose a risk but cannot be easily addressed right now. Current known risks are: - with_lockmode may deadlock (or at least create a global bottleneck) - duplicate-insert race conditions (due to lack of UNIQUE constraints) Project: nova Series: havana Blueprint: db-slave-handle Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/db-slave-handle Spec URL: None An option for a slave handle would be beneficial for scaling. We could send all reads that aren't sensitive to replication lag to this handle and gain quite a bit of room to scale on our write masters. Besides just having the handle, we also need a nice way for devs to indicate that a query is safe to execute on a slave. We could probably indicate the "safeness" in context. In the db api we can indicate safeness using a decorator. Project: keystone Series: havana Blueprint: db-sync-models-with-migrations Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/db-sync-models-with-migrations Spec URL: None For creation we are using migrations. We don't have any tests that check that our models are up-to-date. So we should: 1) Add indexes and unique constraints in __table_args__ 2) Fix all mistakes in models 3) Fix all mistakes in migrations 4) Sync the effects of migrations across different backends. 5) Add tests that ensure that models are up-to-date. This will allow us to find some mistakes or missing indexes and make the work with the db cleaner. Project: nova Series: havana Blueprint: db-sync-models-with-migrations Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/db-sync-models-with-migrations Spec URL: None We are using declarative_base in nova.db.models just for reflection, not for db creation. For creation we are using migrations. We don't have any tests that check that our models are up-to-date. Also we test only on sqlite, which cannot catch such things as nullable constraints. So we should: 1) Always use an explicit nullable parameter for columns. There are a lot of mistakes in the current nova models implementation (sometimes the schema has nullable=True while the model has nullable=False, and vice versa). Also, when you see a Column description without nullable, you cannot be sure whether the author forgot to set nullable=False or it really is nullable=True. The easiest way to track, fix and avoid all this is to have one rule: "always use explicit nullable". 2) Add indexes and unique constraints in __table_args__ 3) Fix all mistakes in models 4) Fix all mistakes in migrations 5) Sync the effects of migrations across different backends. 6) Add tests that ensure that models are up-to-date. This will allow us to find mistakes or missing indexes and make work with the db cleaner.
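To make rules 1) and 2) of the db-sync-models-with-migrations entries above concrete, a model written to that standard might look like the sketch below; the table and column names are illustrative, not taken from nova's actual schema.

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base

    BASE = declarative_base()

    class KeyPair(BASE):
        """Illustrative model: explicit nullable everywhere, constraints in __table_args__."""
        __tablename__ = 'key_pairs'
        __table_args__ = (
            sa.UniqueConstraint('user_id', 'name', 'deleted',
                                name='uniq_key_pairs0user_id0name0deleted'),
            sa.Index('key_pairs_user_id_idx', 'user_id'),
        )
        id = sa.Column(sa.Integer, primary_key=True, nullable=False)
        user_id = sa.Column(sa.String(255), nullable=False)  # explicit, even where "obvious"
        name = sa.Column(sa.String(255), nullable=False)
        public_key = sa.Column(sa.Text, nullable=True)        # explicitly nullable
        deleted = sa.Column(sa.Integer, nullable=False, default=0)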
Project: ceilometer Series: havana Blueprint: db-ttl Design: Drafting Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/db-ttl Spec URL: None Implement a TTL mechanism for data entering Ceilometer collector database. Project: heat Series: havana Blueprint: default-nova-flavors Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/default-nova-flavors Spec URL: None Running nova_create_flavors.sh seems like a moving-the-mountain approach to making our example templates work on a default openstack installation. nova_create_flavors.sh should be deleted and templates modified to expect nova defaults, defaulting to a flavor which should work on typical all-in-one openstack installations In any case, the default AWS flavors might bear no relation to public openstack clouds, for example here are HPs standard.small - 2 vCPU / 2 GB RAM / 60 GB HD standard.medium - 2 vCPU / 4 GB RAM / 120 GB HD standard.large - 4 vCPU / 8 GB RAM / 240 GB HD standard.xlarge - 4 vCPU / 16 GB RAM / 480 GB HD standard.2xlarge - 8 vCPU / 32 GB RAM / 960 GB HD Rackspace private cloud looks like it might provide nova defaults. Project: horizon Series: havana Blueprint: define-flavor-for-project Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/horizon/+spec/define-flavor-for-project Spec URL: None The project-specific flavors API described in https://blueprints.launchpad.net/nova/+spec/project-specific-flavors has been implemented in Nova as of late-Folsom. We can now take advantage of that API capability to provide flavor management in the Project dashboard, and admin-level cross-project management in the System dashboard. Project: oslo Series: havana Blueprint: delayed-message-translation Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/delayed-message-translation Spec URL: https://wiki.openstack.org/wiki/DelayedMessageTranslation Current OpenStack does immediate translation of messages to the local server locale. This proves problematic for two use cases: 1) As an OpenStack technical support provider, I need to get log messages in my locale so that I can debug and troubleshoot problems. 2) As an OpenStack API user, I want responses in my locale so that I can interpret the responses. To solve these issues, we propose enabling delayed translation by creating a new Oslo object that saves away the original text and injected information to be translated at output time. When these messages reach an output boundary, they can be translated into the server locale to mirror today's behavior or to a locale determined by the output mechanism (e.g. log handler or HTTP response writer). 
NOTE: API responses in the user locale are handled separately under https://blueprints.launchpad.net/nova/+spec/user-locale-api Project: keystone Series: havana Blueprint: delegated-auth-via-oauth Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/delegated-auth-via-oauth Spec URL: https://review.openstack.org/#/c/36613/ A method for providing a third party with the ability to request tokens with a limited scope. Spec available at: https://review.openstack.org/#/c/36613/ Use case at: https://gist.github.com/termie/5225817 Project: neutron Series: havana Blueprint: dhcp-flexi-model Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/dhcp-flexi-model Spec URL: https://wiki.openstack.org/wiki/Neutron/dhcp-flexi-model The current dhcp agent model is great for dhcp services provided by in-process, same-node drivers, like dnsmasq. It would be good to generalize its implementation slightly to allow a proxy-based model where dhcp services are provided by an external party. Project: nova Series: havana Blueprint: different-availability-zone-filter Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/different-availability-zone-filter Spec URL: None Today the scheduler has a lot of filters. Using these filters we can schedule an instance on a specified availability_zone, or schedule the instance on a different host from a set of instances. But we cannot schedule the instance on a different availability_zone from a set of instances. Because a user may want to spread instances across many availability_zones for disaster tolerance, this filter can be helpful. Project: glance Series: havana Blueprint: direct-url-meta-data Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/direct-url-meta-data Spec URL: None When a direct URL is returned to a client it will be helpful to the client to have additional information about that URL. For example, with a file:// URL the client may need to know the NFS host that is exporting it, the mount point, and FS type used. This could be directly spelled out or determined by way of some cryptic token. Such information is hard/awkward/kludgy to encapsulate in a URL. This blueprint requests that each storage system have a means to return direct URL specific meta-data to the client when direct_url is enabled. Project: nova Series: havana Blueprint: divide-download-image-logic Design: New Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/divide-download-image-logic Spec URL: None The "Directly copy a file URL from glance" feature https://review.openstack.org/#/c/19408/ introduced the possibility of loading an image not only over http/https but also by copying it directly from the filesystem. It would be better to divide this logic into separate loaders which are dynamically loaded according to the scheme. Project: horizon Series: havana Blueprint: domain-context Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/domain-context Spec URL: None Add the ability in the admin dashboard to set a domain context -- a working domain. This context would then limit the information shown on other related keystone panels to items in the domain context. Unsetting the domain context will provide the existing behavior of exhaustively listing results.
Will apply to Users, Projects and Groups. Project: ceilometer Series: havana Blueprint: double-entry-accounting Design: Drafting Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/ceilometer/+spec/double-entry-accounting Spec URL: None In order to offer SEC-compliant billing we need to validate collected metrics from two sources. There needs to be an audit trail for important metrics such as instance lifecycle, bandwidth and storage usage. How might this be accomplished with CM? Project: horizon Series: havana Blueprint: dry-templates Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/dry-templates Spec URL: None There are a vast number of templates that could be completely eliminated with just a few tweaks (mostly around page names/titles, which could come from the underlying object). Project: nova Series: havana Blueprint: ec2-error-codes Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/ec2-error-codes Spec URL: http://wiki.openstack.org/blueprint-ec2-error-codes The EC2 API has well-defined Error Codes, which are usable by programs to take corrective action. Without correct EC2 Error Codes it is very difficult to use the EC2 API. 'UnknownError' and 'EC2APIError' are really *not* useful. Project: cinder Series: havana Blueprint: edit-default-quota Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/edit-default-quota Spec URL: None This will allow admins to edit the default quotas. See also: https://blueprints.launchpad.net/horizon/+spec/edit-default-quota Project: nova Series: havana Blueprint: edit-default-quota Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/edit-default-quota Spec URL: None Horizon has a default quota UI which is currently RO. This UI should allow admins to edit the quota defaults. Project: horizon Series: havana Blueprint: edit-default-quota Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/horizon/+spec/edit-default-quota Spec URL: None Corresponding to https://blueprints.launchpad.net/nova/+spec/edit-default-quota. This will allow admins to edit the quota defaults. Project: nova Series: havana Blueprint: eliminate-clear-passwords-from-cells-table Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/eliminate-clear-passwords-from-cells-table Spec URL: https://wiki.openstack.org/wiki/Nova/eliminate-clear-passwords-from-cells-table At present, passwords for rabbit queues are stored in clear text in the database. This blueprint is for the elimination of these clear-text passwords, either by encrypting them in the database or deprecating the use of the database for storing cells information. Project: nova Series: havana Blueprint: encrypt-cinder-volumes Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/encrypt-cinder-volumes Spec URL: http://wiki.openstack.org/VolumeEncryption The Cinder volumes for a virtual machine (VM) are currently not being encrypted. This makes the platforms hosting volumes for VMs high value targets, because an attacker can break into a volume-hosting platform and read the data for many different VMs. Another issue is that the physical storage medium could be stolen, remounted, and accessed from a different machine.
This blueprint addresses both of these vulnerabilities. The aim of this blueprint is to provide encryption of the VM's data before it is written to disk. The idea is similar to how self-encrypting drives work. Our goal is to present the VM with a normal block storage device, but we will encrypt the bytes in the virtualization host before writing them to the disk. For more information, see the referenced specification. Project: nova Series: havana Blueprint: encrypt-ephemeral-storage Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/encrypt-ephemeral-storage Spec URL: None This blueprint is an incremental feature to [https://blueprints.launchpad.net/nova/+spec/encrypt-cinder-volumes]. When virtual machines (VMs) are launched, ephemeral storage is created as a single large volume. It is created locally on the same platform as the machine hosting the VM and holds the guest operating system files; additional storage space can also be added for other purposes. These volumes are currently not being encrypted, and this makes the platforms hosting VMs high value targets because an attacker can break into the platform and read the data for many different VMs. This feature makes it harder for an attacker to read VM disks, since it encrypts each one with a unique key that is not stored locally. This blueprint also addresses the case where the physical storage medium is stolen, remounted, and accessed from a different machine. The aim of this blueprint is to provide encryption of the VM's data before it is written to disk. The idea is similar to how self-encrypting drives work. Our goal is to present the VM with a normal block storage device, but we will encrypt the bytes in the virtualization host before writing them to the disk. For more information, see the referenced specification. Project: keystone Series: havana Blueprint: endpoint-filtering Design: Drafting Lifecycle: Started Impl: Slow progress Link: https://blueprints.launchpad.net/keystone/+spec/endpoint-filtering Spec URL: None Currently Keystone returns all endpoints in the service catalog, regardless of whether users have access to them or not. This is neither necessary nor efficient. We need to establish a project-endpoints relationship so we can effectively assign endpoints to a given project, and be able to filter endpoints returned in the service catalog based on the token scope. Project: nova Series: havana Blueprint: entrypoints-plugins Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/entrypoints-plugins Spec URL: https://etherpad.openstack.org/grizzly-common-entrypoints-plugins Convert the binaries (everything in the bin/ directory) to use Python entrypoints. Project: heat Series: havana Blueprint: environments Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/environments Spec URL: https://wiki.openstack.org/wiki/Heat/Environments Project: cinder Series: havana Blueprint: eql-volume-driver Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/cinder/+spec/eql-volume-driver Spec URL: None We would like to introduce Dell EqualLogic support for the nova-volume service.
This blueprint proposes the driver based on the iSCSI driver already integrated into the OpenStack nova-volume service, with the basic capabilities to create, export and delete volumes and snapshots. Project: heat Series: havana Blueprint: event-persistence Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/event-persistence Spec URL: https://wiki.openstack.org/wiki/Heat/event-persistence Currently when we delete a stack, we remove all information associated with it from the database. This is a very bad idea (and not what AWS do) because it means that there is no record of the stack having ever existed. In particular, it is bad with the new rollback feature, since if a stack fails it will be rolled back by default and all records of *how* it failed destroyed. When a stack is deleted, we should mark it as deleted in the database (with a timestamp). Deleted stacks should not show up in the stack list or be accessible by name. New stacks whose names conflict with deleted stacks should be allowed. However, access to the deleted stack and its events using the ARN or UUID or (canonical) URL should be maintained. Project: nova Series: havana Blueprint: evzookeeper-kazoo Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/evzookeeper-kazoo Spec URL: None It is likely the evzookeeper driver can be supplemented with kazoo (or migrated to kazoo) since it appears to be general consensus that kazoo (right now) is more active and provides more useful functionality that will aid its usage in openstack (and nova in general). Project: ceilometer Series: havana Blueprint: example-consumer-program Design: Approved Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/ceilometer/+spec/example-consumer-program Spec URL: None DreamHost is willing to release most of the DUDE (our ceilometer client program) as open source. We will need to modify it to make it more generic (probably by making it use a plugin for communicating with the billing system), but a lot of the logic should be reusable at least as an example program for someone else making their own bridge between ceilometer and a billing system. Project: heat Series: havana Blueprint: exception-formatting Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/heat/+spec/exception-formatting Spec URL: None Currently any API request that results in an error response results in an error message which is one of: - a stack trace - a generic http error response which hides the original message Error responses should exhibit the following: - an appropriate HTTP response code set - a message which is clear and helpful to the user - where appropriate, some machine-parsable reference to where in the template the source of the error is (parameter name for validation error, resource name for a resource error, etc) python-heatclient and horizon should display any message and make use of any reference to template elements. Project: ceilometer Series: havana Blueprint: expose-event-data Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/ceilometer/+spec/expose-event-data Spec URL: None Once we are able to collect raw events we will need to expose this data to users. 
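The heat exception-formatting entry above asks for error responses that carry an appropriate HTTP code, a clear message and a machine-parsable pointer into the template. A minimal sketch of such a structured payload could look like this; the exception class and field names are illustrative, not heat's actual API.

    import json

    class StackValidationError(Exception):
        def __init__(self, message, parameter=None, resource=None):
            super(StackValidationError, self).__init__(message)
            self.parameter = parameter   # offending template parameter, if any
            self.resource = resource     # offending template resource, if any

    def error_body(exc, http_status=400):
        # Message for humans, template references for machines (heatclient/horizon).
        body = {'error': {'message': str(exc),
                          'code': http_status,
                          'parameter': getattr(exc, 'parameter', None),
                          'resource': getattr(exc, 'resource', None)}}
        return http_status, json.dumps(body)

    status, payload = error_body(StackValidationError('KeyName is required',
                                                      parameter='KeyName'))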
Project: ceilometer Series: havana Blueprint: extended-client-operations Design: Approved Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/ceilometer/+spec/extended-client-operations Spec URL: None Including access to Events, aggregated data types, summary reports, etc. (similar to what is available in StackTach's "stacky" tool) Project: keystone Series: havana Blueprint: extract-credentials-id Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/extract-credentials-id Spec URL: None Move the credentials API into its own backend. LDAP was not going to be able to support credentials. Even with a custom schema, many people are using LDAP in read only mode, which means that they would not be able to use the credentials API at all. By splitting it out, we have a workable solution for both SQL and LDAP Identity backends. Project: keystone Series: havana Blueprint: extract-eventlet Design: Discussion Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/extract-eventlet Spec URL: https://etherpad.openstack.org/keystone-extract-eventlet Eventlet is not necessarily a good fit for keystone. The event driven model works best where the application is spending a lot of time waiting for requests to finish which works well for nova and others that may have long periods of waiting on machines to come up. Keystone is a more traditional web application that could benefit from the optimizations of apache/nginx and as we look to add more encryption that is not well supported by eventlet we should provide at least the option of running on other WSGI servers. Project: nova Series: havana Blueprint: fc-support-for-vcenter-driver Design: Discussion Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/fc-support-for-vcenter-driver Spec URL: None Title: vCenter compute driver enhancements to support HP Cinder 3Par driver Blueprint/Facilitated Discussion Overview Volumes created on FC arrays are supported only for KVM today. This blueprint proposal is to 1. Enhance the vCenter driver to support FC volume attach to instances created on ESX 2. Support HP Cinder 3Par driver 3. Provide HBA information of all ESX hosts in the vCenter cluster(s) so that cinder driver can present the LUN to the right ESX host(s) 4. Provide iSCSI initiator information for all hosts in the cluster Intending to implement for vCenter compute driver only Note: The blueprint for FC support for KVM(libvirt) has already been approved and code has been merged into Grizzly 3. https://wiki.openstack.org/wiki/Cinder/FibreChannelSupport Project: nova Series: havana Blueprint: find-host-and-evacuate-instance Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/find-host-and-evacuate-instance Spec URL: None In the event of a unrecoverable hardware failure, support needs to relocate an instance to another compute so it can be rebuilt. The API call should locate a suitable host within the same cell (using nova- scheduler), and perform an update of the instance's location to the new host (similar to a rebuild). This call should only be available to users with the Admin role. 
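The find-host-and-evacuate-instance entry above describes a two-step flow: ask nova-scheduler for a suitable host, then rebuild the instance there. A rough, hypothetical sketch of that service-side flow follows; the RPC helper names are stand-ins, not nova's real interfaces.

    # Hypothetical flow only: scheduler_rpcapi/compute_rpcapi are illustrative stand-ins.
    def find_host_and_evacuate(context, instance, scheduler_rpcapi, compute_rpcapi):
        if not context.is_admin:
            raise Exception('admin role required')
        # 1. Let nova-scheduler pick a suitable host in the same cell.
        target_host = scheduler_rpcapi.select_host(context, instance)
        # 2. Update the instance's location and rebuild it on the new host,
        #    much like an ordinary rebuild with a host change.
        compute_rpcapi.rebuild_instance(context, instance, host=target_host,
                                        recreate=True, on_shared_storage=True)
        return target_host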
Project: nova Series: havana Blueprint: fix-libvirt-console-logging Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/fix-libvirt-console-logging Spec URL: None We need to resolve the problems with unbounded growth in console logs for libvirt. This involves moving to a unix domain socket. Project: cinder Series: havana Blueprint: flatten-volume-from-snapshot Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/flatten-volume-from-snapshot Spec URL: None Currently, creating a volume from a snapshot using some of the volume drivers will create a hidden dependency which will prevent the snapshot from being removed. This change would permit a flag which, when set, would cause the new volume to be disassociated from the snapshot after creation, allowing the snapshot to be deleted. Project: nova Series: havana Blueprint: flavor-instance-type-dedup Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/flavor-instance-type-dedup Spec URL: None Nova currently uses the terms flavor and instance type interchangeably in both user facing tools and in code. Lets pick just one. Project: heat Series: havana Blueprint: generate-resource-docs Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/generate-resource-docs Spec URL: None It should be possible to generate documentation for resource types by introspecting the following: - docstrings on resource type classes - documentation attributes on the properties schema This will keep the documentation close to the code, which at least in theory will help it to stay up-to-date. This documentation would be consumed by the following: - Automatic resource type documentation manual generation - Help text served to clients (cli and gui) via a REST API call Project: nova Series: havana Blueprint: get-cell-free-ram Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/get-cell-free-ram Spec URL: None The capacity information is stored in the cell's StateManager in memory. There's no way of getting the free ram of a cell. For Admins to plan for capacity, they need an API to fetch the capacity for cells Project: keystone Series: havana Blueprint: get-role-assignments Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/get-role-assignments Spec URL: https://etherpad.openstack.org/keystone-role-inheritance The current v3 api specification includes an api for getting all roles for a user: GET /users/{user_id}/roles with the response actually being a list of role assignments (rather than just roles). Although this is part of the v3 spec, it is not actually implemented in Grizzly (it always returns an error). We should re-define (and implement) this api so that it is clear it is indeed for getting the effective role assignments that exist for a given user, whether these roles are directly assigned, by virtue of group membership or inherited from the parent domain. 
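For the keystone get-role-assignments entry above, a client-side sketch of the call as described (GET /users/{user_id}/roles) might look like this, using plain HTTP; the endpoint, token and user id are placeholders.

    import requests

    def effective_role_assignments(keystone_url, token, user_id):
        resp = requests.get('%s/v3/users/%s/roles' % (keystone_url, user_id),
                            headers={'X-Auth-Token': token})
        resp.raise_for_status()
        # Expected to list effective assignments: direct, group-derived and
        # domain-inherited roles, per the redefined API described above.
        return resp.json()

    print(effective_role_assignments('http://keystone:35357', 'ADMIN_TOKEN', 'USER_ID'))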
Project: ceilometer Series: havana Blueprint: gettext-i18n-issue Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/gettext-i18n-issue Spec URL: None The recent gate job failure http://logs.openstack.org/25815/2/gate/gate-ceilometer-python27/1914/console.html.gz which was caused by the nova patch https://github.com/openstack/nova/commit/9447e59b704701aad765f8ffa109843d9ffc88ae reminds us that we have the same gettext issues that need to be addressed. Project: nova Series: havana Blueprint: ghosts Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/ghosts Spec URL: https://wiki.openstack.org/wiki/Nova/Ghosts "Ghosts" are used to keep track of resources held by the hypervisor for a short period of time after destroying an instance. They work around an issue where hypervisors report some instance resource as free when it is not actually yet free due to some time-consuming process such as memory scrubbing. Project: glance Series: havana Blueprint: glance-basic-quotas Design: Discussion Lifecycle: Started Impl: Slow progress Link: https://blueprints.launchpad.net/glance/+spec/glance-basic-quotas Spec URL: None It would be very helpful if we were able to limit the usage of some basic image-related resources, like - the number of images stored - the amount of storage occupied by a set of images - ... Apparently, implementation of these limits in Glance would require the introduction of some sort of quota capability similar to those in Nova and Cinder. Ideally Nova, Cinder and Glance could share the same quota handling code. There is another blueprint which actually covers this potential approach to the problem (see the blueprint's dependencies). Project: glance Series: havana Blueprint: glance-cinder-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver Spec URL: None We have a driver for swift as an object storage back-end in Glance, so a block storage back-end makes sense as well, and that is Cinder. **Note**: Currently the Cinder store is a partial implementation. Once Cinder exposes the 'brick' library and the 'readonly-volume-attaching' and 'volume-multiple-attaching' enhancements are ready, the store will finally support the 'Upload' and 'Download' interfaces. Draft: Interface with cinderclient and create a read-only volume specifically for the image store. 1. Create an image-store volume (a special type of volume that's R/O). 2. Store the glance image, and update glance to have a record of the image and its location (specifically provider_location). 3. Copy can use the clone functionality, which is especially great for back-ends with advanced cloning capability. Project: glance Series: havana Blueprint: glance-scrubber-refactoring Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/glance/+spec/glance-scrubber-refactoring Spec URL: None Short term: change the current Scrubber to allow it to support multiple locations for a 'pending_delete' image. Long term: 1. Add a status field to image locations (this part is covered in a dedicated BP: https://blueprints.launchpad.net/glance/+spec/image-location-status) 2. Change the Scrubber to use the DB to clean up image data from the backend store for locations in 'pending_delete' status.
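As a rough picture of the short-term goal in the glance-scrubber-refactoring entry above (a scrub pass that handles several locations per 'pending_delete' image), consider the sketch below; db_api and store_api are hypothetical stand-ins for glance's registry/DB layer and store drivers.

    # Illustrative scrub pass only; helper names are hypothetical.
    def scrub_pending_deletes(db_api, store_api, context):
        for image in db_api.images_get_pending_delete(context):
            for location in image['locations']:      # an image may have several locations
                if location.get('status') != 'pending_delete':
                    continue
                store_api.delete_from_backend(context, location['url'])
                db_api.image_location_set_status(context, image['id'],
                                                 location['url'], 'deleted')
            db_api.image_set_status(context, image['id'], 'deleted')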
Project: nova Series: havana Blueprint: glusterfs-native-support Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/glusterfs-native-support Spec URL: None Qemu > 1.3 is now able to directly address an image stored in a GlusterFS backend thanks to GlusterFS native support: http://www.gluster.org/2012/11/integration-with-kvmqemu/ It would be great if Nova could take advantage of it, as benchmarks are showing huge I/O improvements http://raobharata.wordpress.com/2012/10/29/qemu-glusterfs-native-integration/ This could be either autodetected by Nova if instances_path is stored in a Gluster instance, or enabled through a nova.conf setting. Project: cinder Series: havana Blueprint: gpfs-volume-attributes Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/cinder/+spec/gpfs-volume-attributes Spec URL: None GPFS provides a variety of knobs which together define the performance and reliability characteristics of a volume. This BP in particular proposes to specify the following attributes when creating a new volume: * The storage pool on which the volume is placed * Number of block-level replicas * Whether the physical blocks should be allocated locally on the node issuing the IO or striped across the cluster * Whether writes to the volume should use direct IO * Number of file system blocks to be laid out sequentially on disk to behave like a single large block * Local storage attached to specific node(s) where the replicas of the volume should be allocated Project: cinder Series: havana Blueprint: gpfs-volume-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/gpfs-volume-driver Spec URL: None IBM General Parallel File System (GPFS) is a mature cluster file system used by some of the largest enterprises and supercomputers in the world. It offers a number of specific features and optimizations for hosting master images, instances, and volumes. In particular, the block-level format-agnostic copy-on-write mechanism enables quick volume provisioning through snapshots. The File Placement Optimization (FPO) feature allows specifying the set of nodes and their local disks where the physical blocks of a particular volume file and its replicas should be allocated. Transparent block-level replication, controllable per file, provides resilient volumes. This driver implements an initial set of features using the primitives of the GPFS file system. Project: nova Series: havana Blueprint: graceful-shutdown Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/nova/+spec/graceful-shutdown Spec URL: None Enable Nova services to be terminated gracefully. Disable processing of new requests, but allow requests already in progress to complete before terminating the process. During a software upgrade, this would allow service instances to be swapped out while still completing existing requests. Steps: 1) Disable message "listening". 2) Disable periodic task timer. 3) Wait for existing requests and periodic tasks to complete. 4) Kill process. Project: oslo Series: havana Blueprint: graceful-shutdown Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/graceful-shutdown Spec URL: None Add support for graceful shutdown of services. Give services a chance to complete existing requests before terminating.
Required for Nova BP: https://blueprints.launchpad.net/nova/+spec/graceful-shutdown Project: glance Series: havana Blueprint: gridfs-store Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/gridfs-store Spec URL: None The idea is to create a Store backend for GridFS[0]. In order to achieve this, the store should: 1) Handle MongoDB URIs 2) Handle authentication 3) Handle Add / Get / Delete of files 4) Be tested on ReplicaSet environments [0] http://docs.mongodb.org/manual/applications/gridfs/ Project: nova Series: havana Blueprint: group-affinity-filter Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/group-affinity-filter Spec URL: None OpenStack now has a GroupAntiAffinityFilter to schedule an instance on a different host from a set of group hosts; it would be good to also add a new GroupAffinityFilter to support scheduling the instance onto a host from a set of group hosts (see the sketch below). Project: horizon Series: havana Blueprint: group-domain-role-assignment Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/horizon/+spec/group-domain-role-assignment Spec URL: None Provide an interface to allow the user to assign domain roles to groups. This is the continuation of the project role assignment to Group BP: https://blueprints.launchpad.net/horizon/+spec/group-role-assignment Project: horizon Series: havana Blueprint: group-role-assignment Design: Drafting Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/horizon/+spec/group-role-assignment Spec URL: None Provide an interface to allow the user to assign project roles to groups. Project: horizon Series: havana Blueprint: hacking Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/hacking Spec URL: None It appears that rebasing on a changed master branch is not a trivial task now. When nearly every Horizon module includes comma-separated imports, patch sets that add an import from the same module to an existing import statement can't be merged automatically. You have to rebase manually, just because of a few imports. It would be nice to write all imports uniformly, as is done in other projects. For example, we could write one import per line or just replace all related imports with a module import. That would solve the merging problems and the code would look much better. It also would be nice to add HACKING.rst to Horizon. We could describe coding standards there and we would (hopefully) never have these kinds of rebase and merge problems. Once we start using HACKING.rst, the following import standards are to be enabled: F841 local variable '' is assigned to but never used H201 no 'except:' at least use 'except Exception:' H301 one import per line H303 No wildcard (*) import H304 No relative imports H306 imports not in alphabetical order Project: ceilometer Series: havana Blueprint: hbase-metadata-query Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/hbase-metadata-query Spec URL: None A metadata query implementation needs to be added to HBase. This might imply an HBase schema change as well (migration required?).
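The nova group-affinity-filter entry above is the mirror image of the existing GroupAntiAffinityFilter. A minimal sketch of such a filter is shown below; it assumes the scheduler places the group's current hosts in filter_properties['group_hosts'], as the Grizzly anti-affinity filter does.

    from nova.scheduler import filters

    class GroupAffinityFilter(filters.BaseHostFilter):
        """Pass only hosts that already run instances of the requested group."""

        def host_passes(self, host_state, filter_properties):
            group_hosts = filter_properties.get('group_hosts')
            if not group_hosts:
                # No group hint supplied: any host is acceptable.
                return True
            return host_state.host in group_hosts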
Project: cinder Series: havana Blueprint: hds-hus-iscsi-cinder-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/hds-hus-iscsi-cinder-driver Spec URL: None Add a cinder volume iSCSI driver to support HUS (DF850) arrays from Hitachi Data Systems Inc. This will support: -- volume creation/deletion. -- snapshot creation/deletion -- volume attach/detach -- statistics -- create volume from snapshot Project: heat Series: havana Blueprint: heat-multicloud Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/heat-multicloud Spec URL: https://wiki.openstack.org/wiki/Heat/Blueprints/heat-multicloud Building on https://blueprints.launchpad.net/heat/+spec/heat- standalone, this allows heat to run without any configured keystone endpoint. All actions will be performed using whatever keystone endpoint and credentials are passed to the request. Tasks include: - changing authpassword middleware to take the keystone endpoint from the request header rather than config - providing a way for the cfn and cloudwatch pipelines to find out what keystone endpoint to use for incoming signal and watch data Project: heat Series: havana Blueprint: heat-standalone Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/heat-standalone Spec URL: None It should be possible to install a heat server which is configured to orchestrate on an external OpenStack compatible cloud. The existing authpassword middleware provides most of what is required, however further effort is required in: - python-heatclient on what credentials to send in standalone mode - heat-api-cfn and heat-api-cloudwatch authentication so that servers on the external cloud can signal waitconditions and watch data This is a pre-requisite for blueprint heat-multicloud Project: heat Series: havana Blueprint: heat-trusts Design: Approved Lifecycle: Started Impl: Slow progress Link: https://blueprints.launchpad.net/heat/+spec/heat-trusts Spec URL: None Now keystone trusts have been merged, we need to figure out how to use trust tokens in order to avoid storing user credentials in our DB Project: horizon Series: havana Blueprint: heat-ui Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/heat-ui Spec URL: None It should be possible to perform the following Heat operations through Horizon: - Select a file or url of a template to launch as a stack - Set stack launch parameters and launch a stack - List stacks for the currently selected Project - List resources and events for a stack - Update an existing stack with modified template/parameters - Delete a running stack Project: horizon Series: havana Blueprint: heat-ui-resource-topology Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/heat-ui-resource-topology Spec URL: https://wiki.openstack.org/wiki/Horizon/Heat-UI Building a visual Topology Graph that describes the various resources, parameters, and relationships that exist based on a submitted Template. During instance creation the Topology Graph should reflect the current state of the resources and eventually be able to provide a quick visual overview of the health of the deployed stack. 
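For the heat-ui-resource-topology entry above, the topology data can plausibly be derived from the stack's resource listing. The sketch below assumes python-heatclient's v1 client can be built from an endpoint and token and that the resource listing exposes 'required_by' links, as the Heat API does; endpoint, token and stack name are placeholders.

    from heatclient.v1 import client as heat_client

    heat = heat_client.Client(endpoint='http://heat:8004/v1/TENANT_ID',
                              token='AUTH_TOKEN')
    nodes, edges = [], []
    for res in heat.resources.list('my_stack'):
        nodes.append({'name': res.resource_name,
                      'type': res.resource_type,
                      'status': res.resource_status})
        for parent in getattr(res, 'required_by', []):
            edges.append((res.resource_name, parent))
    # nodes/edges can now be serialized to JSON and fed to a d3 force layout.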
Project: horizon Series: havana Blueprint: horizon-cisco-n1k Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/horizon/+spec/horizon-cisco-n1k Spec URL: None Add a new dashboard in the admin mode of Horizon to manage the Cisco Nexus 1KV when the cisco plugin is being used. Essentially, add the ability to add/delete/update profiles and associate tenants with particular profiles via the dashboard. Project: nova Series: havana Blueprint: host-manager-overhaul Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/host-manager-overhaul Spec URL: None It appears that the HostManager class in nova contains methods which are responsible for general host selection as well as methods which are specific to obtaining and storing host-specific and service-specific data. We propose to separate the generic parts responsible for host handling and leave them in the HostManager class, while moving service-specific methods to a ServiceProvider class. This ServiceProvider class can be used for acquiring, storing and releasing on demand host-specific data (i.e. computing capabilities, storage space, etc.). In the future this can be used as a basis for building a generic host manager and scheduler. Also, HostManager currently seems to be in need of refactoring, which can be done in the framework of this blueprint. Project: neutron Series: havana Blueprint: hostid-vif-override Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/hostid-vif-override Spec URL: None There are some networks that are a hybrid mix of VIF types (e.g. OVS and IVS). When using the generic VIF driver on nova compute nodes, there is no way to specify different VIF types for different compute nodes. This allows users to configure a list of VIF overrides for nova host IDs. This will utilize the proposed host_id parameter being sent from Nova (https://blueprints.launchpad.net/nova/+spec/vm-host-quantum). Project: heat Series: havana Blueprint: hot-hello-world Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/hot-hello-world Spec URL: None Implement end-to-end processing of an initial hello world style example using the HOT DSL being developed. That work involves the following items: 1) Finalize an initial hello world HOT template 2) Implement the engine code that will consume the HOT template and deploy it successfully 3) Document the HOT DSL constructs used in the hello world example. This BP is related to [1] and is actually a break-down item of [1] to make it more actionable. [1] https://blueprints.launchpad.net/heat/+spec/open-api-dsl Project: heat Series: havana Blueprint: hot-parameters Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/heat/+spec/hot-parameters Spec URL: None This blueprint is to address full parameter validation based on the current HOT hello world implementation. HOT will use an enhanced syntax for defining parameter constraints - partly taken from the proposal discussed at [1] - which will be implemented by this blueprint. The goal is to have feature equivalence with cfn parameter validation, plus additional enhanced features (e.g. multiple regular expressions, a specific validation message per constraint, etc.). As part of this blueprint we also plan to provide an enhanced HOT sample template, as well as documentation of the parameter validation syntax.
[1] https://wiki.openstack.org/wiki/Heat/DSL Project: heat Series: havana Blueprint: hot-specification Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/hot-specification Spec URL: None This blueprint is to provide documentation (i.e. a formal specification including examples) for the new HOT DSL currently being implemented. It is suggested that this be done in the context of other documentation for the Heat project, which is maintained in the Heat code repository so there is some review and governance process on the specification (as opposed to using a wiki that anyone can edit). For the first iteration, it is planned to provide documentation of those features that are already implemented (HOT hello world), plus features being implemented in parallel, so we have clean documentation at the end of the current development cycle. Project: cinder Series: havana Blueprint: huawei-fibre-channel-volume-driver Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/cinder/+spec/huawei-fibre-channel-volume-driver Spec URL: None This blueprint is to add Fibre Channel drivers for Huawei storage systems. The changes are as follows: 1. Add Fibre Channel drivers for Huawei OceanStor T series and Dorado arrays. 2. Since a lot of code can be shared between the iSCSI driver and the FC driver, we are going to refactor the Huawei iSCSI driver code and rewrite the common classes. Project: cinder Series: havana Blueprint: huawei-hvs-volume-driver Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/cinder/+spec/huawei-hvs-volume-driver Spec URL: None Huawei's OceanStor HVS-series enterprise storage system is an optimum storage platform for next-generation data centers that feature virtualization, hybrid cloud, simplified IT, and low carbon footprints. This blueprint is to add an iSCSI driver and a Fibre Channel driver for the Huawei HVS storage system using REST. Project: ceilometer Series: havana Blueprint: hyper-v-agent Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/ceilometer/+spec/hyper-v-agent Spec URL: None Hyper-V usage metrics can be obtained by using the rich WMI V2 API available starting with Windows Server / Hyper-V Server 2012: http://msdn.microsoft.com/en-us/library/hh850073(v=vs.85).aspx Project: nova Series: havana Blueprint: hyper-v-dynamic-memory Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/hyper-v-dynamic-memory Spec URL: None Hyper-V supports memory ballooning to improve VM density with a feature called dynamic memory. Dynamic memory needs to be enabled at the VM level in the Nova Hyper-V driver. Smart Paging must be configured as well in order to improve reliability in case of memory oversubscription. The configuration can be done via standard OpenStack configuration options, including: -Enabling / disabling dynamic memory -Percentage of memory to be initially assigned -Smart Paging file location -Hyper-V memory buffer percentage (see the option sketch below). For details: http://technet.microsoft.com/en-us/library/hh831766.aspx Project: nova Series: havana Blueprint: hyper-v-ephemeral-storage Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/hyper-v-ephemeral-storage Spec URL: None The scope of this blueprint is to add ephemeral storage support in the Nova Hyper-V driver, including resize.
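The hyper-v-dynamic-memory entry above lists the settings that would be exposed through standard OpenStack configuration options. An illustrative oslo.config sketch follows; the option names and defaults are hypothetical, not the driver's final ones.

    from oslo.config import cfg

    # Hypothetical option names; the real driver may name and group these differently.
    dynamic_memory_opts = [
        cfg.BoolOpt('dynamic_memory_enabled', default=False,
                    help='Enable Hyper-V dynamic memory for new instances'),
        cfg.FloatOpt('initial_memory_ratio', default=1.0,
                     help='Fraction of the flavor memory initially assigned to the VM'),
        cfg.IntOpt('memory_buffer_percentage', default=20,
                   help='Hyper-V dynamic memory buffer percentage'),
        cfg.StrOpt('smart_paging_file_path', default=None,
                   help='Location of the Smart Paging file'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(dynamic_memory_opts, group='hyperv')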
Project: nova Series: havana Blueprint: hyper-v-metrics Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/nova/+spec/hyper-v-metrics Spec URL: None The Hyper-V compute metrics collected by Ceilometer need to be enabled when the instance is created. The metrics involved include vCPUs and local disk I/O. Project: neutron Series: havana Blueprint: hyper-v-metrics Design: New Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/neutron/+spec/hyper-v-metrics Spec URL: None The Hyper-V compute metrics collected by Ceilometer need to be enabled when the instance is created. The metrics involved include network I/O on the virtual switch ports. Project: nova Series: havana Blueprint: hyper-v-rdp-console Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/hyper-v-rdp-console Spec URL: None Hyper-V, unlike the majority of the hypervisors employed on Nova compute nodes, uses RDP instead of VNC as a desktop sharing protocol to provide instance console access, which means that novnc / xvpvnc are not viable options for providing console connections to Hyper-V hosted instances. Goals: 1) Add Hyper-V RDP console access support to OpenStack 2) From a user perspective, console access should be handled as consistently as possible across hypervisors. Project: nova Series: havana Blueprint: hyper-v-remotefx Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/hyper-v-remotefx Spec URL: None OpenStack VDI support can be largely improved by enabling the Hyper-V RemoteFX features in the Nova Hyper-V driver. RDVH must be enabled on the host* and one or more physical GPUs with RemoteFX support need to be available. *On Hyper-V server enable RDVH with: Add-WindowsFeature -Name RDS-Virtualization Project: nova Series: havana Blueprint: hyper-v-vhdx Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/hyper-v-vhdx Spec URL: None Hyper-V uses the VHD and VHDX formats for virtual disks. VHDX was introduced with Windows Server 2012, providing better performance and the ability to resize differential disks (aka CoW in the driver's implementation). VHDX disks are not supported on Windows Server 2008 and 2008 R2. The Nova Hyper-V driver currently supports the VHD format only. The aim of this blueprint is to provide support for VHDX as well. Note: VHDX support in Hyper-V requires the WMI root\virtualization\v2 namespace. Project: nova Series: havana Blueprint: hyper-v-wmi-v2 Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/hyper-v-wmi-v2 Spec URL: None The Nova Hyper-V driver uses the WMI API to execute most OS actions. From Windows Server 2008 up to Windows Server 2008 R2, the namespace used for hypervisor management was "root\virtualization" (aka V1). Windows Server 2012 introduced a refactored namespace, "root\virtualization\v2", while also offering the V1 version for backwards compatibility with existing tools. Features introduced with Windows Server 2012 (e.g. live migration, VHDX, replica) are supported on the V2 namespace only. The Hyper-V Nova compute driver was refactored in the Grizzly timeframe in order to avoid any coupling between the operations "ops" classes and the "utils" classes, the latter providing the interaction with the OS.
The objective of this blueprint is to add utils classes implementing the V2 namespace without impacting the ops classes. This will be accomplished by adding an abstract base class for each utils class, providing the current V1 implementations along with V2 versions. Factory methods will instantiate the proper class at run time based on the OS version (OS <= 2008 R2: V1, OS >= 2012: V2). Note: "livemigrationutils" already provides a V2 implementation only. Project: neutron Series: havana Blueprint: hyper-v-wmi-v2 Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/hyper-v-wmi-v2 Spec URL: None The Neutron Hyper-V plugin agent uses the WMI API to execute most OS actions. Starting from Windows Server 2008 and up to Windows Server 2008 R2 the namespace used for hypervisor management was "root\virtualization" (aka V1). Windows Server 2012 introduced a refactored namespace "root\virtualization\v2", also offering the V1 version for backwards compatibility with existing tools. The objective of this blueprint is to add utils classes implementing the V2 namespace without impacting the ops classes. This will be accomplished by adding an abstract base class for each utils class, providing the current V1 implementations along with V2 versions. Factory methods will instantiate the proper class at run time based on the OS version (OS <= 2008 R2: V1, OS >= 2012: V2). Note: Hyper-V 2012 R2 (scheduled for this year, preview available for download now) drops support for the V1 namespace, which means that the currently available code (Grizzly) will NOT be supported. Project: horizon Series: havana Blueprint: hypervisor-info Design: Drafting Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/horizon/+spec/hypervisor-info Spec URL: None Some operators would find it very useful to have access to the hypervisor information in nova via the OpenStack Dashboard. This feature would present the list of hypervisors as well as statistics and any other pertinent information. Most, if not all, of this data already exists in nova. I've implemented an example for the hypervisor list and statistics as an additional panel (though this may be better suited as a tab in 'System Info'): https://github.com/zestrada/horizon-hypervisor Project: ceilometer Series: havana Blueprint: ibm-db2-support Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/ceilometer/+spec/ibm-db2-support Spec URL: None Ceilometer currently supports MongoDB, HBase and MySQL, but not DB2. DB2, as an enterprise data store, should be supported as a Ceilometer backend. Project: cinder Series: havana Blueprint: ibm-gpfs-driver Design: Superseded Lifecycle: Complete Impl: Not started Link: https://blueprints.launchpad.net/cinder/+spec/ibm-gpfs-driver Spec URL: None Adding a GPFS driver to Cinder, allowing nova-compute to consume it in a more effective way. Project: nova Series: havana Blueprint: image-compression-mode Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/image-compression-mode Spec URL: None Sets the image compression level via nova.conf. -1 for least compressed, -9 for most, etc. The option is an IntOpt, image_compression_mode.
Currently have code for gzip in XenApi: https://review.openstack.org/31898 Project: nova Series: havana Blueprint: image-multiple-location Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/image-multiple-location Spec URL: None Glance now supports adding/removing multiple location entries in the metadata of an image, so an image may have more than one location within the backend store. Nova should add a layer to transparently handle preparing the image for the instance using the best approach/location. It should allow the cloud administrator to configure the image handler pipeline, and the order of handlers the administrator prefers, for that layer. Project: nova Series: havana Blueprint: improve-block-device-handling Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/improve-block-device-handling Spec URL: https://wiki.openstack.org/wiki/BlockDeviceConfig Currently, the drivers have special code to deal with creating swap and ephemeral disks. This is complicated by the fact that some of these values can be changed in block_device_mapping. This needs to be cleaned up.  * Values passed in BDMs need to be validated  * If BDMs are not specified, they should be defaulted in the API according to the instance_type. For example, ephemeral in the instance_type will create bdm 2 and swap will create bdm 3.  * BDMs need to get the proper prefix from the driver. This probably means that the bdms will not track the prefix at all, and that the prefix will be determined by the driver and can just be pulled from instance['default_root_device'] (or something similar)  * All special handling for swap and ephemeral should be removed from the drivers. They should build the instance record based solely on the block_device_mapping that is passed in.  * A smart migration needs to be done to add missing block device mappings for all instances that already exist. Project: nova Series: havana Blueprint: improve-boot-from-volume Design: Superseded Lifecycle: Complete Impl: Good progress Link: https://blueprints.launchpad.net/nova/+spec/improve-boot-from-volume Spec URL: None The aim of this blueprint is to improve the interface for booting from volumes. Some discussion around this occurred at the Grizzly summit, and was summarized in the following etherpad: https://etherpad.openstack.org/grizzly-boot-from-volumes . This blueprint will cover the following functionality: * Make Nova able to boot without an image * Make Cinder relate image metadata when the volume is created with an --image_id (done already, see https://review.openstack.org/#/c/16172/4) * Make this data available through the Cinder API * Make nova consider this data when booting from volume using bdms * Add the --volume option to nova boot * Extend the nova API to return more details about the volumes attached and also be more explicit when an instance is booted from a volume * Add the --kernel and --ramdisk options to nova boot Note that bits of this blueprint require changes to Cinder as well. For more details see the related discussion on the dev ML http://lists.openstack.org/pipermail/openstack-dev/2012-November/002793.html Further steps outlined in the etherpad will be done in separate blueprints. Project: neutron Series: havana Blueprint: improve-db-performance Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/improve-db-performance Spec URL: None Improve db performance of quantum.
There are several issues that this blueprint aims to fix. In quantum/db/models_v2.py in the Ports class we define: fixed_ips = orm.relationship(IPAllocation, backref='ports', lazy="dynamic") When get_ports() is called: items = [self._make_port_dict(c, fields, context=context) for c in query.all()] generates a: SELECT ipallocations.port_id AS ipallocations_port_id, ipallocations.ip_address AS ipallocations_ip_address, ipallocations.subnet_id AS ipallocations_subnet_id, ipallocations.network_id AS ipallocations_network_id, ipallocations.expiration AS ipallocations_expiration FROM ipallocations WHERE '04f508b7-0d62-431c-a8e3-4148603b8a58' = ipallocations.port_id for each port, when we could have done this in one query via:         query = query.options(orm.subqueryload(models_v2.Port.fixed_ips)) if we weren't using lazy="dynamic" (though code needs to change in order to account for this). Lastly, we need a better solution for how we have been extending fields for the return of results, i.e.: for net in quantum_lswitches: self._extend_network_dict_provider(context, net) self._extend_network_port_security_dict(context, net) self._extend_network_dict_l3(context, net) self._extend_network_qos_queue(context, net) as this generates tons of selects as well. This blueprint will probably be implemented in a few patch sets. Project: neutron Series: havana Blueprint: improve-dhcp-initial-sync Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/improve-dhcp-initial-sync Spec URL: None When the dhcp agent starts up it does an initial sync of all of the networks/subnets/ports in quantum. The current implementation makes an RPC call to quantum-server for each network. This could be greatly sped up if done in one bulk request. Project: nova Series: havana Blueprint: improve-isolatedhostsfilter Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/improve-isolatedhostsfilter Spec URL: None IsolatedHostsFilter should give the opportunity to not restrict the isolated hosts to a set of isolated images. This can be done by adding a flag "restrict_isolated_hosts_to_isolated_images" in nova.conf: if it is set to True the filter acts like the current one; otherwise the isolated hosts can run any image, but isolated images must still be run on isolated hosts. By default it is set to True. Project: nova Series: havana Blueprint: improve-vmware-disk-usage Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage Spec URL: None The current VMware hypervisor driver doesn't support ephemeral disks and doesn't honor the root disk size of a flavor. This blueprint will enhance the current VMware driver to support ephemeral disks and different root disk sizes based on the flavor. Project: horizon Series: havana Blueprint: improved-boot-from-volume Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/horizon/+spec/improved-boot-from-volume Spec URL: https://docs.google.com/document/d/1Zg5PS3-s4OJtSoYGttI13OMHEtragDPYv63GUl4zpF4/edit?usp=sharing There are some flaws with the current boot from volume flow: 1. Even if you boot from volume you need to specify an image. A bug has already been created: https://bugs.launchpad.net/horizon/+bug/1163566 2. When booting from volume you need to manually copy the image to the volume 3.
https://review.openstack.org/#/c/22072/4 makes it possible to prepare volumes and use them later when booting from Volume, but doesn't address the need for automatically launching an instance which is backed by volumes based on existing images. 4. What's the use of the root gb and ephemeral gb attributes in Flavor if we're using boot from volume? My proposal: Add a new attribute: Backend, which can either be Local Disk File (the default, like before) or Volume. What happens when somebody chooses Volume as the instance storage backend? 1. We create a new Volume with the size of the root gb of the flavor; this volume is created from the image chosen 2. If there is an ephemeral disk in the flavor we create another Volume based on the flavor's ephemeral gb disk size and attach it to vdb 3. We launch the instance with the two attached volumes; the volume created in step 1 should be used as root and contains the image that the user chose. I think the above steps should happen in nova, so nova should also have this new parameter called storage backend. Currently nova doesn't seem to support this; if nova doesn't want this we could also do it in horizon by manually calling the APIs, but that doesn't seem nice. This is mostly an improvement in the usability of using Volumes as a backend and using them to boot. Some people may choose not to use local disk files on the compute nodes at all, because the performance and reliability of Cinder volumes can be better depending on the Cinder backend. The ideal implementation is that nova/cinder supports this and in horizon we only call the API with a new backend parameter. It seems that there was something similar going on in Nova, but it never got accepted (and uses a nova.conf flag): https://blueprints.launchpad.net/nova/+spec/auto-create-boot-volumes https://lists.launchpad.net/openstack/msg12390.html Project: keystone Series: havana Blueprint: index-token-expiry Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/index-token-expiry Spec URL: None The 'expires' column in the token table is not indexed, and only "unexpired" tokens are ever queried from the backend, resulting in very slow performance. ALTER TABLE token ADD INDEX expires (expires); Project: glance Series: havana Blueprint: index-using-checksum-image-property Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/index-using-checksum-image-property Spec URL: None Having the same UUID for the same image data in different regions was rejected by the community. However, we could use the checksum image property to index the image. The URL will be "/images?checksum=" (returning a list containing at most one element) Project: keystone Series: havana Blueprint: inherited-domain-roles Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/inherited-domain-roles Spec URL: https://etherpad.openstack.org/inhertited-domain-roles With the Identity v3 API we removed the concept of a global admin for use within keystone's policy file. However, there are situations where the current capabilities are not sufficient. The classic example is when a Cloud Provider uses a Domain for each of their customers and enables customer admins to manage their own users, groups and projects.
However, the cloud provider would like to ensure that they maintain some specific admin roles across all their customers' projects (perhaps so that they could do things like evacuate VMs from a machine for maintenance). How can they ensure such a role is always added to such projects? Right now they would have to rely on the customer who created the project adding the appropriate roles for the cloud provider. A solution would be to allow a role assigned to a domain to optionally be inherited by the owned projects. That way a cloud provider could assign, for example, a maintenance role to an admin user (or group) on each domain they create, which would automatically be included in a token they scoped to any project for issuing maintenance commands. Identity API changes proposed can be found here: https://review.openstack.org/29781 Project: glance Series: havana Blueprint: inherited-image-property-support Design: Obsolete Lifecycle: Complete Impl: Unknown Link: https://blueprints.launchpad.net/glance/+spec/inherited-image-property-support Spec URL: None Glance currently has support for storing image metadata in single or nested key-pairs. However, there are use-cases which need more capability from this metadata storage service, and which would benefit from an admin-manageable list of "inherited" properties. One use-case is having the "configuration_strategy" values (which store OVF or sysprep related data) get cloned into child images from the parent image, so the image-owner doesn't have to re-enter those values every time they snapshot an instance. A second use-case is a cloud-administrator wanting to store license-cost properties on a per-image basis (like how much to charge per hour per cpu). This property can already be protected by role with the blueprint for "api-v2-property-protection", but a cloud owner will want this property to be automatically inherited to child images. The cloud-administrator doesn't want to develop a second database to store this data in, as it could get out of sync with the list of images in glance. Glance would know which property keys to inherit to child images with an "inherited" property list, stored in the glance database. A new API extension to glance would be required for a cloud-admin to manage the "inherited" property list, which should have a similar API to that used by whatever is proposed as part of https://blueprints.launchpad.net/glance/+spec/api-v2-property-protection . Based on how that property-protection blueprint gets implemented, it might be possible to have this inheritance managed at the level of the role required to access that property. Project: horizon Series: havana Blueprint: init-state-heat-topology Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/init-state-heat-topology Spec URL: None Heat recently added the ability to see all template resources before they have been created by setting an INIT state. This can be used to draw the complete topology at initialization instead of waiting for the resources to achieve a CREATE state.
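To make the inherited-image-property-support idea above concrete, here is a small illustrative sketch (not Glance's actual implementation) of applying an admin-managed list of inherited keys when building a child image's properties from its parent; the function and variable names are hypothetical.

# Illustrative sketch: copy admin-designated "inherited" property keys from a
# parent image into a child image (e.g. an instance snapshot).
def inherit_properties(parent_properties, child_properties, inherited_keys):
    """Return the child image properties with inherited keys copied in.

    Keys explicitly set on the child are not overwritten.
    """
    result = dict(child_properties)
    for key in inherited_keys:
        if key in parent_properties and key not in result:
            result[key] = parent_properties[key]
    return result


# Example: 'configuration_strategy' and a license-cost property are marked
# as inherited by the cloud administrator (names are hypothetical).
inherited_keys = ['configuration_strategy', 'license_cost_per_cpu_hour']
parent = {'configuration_strategy': 'sysprep',
          'license_cost_per_cpu_hour': '0.05',
          'os_distro': 'windows'}
child = {'description': 'snapshot of instance'}
print(inherit_properties(parent, child, inherited_keys))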
Project: nova Series: havana Blueprint: instance-group-api-extension Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension Spec URL: https://docs.google.com/document/d/1QUThPfZh6EeOOz1Yhyvx-jFUYHQgvCw9yBAGDGc0y78/edit?usp=sharing This API extends the current Compute-As-A-Service offering from Nova and allows tenants to group multiple VM instances and specify the desired relationship among these instances. With this API extension, tenants can register/list/delete a group and add/remove an instance from an existing group. Policies such as "anti-affinity" and "network-proximity" can be specified for each group. These policies will impact the placement results of the scheduling. The "members" attribute of a group tells what instances are part of this relationship. Instances can be members of multiple groups. Quantitative policies such as QoS can be added in future phase. Spec can be seen here: https://wiki.openstack.org/wiki/GroupApiExtension Project: heat Series: havana Blueprint: instance-group-nested-stack Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/heat/+spec/instance-group-nested-stack Spec URL: None InstanceGroup should be implemented with a nested stack to allow for interaction with the individual Instances in the future. Administrator (via orchestration automation or manually) wants to address individual Instances in an InstanceGroup to do things like suspend them if they've been compromised or had an internal error Project: heat Series: havana Blueprint: instance-resize-update-stack Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/instance-resize-update-stack Spec URL: None As a follow-on from the instance-update-stack blueprint (which implemented support for updating the instance Metadata block), we should implement UpdateStack support for the InstanceType property, such that an instance can be resized after it has been launched. Proposing this for the H cycle Project: nova Series: havana Blueprint: iovisor-vif-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/iovisor-vif-driver Spec URL: http://plumgrid.com/technology/ IO Visor is a fully-virtualized IO engine in which new data plane functions can be developed, loaded and instantiated at run-time. IO Visor gets deployed in the hypervisor of each Data Center server providing a Virtual Fabric Overlay and the ability to dynamically provision Virtual Domains with a rich set of fully distributed Network Functions on top. This blueprint will introduce virtual interface (VIF) driver support for plugging a VM's VIF into IO Visor engine. 
Details about IO Visor: http://plumgrid.com/technology/ Project: neutron Series: havana Blueprint: ipsec-vpn-reference Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/ipsec-vpn-reference Spec URL: https://docs.google.com/presentation/d/1uoYMl2fAEHTpogAe27xtGpPcbhm7Y3tlHIw_G1Dy5aQ/edit Implement the IPSec VPN reference implementation. Project: neutron Series: havana Blueprint: ipv6-feature-parity Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/neutron/+spec/ipv6-feature-parity Spec URL: https://docs.google.com/document/d/1RaKIfaIpy0NhHtssWhlUYgtzPpRgQzjJTF5N1DAKD_c/edit The current L3 services in Folsom have gaps when using IPv6. This blueprint will bring parity between IPv4 and IPv6 L3 features. Project: neutron Series: havana Blueprint: isolated-network Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/isolated-network Spec URL: https://wiki.openstack.org/wiki/Isolated-network When a network is created, a broadcast domain is available to plug ports into. It would be interesting to propose an option on network creation that enables isolation between ports in the same broadcast domain (network), similar to a common use of private VLANs with isolated port technologies (RFC 5517). This prevents communication between VMs on the same logical switch. This functionality could address the use cases where we create a shared network between tenants, for example. This should also work with a provider network. Project: neutron Series: havana Blueprint: ivs-interface-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/ivs-interface-driver Spec URL: None Indigo Virtual Switch (IVS) is a pure OpenFlow virtual switch designed for high performance and minimal administration. It is built on the Indigo platform, which provides a common core for many physical and virtual switches. More details on IVS can be found here: https://github.com/floodlight/ivs/blob/master/README.md This blueprint will introduce quantum interface driver support for plugging an interface into IVS. Scope: Add an IVS VIF type to Quantum to allow plugins to reference the type on Nova compute nodes. Also adds an IVS interface class to allow agents (e.g. DHCP) to bind to an IVS switch. Use Cases: Support the use of the Indigo virtual switch in place of Open vSwitch. Implementation Overview: Add an interface class for the Linux network agent. Add the new VIF type for use by the plugins. Data Model Changes: Adding a new interface class for agents and a new VIF type constant. Configuration variables: Agents can bind to the interface by referencing the class. Plugins can use the new VIF type in their port bindings. APIs: N/A Plugin Interface: Plugins can reference the new VIF type in their portbindings. Required Plugin support: N/A Dependencies: Nova compute nodes will need to have IVS installed. CLI Requirements: N/A Horizon Requirements: N/A Usage Example: To use it with the BigSwitch plugin, set the vif_type to 'ivs' in etc/quantum/plugins/bigswitch/restproxy.ini Test Cases: unit tests are included in the code.
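For the IVS blueprints above and below, the following is a hedged sketch of the general shape of an interface driver that plugs host-side devices into an Indigo Virtual Switch; the ivs-ctl add-port/del-port commands and the class layout are assumptions for illustration rather than the merged Neutron code.

# Hedged sketch of an IVS interface driver; 'ivs-ctl add-port/del-port' and
# the class/method names are assumptions, not the exact merged implementation.
import subprocess

IVS_VIF_TYPE = 'ivs'  # new VIF type constant referenced by plugins


class IVSInterfaceDriverSketch(object):
    """Plug and unplug host-side interfaces into an Indigo Virtual Switch."""

    def plug(self, device_name):
        # Attach an existing host interface (e.g. a veth end or tap device)
        # to the IVS datapath.
        subprocess.check_call(['ivs-ctl', 'add-port', device_name])

    def unplug(self, device_name):
        # Detach the interface from the IVS datapath.
        subprocess.check_call(['ivs-ctl', 'del-port', device_name])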
Project: nova Series: havana Blueprint: ivs-vif-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/ivs-vif-driver Spec URL: None Indigo Virtual Switch (IVS) is a pure OpenFlow virtual switch designed for high performance and minimal administration. It is built on the Indigo platform, which provides a common core for many physical and virtual switches. More details on IVS can be found here: https://github.com/floodlight/ivs/blob/master/README.md This blueprint will introduce virtual interface (VIF) driver support for plugging a VM's VIF into IVS. Project: heat Series: havana Blueprint: json-parameters Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/json-parameters Spec URL: None From: http://lists.openstack.org/pipermail/openstack-dev/2013-April/007989.html Allow JSON values for parameters Project: keystone Series: havana Blueprint: key-distribution-server Design: Pending Approval Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/keystone/+spec/key-distribution-server Spec URL: https://wiki.openstack.org/wiki/MessageSecurity#A_Key_Distribution_Server_in_Keystone MessageSecurity requires a central repository to register service identities, manage groups of services and store shared keys, as well as provide a ticketing system to allow secure communication between parties (signing and optionally encryption services). The Key Distribution Server manages the ticketing system and stores shared keys between the server itself and the registered services. It may also store temporary group keys. This server is necessary for the implementation of https://wiki.openstack.org/wiki/MessageSecurity Project: keystone Series: havana Blueprint: keystone-manage-token-flush Design: Discussion Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/keystone-manage-token-flush Spec URL: https://etherpad.openstack.org/keystone-token-sql-mgmt (reduced scope of this BP to exclude issues related to archiving) Bug 1032633 describes how keystone's token table grows unconditionally as new tokens are issued and not disposed of after expiration. We've left this issue to deployers to resolve, as keystone should not automatically delete tokens that provide traceability for security issues, etc. However, we should provide a tool to make it easier to manage those tokens via keystone-manage. I'd propose the following command:   $ keystone-manage token-flush Flushing tokens simply deletes expired tokens, eliminating any means of traceability. This would require a new driver method that could be overridden with an alternative implementation, but it should look something like: delete_expired_tokens() This method should not be exposed to the HTTP API (at least not as part of this BP) -- that should require additional discussion. Project: heat Series: havana Blueprint: keystone-middleware Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/keystone-middleware Spec URL: None heat.common.auth_token is an ancient fork with some heat-specific customisations. This should be replaced with keystoneclient.middleware.auth_token, which is actively maintained by the keystone developers. As part of this work, the authtoken config can be moved from the *-paste.ini file to the *.conf file.
*-paste.ini files shouldn't be considered user-editable configuration files, so we need to move out any config that needs customization on each site. This includes authtoken and ec2authtoken Project: neutron Series: havana Blueprint: l2-population Design: Discussion Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/neutron/+spec/l2-population Spec URL: https://docs.google.com/document/d/1sUrvOQ9GIl9IWMGg3qbx2mX0DdXvMiyvCw2Lm6snaWQ/edit Open source plugins overlay implementations could be improved to increase their scalability. Linux bridge VXLAN implementation, as well as OVS tunnels forwarding tables could be populated in order to limit dataplane-based learning (and broadcasts). A common mechanism driver could be implemented in ML2 to propagate the forwarding information among agents using a common RPC API. Project: neutron Series: havana Blueprint: l3-ext-gw-modes Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/l3-ext-gw-modes Spec URL: https://wiki.openstack.org/wiki/L3-ext-gw-modes-spec This blueprint is a follow up of the discussion around bug 1121129. In a nutshell, the goal is to allow users of the quantum API for specifying how a quantum router should behave when an external network is connected. For more details, please refer to the specification page. In order to guarantee full backward compatibility the current behaviour (default SNAT and DNAT - floating IPs) enabled will be the default selection. Plugins supporting the L3 API should not be required to support this feature too. To this aim, this change should be implemented as an API extension. Project: neutron Series: havana Blueprint: l3-router-port-relationship Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/l3-router-port-relationship Spec URL: https://docs.google.com/document/d/11HCGxrI0mtlEg-HlOfcuc0Ggz4Jsbbi-WiQ_oY5aP6U/edit The current L3 Router/Port relationship is managed by overloading fields. This blueprint creates a model to relate routers to ports. Project: neutron Series: havana Blueprint: lbaas-agent-scheduler Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/lbaas-agent-scheduler Spec URL: None The goal of this blueprint is to adapt lbaas namespace agent to agent scheduler framework. This will allow to have multiple agents running in the cloud and to have control on how lb services are mapped to haproxy processes. 
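Relating to the l2-population entry above, this is an illustrative sketch (assumptions only, not the merged mechanism driver) of the kind of forwarding-database entry that could be fanned out to agents over RPC when a port goes up, so overlay forwarding tables can be pre-populated instead of learned on the dataplane.

# Illustrative sketch: build the fdb entry describing one port, to be pushed
# to remote agents. The structure and the notifier call are assumptions.
def build_fdb_entry(network_id, segmentation_id, agent_tunnel_ip, port):
    """Return an fdb entry describing one port to add on remote agents."""
    return {
        network_id: {
            'segment_id': segmentation_id,
            'network_type': 'vxlan',          # assumed overlay type
            'ports': {
                agent_tunnel_ip: [(port['mac_address'], port['ip_address'])],
            },
        },
    }


# Example: announce a freshly bound port to the other agents.
entry = build_fdb_entry('net-1', 1001, '192.0.2.10',
                        {'mac_address': 'fa:16:3e:aa:bb:cc',
                         'ip_address': '10.0.0.5'})
# notifier.add_fdb_entries(context, entry)   # hypothetical RPC fan-out call
print(entry)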
Project: neutron Series: havana Blueprint: lbaas-common-agent-driver Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/lbaas-common-agent-driver Spec URL: https://wiki.openstack.org/wiki/Neutron/LBaaS/CommonAgentDriver The current haproxy-on-host driver implementation, which is using agents, is quite specific: - with haproxy it is easier to deploy the whole loadbalancer config from scratch every time than to create/update/delete separate components - the namespace driver needs a virtual interface driver on init; other drivers may have their own specific parameters So it is useful to unify the reference agent implementation to: - make it suit any driver which wants to use the async mechanism - have a single lbaas agent type and hence a single agent scheduling mechanism All we need is a small revision of the agent API and the agent mechanism for loading device driver(s) Project: neutron Series: havana Blueprint: lbaas-integration-with-service-types Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/lbaas-integration-with-service-types Spec URL: None In order to give admins and tenants the ability to choose an LBaaS implementation, we need to integrate the LBaaS service with the service types framework, where service providers are registered and configured. This blueprint targets the following changes: 1) REST API change 2) CLI change Project: neutron Series: havana Blueprint: lbaas-multiple-vips-per-pool Design: Discussion Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/neutron/+spec/lbaas-multiple-vips-per-pool Spec URL: None A logical loadbalancer device should allow users to create multiple VIPs per pool. Such a feature is supported by all software and hardware loadbalancers. That also includes the ability for VIPs to share one quantum port. The implementation will include:  * Loadbalancer DB schema change  * extension change  * migration  * Corresponding change in 'reference implementation' Project: neutron Series: havana Blueprint: like-op-list Design: New Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/like-op-list Spec URL: None Currently, the get_xxxs methods use in_ to query the resources in the filters parameter. They cannot deal with requirements such as listing all of the ports which have an IP address like '10.0.%', or listing all of the networks whose name is like 'test_network%'. This BP will add a way to deal with it. For a top-level field: quantum net-list --name test_network% x_query=like maps to query = query.filter(Network.name.like('test_network%')) For a sub-field: quantum port-list --fixed-ips ip_address~=10.0.% maps to query = query.filter(IPAllocation.ip_address.like('10.0.%')) For API compatibility we will use the current filters parameter: quantum net-list --name test_network% x_query=like gives filters = {'name': ['test_network%'], 'name_x_query': ['like']} quantum port-list --fixed-ips ip_address~=10.0.% gives filters = {'fixed-ips': {'ip_address': ['10.0.%']}, 'fixed-ips_ip_address_x_query': ['like']} Project: nova Series: havana Blueprint: list-resizes-through-admin-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/list-resizes-through-admin-api Spec URL: None As an administrator, I want to see the resize/migration operations in progress for a region/cell. - Justification: Deploys/nova-compute restarts currently break these operations, and this call would determine which instances need attention.
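The like-op-list entry above maps a '<field>_x_query' hint onto SQLAlchemy LIKE filters. A self-contained sketch of that mapping for the top-level-field case follows; the model and helper function are illustrative only, not the proposed patch.

# Self-contained sketch of LIKE filtering with SQLAlchemy, mirroring the
# '<field>_x_query' convention from the like-op-list entry (assumed names).
from sqlalchemy import Column, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Network(Base):
    __tablename__ = 'networks'
    id = Column(String(36), primary_key=True)
    name = Column(String(255))


def apply_filters(query, model, filters):
    """Apply exact (in_) or LIKE filters based on the '<field>_x_query' hint."""
    for field, values in filters.items():
        if field.endswith('_x_query'):
            continue
        column = getattr(model, field)
        if filters.get(field + '_x_query') == ['like']:
            query = query.filter(column.like(values[0]))
        else:
            query = query.filter(column.in_(values))
    return query


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Network(id='1', name='test_network1'))
session.commit()

# quantum net-list --name test_network% x_query=like
filters = {'name': ['test_network%'], 'name_x_query': ['like']}
print(apply_filters(session.query(Network), Network, filters).all())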
Project: nova Series: havana Blueprint: live-migration-to-conductor Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/live-migration-to-conductor Spec URL: None Live migrations should be unified with the new conductor-resident unified code path(s) for migration. Project: nova Series: havana Blueprint: live-snapshot-vms Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/live-snapshot-vms Spec URL: None The snapshot command takes a cold snapshot of the instance disk. There are some use cases where it is useful to be able to snapshot the memory and processor state as well, to be able to do a quick-launch of the instance. Note that booting from this special type of snapshot can be tricky, as the guest will need to reconfigure some things like its IP. Project: cinder Series: havana Blueprint: local-lvm-storage-utils Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/local-lvm-storage-utils Spec URL: None This will be the first step in offering a common block storage library and improving the LVM driver in Cinder. By separating the LVM specific code and making it independent, this code could be imported/used by other projects in OpenStack as well as utilized by the LVM driver in Cinder. Project: cinder Series: havana Blueprint: local-storage-volume-scheduling Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/cinder/+spec/local-storage-volume-scheduling Spec URL: None In some use cases (e.g. a hadoop cluster), we need to create a local volume on the same host on which the VM instance was created. CLI example: cinder create [--instance-uuid ] size This volume can only be attached to that specific VM instance. (Restrictions) cinder-volume has to be running on the compute server which wants to use local storage. Project: glance Series: havana Blueprint: location-proxy-metadata-checking Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/location-proxy-metadata-checking Spec URL: None Enable the image location proxy to check metadata when the location is changing. Project: glance Series: havana Blueprint: locations-policy Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/locations-policy Spec URL: None Added a policy layer for the locations APIs of the domain model to enable policy checks when image locations change. Project: nova Series: havana Blueprint: log-progress-of-sparse-copy Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/log-progress-of-sparse-copy Spec URL: None For the XCP/XenServer driver, add logging to report the progress of sparse_dd during a resize down. This is to help track the status during the long-running sparse_dd method. Approach: after every n seconds, log how many bytes have been written already, and how many are left. Project: horizon Series: havana Blueprint: login-domain-support Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/login-domain-support Spec URL: None To support domains in the Keystone v3 API we need to alter the login process slightly. Within that, we should have three configurable cases.
The requirements are as follows: Login changes: requires a domain name in addition to the user name Three cases: * domains are "public", may want to simply select the domain from a list (implement similarly to the region list, or can we query?) * domains are "secret", the user must type in the domain name (simple text box) * only the default domain, hide the domain input field Project: neutron Series: havana Blueprint: make-authz-orthogonal Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/neutron/+spec/make-authz-orthogonal Spec URL: https://wiki.openstack.org/wiki/Quantum/Make-authz-orthogonal The quantum codebase is now a bit 'polluted' by policy checks spread throughout db logic and sometimes even plugin logic. While per se this is not harmful, it has some drawbacks: 1) There's no uniformity of style in policy.json 2) Understanding how authorization works is not trivial, as the checks might be somewhere else in the code 3) Developers have to explicitly worry about authZ logic, which is mixed with 'business' logic 4) It is hard for users to understand how to tune authZ in their setup by editing policy.json It would be great to finally be able to decouple user request processing from user request authorization. This is something we have been carrying with us in the codebase since the F-2 milestone. (Policy.json was introduced in F-1). During the Grizzly release cycle many new extensions were added with explicit policy checks. The aim of this blueprint is to submit a set of patches that progressively (probably over the course of H-1 and H-2) will complete the separation of authZ from request processing. For further details, please refer to the specification URL. Project: neutron Series: havana Blueprint: map-networks-to-multiple-provider-networks Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/map-networks-to-multiple-provider-networks Spec URL: https://docs.google.com/document/d/1804rD8QwUvViABRTb0tfQopPiPqUe5oU5T1IJV1fX5I/edit?usp=sharing Project: neutron Series: havana Blueprint: mellanox-quantum-plugin Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/mellanox-quantum-plugin Spec URL: None The proposal is to implement a Quantum L2 plugin that supports Mellanox embedded switch functionality as part of the Ethernet/InfiniBand NIC, allowing hardware vNICs (based on SR-IOV VFs) per VM vNIC, each with its unique connectivity, security, and QoS attributes. NIC based switching provides better performance, functionality, and security/isolation for virtual cloud environments. This plugin will be implemented according to the Plugin-Agent pattern. -- Plugin: Will process Quantum API calls, manage network segmentation id allocation and support L2 and L3 Agents to provide network connectivity. -- L2 Agent: Will run on each compute node, get a mapping between a VIF and an Embedded Switch port and apply VIF connectivity. A nova VIF Driver will be provided for Embedded Switch port creation and vNIC binding (Para-virtualized or SR-IOV with optional RDMA guest access). Project: nova Series: havana Blueprint: mellanox-vif-driver Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/mellanox-vif-driver Spec URL: None Adding a VIF Driver to allocate probed Virtual Functions as macvtap devices.
It works with the Mellanox Quantum Plugin and uses the Mellanox eswitch control utils. Some details: https://wiki.openstack.org/wiki/Mellanox-vif-driver Project: glance Series: havana Blueprint: membership-policy Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/membership-policy Spec URL: None We should add policy enforcement to the membership APIs (v1 and v2) just like we have for the image APIs. Project: horizon Series: havana Blueprint: messages-on-login-page Design: Drafting Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/horizon/+spec/messages-on-login-page Spec URL: None Being able to show specific error messages on the login page would be useful and make for a nicer user experience, particularly after having to force logout a user or after preventing logging in for various reasons. Only messages that have been explicitly marked as ok for the login page should be displayed. See bug 1165702, and from review https://review.openstack.org/#/c/24878/ : "There's actually a very explicit reason we *don't* display the messages on the login screen: it can be a security hole and/or hugely confusing by way of displaying accumulated messages that should only be presented to a logged in user to whomever next lands on that login page with that browser. Somewhere there's a ticket about having a better way to filter messages that should *only* be displayed on the login screen, but that hasn't been implemented yet. I'm open to other ideas/implementations, but for now putting the messages back into the login screen isn't an option." I couldn't find the other ticket or blueprint for this so I created this one. Project: oslo Series: havana Blueprint: messaging-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/messaging-api Spec URL: https://wiki.openstack.org/wiki/Oslo/Messaging Before any API can move out of "incubation" in openstack-common, we should be confident in our ability to evolve the API and implementation without breaking backwards compatibility. To gain this confidence, we should carefully review each of the exposed public interfaces to be sure it makes long term sense. Things to watch out for:   * Implementation details exposed in the API   * cfg options which may not make sense long term   * Unused APIs   * API design which makes future additions difficult Project: oslo Series: havana Blueprint: messaging-api-notifications-client Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/oslo/+spec/messaging-api-notifications-client Spec URL: https://wiki.openstack.org/wiki/Oslo/Messaging#Handling_Notifications The RPC library in Oslo currently has no support for consuming notifications (as ceilometer needs). It's currently a hack and it ack()'s the event too early and always. This needs to change to remove the "method" structure, add versioning and not ack() until the event has been processed. Project: ceilometer Series: havana Blueprint: meter-post-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/meter-post-api Spec URL: None Some systems need to send meters to Ceilometer via the REST API.
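For the meter-post-api entry above, a hedged sketch of posting a sample with the requests library follows; the endpoint URL, sample field names and token handling are assumptions to be checked against the Ceilometer v2 API documentation.

# Hedged sketch: POST a meter sample to a Ceilometer v2 endpoint (assumed
# URL, fields and auth handling; verify against the actual API docs).
import json
import requests

CEILOMETER_URL = 'http://ceilometer.example.com:8777'   # assumed endpoint
TOKEN = 'keystone-token'                                # assumed auth token

samples = [{
    'counter_name': 'app.requests',
    'counter_type': 'cumulative',
    'counter_unit': 'request',
    'counter_volume': 42,
    'resource_id': 'my-app-instance-1',
    'resource_metadata': {'version': '1.0'},
}]

resp = requests.post(
    CEILOMETER_URL + '/v2/meters/app.requests',
    headers={'X-Auth-Token': TOKEN, 'Content-Type': 'application/json'},
    data=json.dumps(samples))
print(resp.status_code)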
Project: neutron Series: havana Blueprint: migrations-for-service-plugins Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/migrations-for-service-plugins Spec URL: None Some db migrations need to be run only if particular service is enabled (i.e. service plugin is configured in neutron config). Currently such migrations specify 'migration_for_plugins = ['*']' which is not quite correct. Also following bugs are possible: https://bugs.launchpad.net/neutron/+bug/1209151 - agent extension migration is applied for particular core_plugins while lbaas agent scheduler migration for all plugins. All this can be fixed by adding service plugins support to the migration framework. Project: neutron Series: havana Blueprint: ml2-gre Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/ml2-gre Spec URL: None The ml2 plugin currently supports the flat, local, and vlan network types via TypeDrivers, using the openvswitch, linuxbridge, and hyperv agents. For parity with the openvswitch plugin, it needs to also support the gre network type. This involves a GreTypeDriver that will manage allocation of tunnel_ids, along with implementation of the tunnel endpoint management RPCs used by the openvswitch agent for GRE networks. This implementation should also take into consideration the possibility that future network types such as vxlan (see https://blueprints.launchpad.net/quantum/+spec/ovs-vxlan-lisp-tunnel, https://blueprints.launchpad.net/quantum/+spec/vxlan-linuxbridge, and https://blueprints.launchpad.net/quantum/+spec/openvswitch-kernel- vxlan) may also require tunnel endpoint management RPC support (when multicast is not being used). Whether tunnel endpoints are a global resource, or are specific to each network type needs to be determined. It should also take the https://blueprints.launchpad.net/quantum/+spec /ovs-tunnel-partial-mesh and https://blueprints.launchpad.net/quantum/+spec/l2-population blueprints into consideration, since these are likely to extend or modify the tunnel endpoint management RPC APIs and semantics. Project: neutron Series: havana Blueprint: ml2-md-cisco-nexus Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/neutron/+spec/ml2-md-cisco-nexus Spec URL: None Port the quantum/plugin/cisco/nexus plugin to run under the Modular Layer 2 (ML2) infrastructure as defined in https://blueprints.launchpad.net/quantum/+spec/ml2-mechanism-drivers This blueprint is dependent on: "Initial Modular L2 plugin implementation." code committed under https://review.openstack.org/#/c/20105/ "Initial Modular L2 Mechanism Driver implementation." code to be committed under https://review.openstack.org/#/c/33201 Project: neutron Series: havana Blueprint: ml2-mechanism-drivers Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/ml2-mechanism-drivers Spec URL: None The MechanismDriver API and associated MechanismManager class are just stubs in the current ml2 implementation. These both need to be extended to support integration with external devices such as SDN controllers and top-of-rack switches. MechanismDrivers need to be called as part of CRUD operations on the network and port resources, both within the DB transaction and after the DB transaction commits. 
Involvement of MechanismDrivers in port binding (plugging) is being addressed in https://blueprints.launchpad.net/quantum/+spec/ml2-portbinding. The relationship between this BP and https://blueprints.launchpad.net/quantum/+spec/ovsplugin-hardware- devices must be determined. One approach is to ensure the HardwareDriverAPI can be implemented in terms of ml2's MechanismDriver API, the other is to simply use the MechanismDriver API in place of the HardwareDriverAPI. Project: neutron Series: havana Blueprint: ml2-multi-segment-api Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/neutron/+spec/ml2-multi-segment-api Spec URL: None The providernet extension currently exposes details of a virtual network using a (network_type, physical_network, segmentation_id) tuple of attributes. The ml2 plugin's DB schema and driver APIs support virtual L2 networks made up of multiple segments with different details. The network_type value "multi-segment" is used when network is made up of more than one segment, but there is currently no API for accessing the tuples describing these individual segments, nor for adding segments to a network to form a multi-segment network. A separate resource type, either top-level or as a sub-resource of network, is needed, with the existing provider attributes on network retained for compatibility and ease-of-use for single-segment networks. Project: neutron Series: havana Blueprint: ml2-portbinding Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/neutron/+spec/ml2-portbinding Spec URL: None The ml2 plugin currently returns a hard-coded value of "unbound" for the binding:vif_type port attribute. Instead, when a port needs to be bound, it should call into the registered MechanismDrivers to determine what mechanism and details will be used to bind that specific port, including the binding:vif_type value and the specific network segment to use. MechanismDrivers for the openvswitch, linuxbridge, and hyperv agents would use the binding:host_id value from nova with the agents_db to see if their supported agent is running on that host. If so, it would would determine if a segment of the port's network can be bound by checking that agent's configuration to see if it supports the segment's network_type, and, where needed, if that host has a bridge or interface mapping for the segment's physical_network. If one MechanismDriver cannot bind, others would be tried, based on a prioritized list. MechanismDrivers for SDN controllers would also eventually participate in binding in the same way. Before the binding:host_id has been set by nova, the binding:vif_type should have the value "unbound". If the binding:host_id has been supplied, and a valid binding cannot be created, then the binding:vif_type should have the value "binding_failed". One open question is whether its sufficient to perform port binding each time get_port() executes, or if it should only be performed when details such as binding:host_id change, with the results cached in a database table. Another open question, likely deferred until a future blueprint, is whether to support establishment of composite bindings. For example, a hypervisor vSwitch might be bound at the lowest level, along with a top-of-rack switch binding, and maybe a core switch binding. The port binding mechanism would need to make sure a complete and valid chain of bindings could be established. 
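The ml2-portbinding entry above describes how agent-based mechanism drivers would decide whether they can bind a port. A simplified, hypothetical sketch of that decision follows; the data structures, names and returned vif_type are assumptions, not the actual ML2 driver API.

# Hypothetical sketch of the agent-based port-binding decision (assumed shapes).
def try_bind_port(port, segments, agents, bridge_mappings, tunnel_types):
    """Return (segment, vif_type) if an L2 agent on the port's host can bind it."""
    host = port.get('binding:host_id')
    if not host:
        return None, 'unbound'
    # Is a supported agent alive on that host (per the agents_db)?
    if not any(a['host'] == host and a['alive'] for a in agents):
        return None, 'binding_failed'
    for segment in segments:
        net_type = segment['network_type']
        if net_type in tunnel_types:
            return segment, 'ovs'          # assumed vif_type for this agent
        if net_type in ('flat', 'vlan') and \
                segment['physical_network'] in bridge_mappings.get(host, {}):
            return segment, 'ovs'
    return None, 'binding_failed'


# Example decision for a port scheduled on compute-1.
segment, vif_type = try_bind_port(
    {'binding:host_id': 'compute-1'},
    [{'network_type': 'vlan', 'physical_network': 'physnet1'}],
    agents=[{'host': 'compute-1', 'alive': True}],
    bridge_mappings={'compute-1': {'physnet1': 'br-eth1'}},
    tunnel_types=['gre'])
print(segment, vif_type)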
Project: neutron Series: havana Blueprint: ml2-typedriver-extra-port-info Design: New Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info Spec URL: None Currently agent uses segment tuple (type,physical,segmentation_id) to decide how to setup the port, but sometimes extra information may be needed in order to setup the port. eg vxlan may need the multicast ip address. This blueprint provides a mechanism where TypeDriver can provide extra information to agent when the agent is going to setup a port. Project: neutron Series: havana Blueprint: ml2-vxlan Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/ml2-vxlan Spec URL: None Support for VXLAN is being added to the openvswitch agent (https://blueprints.launchpad.net/quantum/+spec/ovs-vxlan-lisp-tunnel and https://blueprints.launchpad.net/quantum/+spec/openvswitch-kernel- vxlan) and linuxbridge agent (https://blueprints.launchpad.net/quantum/+spec/vxlan-linuxbridge). A VxlanTypeDriver for the ml2 plugin is needed to support these, ideally allowing the various agent implementations to coexist and interoperate for the same vxlan network. Tunnel endpoint management RPC support may also be needed (see https://blueprints.launchpad.net/quantum/+spec/ml2-gre). Project: neutron Series: havana Blueprint: mlnx-plugin-improvments Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/neutron/+spec/mlnx-plugin-improvments Spec URL: None Add support for Host Port Binding, Agent Scheduler. Keep compatibility to LinuxBridge plugin implementation to allow Network node deployment via Linux Bridge L2 agent. Project: neutron Series: havana Blueprint: modular-l2 Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/modular-l2 Spec URL: https://docs.google.com/document/d/1FXo0Hlc5c0myvBk99Bw51yOdHmEXHSaFKUhEGNEuDo4/edit The Modular L2 Plugin and Agent use drivers to support extensible sets of network types and of mechanisms for accessing networks of those types. See the linked specification for details. Project: ceilometer Series: havana Blueprint: monitoring-metrics-object Design: New Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/ceilometer/+spec/monitoring-metrics-object Spec URL: https://wiki.openstack.org/wiki/Ceilometer/blueprints/monitoring-metrics-object In Ceilometer, the meters are polled in the form of Counter(namedtuple), do we need two different peer objects or a common lower level object for both meters and metrics? After the Havana design summit, it's agreed that a common schema will be shared between meters and metrics. So The target of this blueprint is to check and confirm that the customized metrics(openstack-related) will work with the UDP publisher and the common schema. e.g. metrics produced by openstack logging tools, Ganglia, Statsd etc. Bridge the gaps if there is any. Project: ceilometer Series: havana Blueprint: monitoring-physical-devices Design: Approved Lifecycle: Started Impl: Slow progress Link: https://blueprints.launchpad.net/ceilometer/+spec/monitoring-physical-devices Spec URL: http://wiki.openstack.org/Ceilometer/MonitoringPhysicalDevices It should be possible to monitor physical devices in the OpenStack environment. 
The monitored devices are: - the physical servers on which Glance, Cinder, Quantum, Swift, Nova compute nodes and Nova controllers run - the network devices used in the OpenStack environment (switches, firewalls ...) Project: nova Series: havana Blueprint: mq-failover Design: Obsolete Lifecycle: Complete Impl: Unknown Link: https://blueprints.launchpad.net/nova/+spec/mq-failover Spec URL: None When the MQ server is in Active-Active mode, each Nova component's fanout queue exists on each MQ server. When the first MQ server goes down, the fanout queues that existed on the first MQ server are not created on the second MQ server. So add check logic for the existence of the fanout queue, and if the fanout queue does not exist, create it. Project: ceilometer Series: havana Blueprint: multi-dispatcher-enablement Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/multi-dispatcher-enablement Spec URL: None Ceilometer currently can only use one dispatcher to be the outlet for metering data. It is necessary to allow ceilometer to support multiple dispatchers based on ceilometer configuration so that multiple outlets can be made available on a single Ceilometer server. Project: nova Series: havana Blueprint: multi-nic-floating-ip-assignment Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/multi-nic-floating-ip-assignment Spec URL: None The current floating IP API extension only accepts the instance and address to be assigned. Where the instance is connected to more than one network the behaviour is to associate the floating IP with the first fixed IP of the instance (and to log a warning). This change adds a fixed IP address as an optional parameter, allowing the floating IP to be associated with a specific fixed IP. Without this optional parameter the API behaviour is unchanged. If specified, the fixed IP must be associated with the instance. Project: neutron Series: havana Blueprint: multi-segment-and-trunk-support-cisco-nexus1000v Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/multi-segment-and-trunk-support-cisco-nexus1000v Spec URL: https://docs.google.com/document/d/1JGDoXRSynkkAP5URQ9PR3zHpMd_zvYe1_cROJigyZvw Multi-Segment Networks and Trunk support for Cisco Nexus 1000v In a multi-segment network, multiple network segments of different types (VLAN/VXLAN) can be bridged to form a single broadcast domain. A VM port will be connected to only one segment (access port). A trunk network will contain multiple network segments of the same type (VLAN/VXLAN). A VM port will have a bunch of segments enabled (trunk port). How it works In the Cisco Nexus 1000v OpenStack plugin, a network profile is a container or a pool of segment IDs of a specific type. A network profile can be of type VLAN, VXLAN, MULTI-SEGMENT or TRUNK. Each network created in Neutron is associated with a network profile. To create a multi-segment network, typically a user needs to create a network of type VLAN and another of type VXLAN. Then the user needs to create a multi-segment network and add the VLAN and VXLAN based networks to it. Cisco Nexus 1000v takes care of bridging the VLAN and VXLAN based networks and creating the broadcast domain. To create a trunk network, a user needs to first create the networks he needs to trunk. Then the user needs to create a trunk network from a trunk-type network profile and add the networks created earlier to it.
Neutron Changes There is no change to the core Neutron resources. All the changes necessary for multi-segment and trunk network support are done through attribute extensions on top of the Cisco Nexus 1000v plugin. Project: neutron Series: havana Blueprint: multi-vendor-support-for-lbaas Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/multi-vendor-support-for-lbaas Spec URL: https://docs.google.com/document/d/1OT9m3bWl4yimvXLXTh_REQqONSS_f8jwplm7Y1iBxC8/edit?usp=sharing As decided at the Portland summit, we want to enable LBaaS driver implementations. This BP is "abstract" and used as the master BP for other BPs (see Whiteboard) Project: neutron Series: havana Blueprint: multi-vendor-support-for-lbaas-step0 Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/multi-vendor-support-for-lbaas-step0 Spec URL: None See step#0 in the design doc - http://goo.gl/5Dgvb The master BP is https://blueprints.launchpad.net/quantum/+spec/multi-vendor-support-for-lbaas Project: neutron Series: havana Blueprint: multi-vendor-support-for-lbaas-step1 Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/multi-vendor-support-for-lbaas-step1 Spec URL: None See step#1 in the design doc - http://goo.gl/5Dgvb The master BP is https://blueprints.launchpad.net/quantum/+spec/multi-vendor-support-for-lbaas Project: neutron Series: havana Blueprint: multi-workers-for-api-server Design: Review Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/multi-workers-for-api-server Spec URL: https://wiki.openstack.org/wiki/MultiAPIWorkersForNeutron Currently, there is only one process (pid) running for quantum-server. This is not enough when it is under heavy API load, so multiple workers for quantum-server are urgently needed. Multiple processes can be enabled by changing a flag in the configuration file, allowing work to be shared between multiple cores on the one machine. Project: nova Series: havana Blueprint: multiple-clusters-managed-by-one-service Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service Spec URL: https://wiki.openstack.org/wiki/Nova/multiple-clusters-managed-by-one-service Title: Multiple VMware vCenter Clusters managed using a single compute service The current implementation of the VMware VC driver for OpenStack uses one proxy server running the nova-compute service to manage a cluster. The new model will have the following changes to the nova-compute service VMware VC Driver: • To allow a single VC driver to model multiple Clusters in vCenter as multiple nova-compute nodes. • To allow the VC driver to be configured to represent a set of clusters as compute nodes. • To dynamically create / update / delete nova-compute nodes based on changes in vCenter for Clusters. Nova-compute is identified uniquely with the combination of vCenter + mob id of the cluster pool. This is an enhancement to the VMware vCenter Nova driver. The VMware vCenter driver treats an ESX cluster as one compute node, whereas our proposal is in line with the Baremetal nova driver, where we would like to present one nova proxy driver to serve multiple ESX clusters.
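For the multiple-clusters-managed-by-one-service entry above, here is a hedged sketch of the core idea of one driver exposing each configured cluster as its own compute node via get_available_nodes(); the option name and driver internals are assumptions, not the actual VMware driver code.

# Hedged sketch: one nova-compute service presenting several vCenter clusters
# as separate hypervisor nodes. Option name and class are assumptions.
from oslo.config import cfg

vmware_opts = [
    cfg.MultiStrOpt('cluster_name',
                    default=[],
                    help='vCenter cluster names managed by this service.'),
]
CONF = cfg.CONF
CONF.register_opts(vmware_opts, group='vmware')


class MultiClusterDriverSketch(object):
    """Each configured cluster becomes a separate 'node' of this compute host."""

    def __init__(self, vcenter_host):
        self._vcenter_host = vcenter_host

    def get_available_nodes(self, refresh=False):
        # One nodename per configured cluster, unique per vCenter + cluster.
        return ['%s.%s' % (self._vcenter_host, cluster)
                for cluster in CONF.vmware.cluster_name]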
Project: heat Series: havana Blueprint: multiple-engines Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/heat/+spec/multiple-engines Spec URL: https://etherpad.openstack.org/heat-multiple-engines We need support for running multiple engines for scale-out: https://etherpad.openstack.org/heat-multiple-engines Project: glance Series: havana Blueprint: multiple-image-locations Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/multiple-image-locations Spec URL: None Etherpad from the Grizzly summit: https://etherpad.openstack.org/GrizzlyMultipleImageLocations Project: keystone Series: havana Blueprint: multiple-ldap-servers Design: Review Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/multiple-ldap-servers Spec URL: https://etherpad.openstack.org/keystone-multiple-ldap Allow configuration of multiple identity backends on a domain-by-domain basis. This would allow a domain to have its own LDAP or SQL services, or one LDAP server to serve multiple domains, with each domain in a different subtree. Domains that require this will provide a domain-specific configuration file, while other domains will share a common backend driver as they do today. Project: glance Series: havana Blueprint: multiple-locations-downloading Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/multiple-locations-downloading Spec URL: None Enable the image domain object to fetch data from multiple locations, allowing API clients to consume images from multiple backend stores. Project: nova Series: havana Blueprint: multiple-scheduler-drivers Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/multiple-scheduler-drivers Spec URL: https://wiki.openstack.org/wiki/Nova/MultipleSchedulerPolicies In heterogeneous environments, it might be desirable to apply different scheduling policies in different host aggregates. This could be different drivers, or even the same driver with different configurations (e.g., FilterScheduler with different sets of filters/weights and/or different parameters for particular filters/weights). Project: horizon Series: havana Blueprint: multiple-service-endpoints Design: Drafting Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/multiple-service-endpoints Spec URL: None The Keystone service catalog can contain multiple endpoints for the same service in a given region, but Horizon only uses the first one and doesn't give any option for selecting regions by name, etc. We should enable the use of alternate endpoints, probably in a configurable fashion (round-robin vs. random vs. selectable) to enable various use-cases. Large questions remain on whether or not there should be the ability to have "default" services that can serve across various regions if a service is not present in one, etc. Project: keystone Series: havana Blueprint: multiple-sql-migrate-repos Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/multiple-sql-migrate-repos Spec URL: None SQL migrations are currently served out of a single repo that lives in keystone/common/sql/migrate_repo. Plugins that are not core should not be modifying this repo, but should instead have their own repos, and should have no SQL constraints on tables defined in other repos.
The table migrate_version in the database has a column for the repo, which is the path to where the migrations live. Each of the plugins needs to advertise to the CLI that it has database migrations to perform. Then running keystone-manage db_sync will migrate all of the registered repos. Project: heat Series: havana Blueprint: native-cinder-volume Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/native-cinder-volume Spec URL: None This resource should share the same code as AWS::EC2::Volume but its properties schema should be derived from the Cinder REST volume creation API call. According to the python-cinderclient volume create call, the following properties are eligible for being in the resource properties schema for OS::Cinder::Volume (with equivalent AWS properties in parentheses): size (Size) snapshot_id (SnapshotId) display_name display_description volume_type availability_zone (AvailabilityZone) status attach_status metadata (Tags) imageRef source_volid FnGetAtt should expose any information that a look-up on the volume provides. According to the python-cinderclient volume attachment call, the following properties are eligible for being in the resource properties schema for OS::Cinder::VolumeAttachment (with equivalent AWS properties in parentheses): volume_id (VolumeId) instance_uuid (InstanceId) mountpoint (Device) Project: heat Series: havana Blueprint: native-nova-instance Design: Approved Lifecycle: Started Impl: Slow progress Link: https://blueprints.launchpad.net/heat/+spec/native-nova-instance Spec URL: None Create an OS::Nova::Server resource. This resource should share the same code as AWS::EC2::Instance but its properties schema should be derived from the Nova REST instance creation API call. According to the python-novaclient create call, the following properties are eligible for being in the resource properties schema (with equivalent AWS properties in parentheses): name imageRef (ImageId) flavorRef (InstanceType) user_data (UserData) metadata (Tags) reservation_id key_name (KeyName) os:scheduler_hints (NovaSchedulerHints could then be removed from AWS::EC2::Instance) config_drive adminPass security_groups (SecurityGroups) personality availability_zone (AvailabilityZone) block_device_mapping networks (NetworkInterfaces) FnGetAtt should expose any information that a look-up on the instance provides. Project: heat Series: havana Blueprint: native-tools-bootstrap-config Design: New Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/heat/+spec/native-tools-bootstrap-config Spec URL: None Instances may be booting "vanilla" images which will need configuration in order to interact with the orchestration system. An in-instance tool that is able to download, configure, and install software will be useful in this case. In CloudFormation, the tool that provides this is 'cfn-init'. Project: neutron Series: havana Blueprint: nec-disribute-router Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/neutron/+spec/nec-disribute-router Spec URL: None The goal of this blueprint is to leverage the distributed router feature in the NEC OpenFlow controller. With this feature, east-west traffic in data centers is transferred directly between hypervisors or network appliances without going through a router node. Two types of neutron router will be supported: l3-agent and distributed. The type is specified via an attribute provided by a plugin-specific extension.
(This is similar to the service provider mechanism, but right now there is a discussion about what we should do when a corresponding provider disappears due to a configuration change or dynamic configuration, so I would like to implement this BP without depending on the service provider framework first). The distributed router in the NEC OpenFlow controller does not currently support NAT, so l3-agent and distributed routers coexist. To support the l3-agent alongside distributed routers, the l3-agent router scheduling logic will be enhanced to exclude distributed routers from the list of scheduling targets (this is done inside the NEC plugin implementation). Project: neutron Series: havana Blueprint: nec-plugin-test-coverage Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/nec-plugin-test-coverage Spec URL: None The aim of this blueprint is to improve the test coverage of the NEC plugin. The work is split into several parts: - tests for the plugin itself and the OpenFlow controller driver - tests for nec-agent - tests for the packet filter extension Project: neutron Series: havana Blueprint: nec-port-binding Design: Approved Lifecycle: Not started Impl: Unknown Link: https://blueprints.launchpad.net/neutron/+spec/nec-port-binding Spec URL: None This BP covers two features related to the port-binding extension. One is host-id support in port binding, which is already supported in several plugins. The other is to expose portinfo in the NEC plugin through the binding:profile attribute of the port binding extension. portinfo is a mapping between a neutron port id and OpenFlow switch physical information (datapath_id, port number). This information is usually updated by the plugin agent on compute nodes. However, in cases where baremetal nodes are used or hardware appliances are connected to a neutron network, portinfo needs to be registered from outside since there is no plugin agent. Although portinfo then needs to be updated by an external system, this is useful for connecting hardware nodes to a neutron network. Project: heat Series: havana Blueprint: nested-stack-updates Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/heat/+spec/nested-stack-updates Spec URL: None Currently, if a nested stack template changes, we simply replace the nested stack. To allow easier management of very large composed stack definitions, we should allow in-place updates of nested stacks. Project: cinder Series: havana Blueprint: netapp-cinder-nfs-image-cloning Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/cinder/+spec/netapp-cinder-nfs-image-cloning Spec URL: None Cinder provides an option for specifying an image-id as a parameter while creating a volume. This is used to first check whether image cloning can be performed and, if not, to fall back to copying the image to the volume. The NetApp drivers do not currently implement the clone_image method, hence the functionality falls back to copying the image to the volume. NetApp would like to add an implementation of clone_image in the NFS drivers, which would provide ways to efficiently clone images using the following workflows: 1. In cases where glance is backed by an NFS store, cinder can get the actual NFS path for the image and clone the image directly in case it is present on the same NFS share as is used by the NFS driver on cinder. 2. The NFS driver will also maintain an internal image cache on the NFS shares, which would be used to provide cloned images in case the NFS store used by glance is different from that used by the NFS driver on cinder.
These image copies residing in the image cache would be cloned copies of the actual images downloaded, maintained internally by the driver on the NFS share and not registered with any OpenStack-related infrastructure. The sole purpose of the image cache is to provide a source for efficient image cloning. The image cache will be cleaned up at regular intervals when shares lack space, evicting outdated, older images that occupy significant space. This will be an asynchronous job which will not interfere with regular driver functionality. Project: cinder Series: havana Blueprint: netapp-unified-driver Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/netapp-unified-driver Spec URL: None The current NetApp block storage drivers are divided across multiple classes based on the technology used (NFS/iSCSI and different storage families). This creates confusion at the end-user level about which driver to configure and which one NetApp recommends with which options. It also makes the documentation messy, as multiple driver classes, which keep growing over time, have to be mentioned in the documentation and configured as cinder backends at the time of usage. The NetApp unified driver attempts to solve this problem by providing a single entry-point driver class for the different NetApp storage families and storage protocols that can be configured with simple options. It is to be built in a plug-in style architecture to ease registering existing and new drivers without any significant code change. It gives the opportunity to provide recommended/default driver options when configuring a driver backend for cinder using a single NetApp driver, without the need to remember obscure class names. Project: nova Series: havana Blueprint: network-bandwidth-entitlement Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/network-bandwidth-entitlement Spec URL: https://wiki.openstack.org/wiki/NetworkBandwidthEntitlement Currently the Nova resources that the host_manager keeps track of (disk, memory and number of vCPUs) are largely independent of differences in physical servers, which makes keeping track of them fairly simple. However, this makes it hard to use the scheduler effectively in a heterogeneous server environment. The cpu-entitlement blueprint extends this to add host-independent CPU capacity: https://blueprints.launchpad.net/nova/+spec/cpu-entitlement This blueprint adds network bandwidth entitlement as an attribute of flavors, which allows instances to be scheduled based on host network capacity. Project: horizon Series: havana Blueprint: network-quotas Design: Drafting Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/horizon/+spec/network-quotas Spec URL: None When Quantum is enabled we should also expose the quota management and usage displays for network quotas that we do for everything else. Project: neutron Series: havana Blueprint: neutron-client-n1000v-multisegment-trunk Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/neutron/+spec/neutron-client-n1000v-multisegment-trunk Spec URL: None Add client support for creating multi-segment and VLAN/VXLAN trunk network profiles.
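As a rough illustration of the client-side workflow described for the Nexus 1000v plugin earlier, the sketch below posts a trunk network profile straight to the Neutron REST API with the requests library. The network_profiles resource path, payload fields, endpoint and token handling are all assumptions made for illustration, not the actual python-neutronclient additions.

# Hypothetical sketch of creating a VLAN trunk network profile through the
# Neutron REST API. The resource path and payload fields are assumptions;
# the real client additions may differ.
import json
import requests

NEUTRON_URL = "http://controller:9696/v2.0"   # assumed endpoint
TOKEN = "<keystone-token>"                    # obtained from Keystone beforehand

headers = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

profile = {
    "network_profile": {
        "name": "trunk-profile-1",
        "segment_type": "trunk",      # assumed field: trunk of VLAN segments
        "sub_type": "vlan",
    }
}

resp = requests.post(NEUTRON_URL + "/network_profiles.json",
                     headers=headers, data=json.dumps(profile))
resp.raise_for_status()
print(resp.json())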
Project: neutron Series: havana Blueprint: neutron-fwaas-explicit-commit Design: New Lifecycle: Not started Impl: Unknown Link: https://blueprints.launchpad.net/neutron/+spec/neutron-fwaas-explicit-commit Spec URL: https://docs.google.com/document/d/1gmJoAYJOMpdGuKXTJVbBVlCDAou0k_h2DYuD4W7aEyg/edit#heading=h.9xfek5j4sfhh In Neutron Firewall as a Service (FWaaS), we currently support an implicit commit mode, wherein a change made to a firewall_rule is propagated immediately to all the firewalls that use this rule (via their firewall_policy association), and the rule gets applied in the backend firewalls. This might be acceptable; however, it is different from the explicit commit semantics which most firewalls support. Having an explicit commit operation ensures that multiple rules can be applied atomically, as opposed to the implicit case where each rule is applied atomically on its own, which opens up the possibility of security holes between two successive rule applications. Project: nova Series: havana Blueprint: new-hypervisor-docker Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/new-hypervisor-docker Spec URL: https://github.com/dotcloud/openstack-docker/blob/082fac0c55835a2197eb3fd7756221a98005b487/docs/nova_blueprint.md Docker is an open-source engine which automates the deployment of applications as highly portable, self-sufficient containers which are independent of hardware, language, framework, packaging system and hosting provider. Containers don't aim to be a replacement for VMs, they are just complementary in the sense that they are better for specific use cases. Nova support for VMs is currently advanced thanks to the variety of hypervisors running VMs. However, that is not the case for containers, even though libvirt/LXC is a good starting point. Docker aims to provide that second level of integration. Project: cinder Series: havana Blueprint: nexenta-nfs-volume-driver Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/cinder/+spec/nexenta-nfs-volume-driver Spec URL: None Add an NFS volume driver to support NexentaStor appliance(s). Utilize ZFS folders for shares, ZFS snapshots and clones. The driver should be able to: Create Volume Delete Volume Create Snapshot Delete Snapshot Create Volume from Snapshot Create Cloned Volume Attach Volume Detach Volume Please proceed to code review: https://review.openstack.org/#/c/41984/ Project: neutron Series: havana Blueprint: nicira-plugin-get-improvements Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/nicira-plugin-get-improvements Spec URL: https://wiki.openstack.org/wiki/Nicira-plugin-get-improvements This blueprint falls within the framework of the scalability and performance improvements scheduled for the Havana release. The aim of this blueprint is to stop synchronizing resource status every time a GET is issued. Project: nova Series: havana Blueprint: no-compute-fanout-to-scheduler Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/no-compute-fanout-to-scheduler Spec URL: https://etherpad.openstack.org/no-compute-fanout-to-scheduler Remove nova-compute's fan-out of capabilities to the schedulers.
This data is mostly unused, and the remaining parts that are used should be stored in the database. See https://etherpad.openstack.org/no-compute-fanout-to-scheduler Project: nova Series: havana Blueprint: normalize-scheduler-weights Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/normalize-scheduler-weights Spec URL: https://wiki.openstack.org/wiki/Scheduler/NormalizedWeigts See the description at https://wiki.openstack.org/wiki/Scheduler/NormalizedWeights Project: nova Series: havana Blueprint: notification-host-aggregate Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/notification-host-aggregate Spec URL: https://docs.google.com/document/d/1MJ3nUBI6Dp_-THklGxaLjyLcMp13591bNDtDqUfXYP8/edit?usp=sharing Currently in OpenStack, when a customer creates/deletes a host aggregate, or adds a host to/removes a host from a host aggregate, OpenStack does not generate notifications for those operations. If a component wants to act on host aggregate updates, it needs to query the DB to get the status of the host aggregate, which is not real-time. It would be better for the component to receive notifications for aggregate-related operations so that it can respond in time. Solution Description: Add notifications for the following operations related to host aggregates a) Send a notification when a new host aggregate is created b) Send a notification when a host aggregate is deleted c) Send a notification when a host is added to a host aggregate d) Send a notification when a host is removed from a host aggregate e) Send a notification when a host aggregate's name or availability zone is updated f) Send a notification when a host aggregate's metadata is updated Project: nova Series: havana Blueprint: nova-boot-bandwidth-control Design: Superseded Lifecycle: Complete Impl: Unknown Link: https://blueprints.launchpad.net/nova/+spec/nova-boot-bandwidth-control Spec URL: None Admins can set the network bandwidth when creating an instance (nova boot). Project: ceilometer Series: havana Blueprint: nova-cell-support Design: Drafting Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/ceilometer/+spec/nova-cell-support Spec URL: None Investigate Nova Cell support, if it lands early enough Project: nova Series: havana Blueprint: nova-network-legacy Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/nova-network-legacy Spec URL: None nova.network.model.NetworkInfo.legacy converts an instance of NetworkInfo into a legacy version of NetworkInfo. Project: nova Series: havana Blueprint: nova-tests-code-duplication Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/nova-tests-code-duplication Spec URL: None Lots of code in the nova tests is duplicated. This blueprint is a summary list of problem places that need to be worked on. We should create separate test classes for different methods to improve test setUp code. Tests should be refactored to remove duplicated code and improve test assertions.
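As an illustrative, hypothetical example of the kind of refactoring meant here (class and attribute names are invented), a shared base class can own the repeated setUp wiring so individual test classes only add what differs:

# Hypothetical illustration of reducing duplicated setUp code with a shared
# base class; names are invented for the example.
import unittest


class _BaseComputeTestCase(unittest.TestCase):
    """Common fixture shared by the compute test classes."""

    def setUp(self):
        super(_BaseComputeTestCase, self).setUp()
        # Previously copy-pasted into every test class.
        self.flags = {"compute_driver": "fake.FakeDriver"}
        self.instance = {"uuid": "fake-uuid", "host": "fake-host"}


class RebuildTestCase(_BaseComputeTestCase):
    def test_rebuild_keeps_uuid(self):
        rebuilt = dict(self.instance, task_state="rebuilding")
        self.assertEqual(self.instance["uuid"], rebuilt["uuid"])


class ResizeTestCase(_BaseComputeTestCase):
    def test_resize_sets_task_state(self):
        resized = dict(self.instance, task_state="resize_prep")
        self.assertEqual("resize_prep", resized["task_state"])


if __name__ == "__main__":
    unittest.main()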
Project: nova Series: havana Blueprint: nova-v3-api-filter Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/nova-v3-api-filter Spec URL: None Filter API discovery based on tenant role / policy.json Project: neutron Series: havana Blueprint: nvp-agent-scheduler-extension Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/nvp-agent-scheduler-extension Spec URL: None NVP can use the quantum DHCP agent to provide DHCP services to tenant VMs. The aim of this blueprint is to provide support for the agent scheduler to allow the deployment of multiple DHCP agents. This improves scalability and avoids SPOFs. For more information, please read: - http://docs.openstack.org/trunk/openstack-network/admin/content/demo_multiple_operation.html - http://docs.openstack.org/api/openstack-network/2.0/content/agent_ext.html Support for this extension is rather minimal and reduces to just ensuring that the API extension is properly hooked up with the plugin. Project: neutron Series: havana Blueprint: nvp-dhcp-metadata-services Design: Drafting Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/neutron/+spec/nvp-dhcp-metadata-services Spec URL: None In order to provide DHCP on logical networks, the NVP Neutron plugin uses the Neutron DHCP agent. A similar consideration can be made as far as metadata traffic is concerned. This blueprint is about making the plugin capable of adopting alternative solutions, for instance DHCP and metadata services provided by the NVP platform itself. Project: neutron Series: havana Blueprint: nvp-distributed-router Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/nvp-distributed-router Spec URL: https://wiki.openstack.org/wiki/Neutron/nvp-distributed-router The aim of this blueprint is to leverage the 'distributed' router feature which has been available since NVP 3.1. Thanks to this feature, east-west traffic goes directly from the source hypervisor to the destination hypervisor, without traversing a routing node. The design for this blueprint will ensure compatibility with pre-3.1 NVP deployments. Full details on the proposed implementation to follow. Project: neutron Series: havana Blueprint: nvp-extra-route-extension Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/nvp-extra-route-extension Spec URL: None Provide support for the extension http://docs.openstack.org/api/openstack-network/2.0/content/extraroute-ext.html in the NVP plugin. Project: neutron Series: havana Blueprint: nvp-fwaas-plugin Design: New Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/neutron/+spec/nvp-fwaas-plugin Spec URL: None This blueprint is for providing a firewall service on the NVP advanced service router. The implementation will provide a firewall plugin which conforms to the current FWaaS service extension to configure the NVP advanced service router. Project: neutron Series: havana Blueprint: nvp-lbaas-plugin Design: New Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/neutron/+spec/nvp-lbaas-plugin Spec URL: None This blueprint is for providing a load balancer service on the NVP advanced service router. The implementation will provide a load balancer plugin which conforms to the current LBaaS service extension to configure the NVP advanced service router.
Project: neutron Series: havana Blueprint: nvp-mac-learning-extension Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/nvp-mac-learning-extension Spec URL: https://wiki.openstack.org/wiki/Quantum/Spec-NVPPlugin-MacLearning The NVP platform provides the ability to enable MAC learning. The aim of this blueprint is to make it available to Quantum users. Project: neutron Series: havana Blueprint: nvp-port-binding-extension Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/nvp-port-binding-extension Spec URL: None The NVP plugin currently does not support the port binding extension. The aim of this blueprint is to support it in a way similar to the other plugins. Project: neutron Series: havana Blueprint: nvp-remote-net-gw-integration Design: Approved Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/neutron/+spec/nvp-remote-net-gw-integration Spec URL: None The aim of this blueprint is to enable the nvp-net-gateway Quantum API extension to allow 3rd parties to attach their gateway appliances to the NVP fabric running in the cloud (and being managed by Quantum). This will allow the "Bring your own Gateway" use case to be completed, which currently requires a mix of NVP and Quantum APIs. Project: neutron Series: havana Blueprint: nvp-service-router Design: New Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/nvp-service-router Spec URL: None This blueprint is to add support for advanced services, such as load balancer, firewall, and VPN, on a logical router. An advanced service VM will be deployed per logical router instance when a user requests the creation of a router that can provide advanced services. Users will be able to use the lbaas/fwaas/vpnaas plugins to configure the advanced service VM and provide these services to tenant networks. Project: neutron Series: havana Blueprint: nvp-test-coverage Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/nvp-test-coverage Spec URL: None The aim of this blueprint is to improve test coverage for the python modules in the NVP packages. This will be accomplished either by adding unit tests as required or by pruning code which is either redundant or unused. Project: neutron Series: havana Blueprint: nvp-vpnaas-plugin Design: New Lifecycle: Not started Impl: Unknown Link: https://blueprints.launchpad.net/neutron/+spec/nvp-vpnaas-plugin Spec URL: None This blueprint is for providing an IPsec VPN service on the NVP advanced service router. The implementation will provide an IPsec VPN plugin to configure the NVP advanced service router. Project: nova Series: havana Blueprint: once-per-request-filters Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/once-per-request-filters Spec URL: None Currently all scheduler filters are run for each instance in a request, but for many filters the data doesn't change during a request. For example the AZ filter is pretty static, but expensive to run on a large system. Similarly the ServiceGroup information used by the compute filter is cached as part of the host status, and doesn't need to be evaluated more than once per request. This blueprint introduces a new attribute that allows filters to declare that they only need to be run once per request. The default behaviour is left as being run for each instance.
In addition, the function that does the check is defined in the filter base class, so that a filter that wants to run (for example) once for every 10 instances in a request can override the function and implement its own behaviour. Project: ceilometer Series: havana Blueprint: one-meter-per-plugin Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/one-meter-per-plugin Spec URL: None We currently allow pollster plugins to return multiple counters. This makes it difficult to control which plugins are enabled and which are disabled, and led to an O(n^2) configuration loop in https://review.openstack.org/#/c/22132/9/ceilometer/agent.py. We should tighten the pollster plugin API up so that each instance only generates counters for one type of meter. That will make implementing them simpler, and make configuring the pipelines easier. The argument against this approach in the past was that some plugins will ask for the same data over and over, which is inefficient. To address that, we should provide a context or cache object to the pollsters so they can save data. That context would be refreshed at the start of each invocation loop, and any pollster could add data and reuse data created by previous pollsters. This would allow the plugins to do things like scan the instance data once, and save it for other pollsters to reuse. For backwards compatibility, we will need to support the existing plugin API (and its setup code). To do that we can change the namespace used to load the new pollsters, so we can tell the old from the new. Project: heat Series: havana Blueprint: onetrue-paste-ini Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/onetrue-paste-ini Spec URL: None There should be only one paste.ini file for all of Heat's API services, and this file should have no user-configurable parameters in it. Once this is implemented, packagers may choose to put the paste config somewhere other than /etc Project: neutron Series: havana Blueprint: openvswitch-kernel-vxlan Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/openvswitch-kernel-vxlan Spec URL: None Starting from kernel 3.7, Linux offers native VXLAN support. Like GRE tunnels, VXLAN provides L2 isolation, but is more convenient to configure and set up. Since ML2 has been merged and the OVS plugin has been deprecated, instead of implementing kernel VXLAN for the OVS plugin, this blueprint is retargeted to implement kernel VXLAN support for the ML2 plugin and use the OVS agent to set up the VXLAN device. This task contains two parts: 1. create a kernel VXLAN TypeDriver in ML2 2. implement kernel VXLAN in the OVS agent Project: nova Series: havana Blueprint: os-ext-ips-mac-api-extension Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/os-ext-ips-mac-api-extension Spec URL: None Adding an extension that adds the OS-EXT-IPS-MAC:mac_addr parameter to server(s), so that users can associate the MAC address with the IP of the server in one API call.
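For illustration, a small python-novaclient sketch of reading the attribute the extension adds to each address entry in the server details; the credentials and endpoint are placeholders, and the exact layout is my assumption rather than something stated in this blueprint.

# Illustrative sketch: reading the MAC address exposed by the
# OS-EXT-IPS-MAC extension from a server's address entries.
from novaclient.v1_1 import client  # havana-era client module

nova = client.Client("demo", "secret", "demo",
                     "http://controller:5000/v2.0")  # placeholder credentials

for server in nova.servers.list():
    for network, entries in server.addresses.items():
        for entry in entries:
            print(server.name, network, entry.get("addr"),
                  entry.get("OS-EXT-IPS-MAC:mac_addr"))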
Project: heat Series: havana Blueprint: oslo-db-support Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/heat/+spec/oslo-db-support Spec URL: None Make use of the common DB code from Oslo Project: neutron Series: havana Blueprint: oslo-db-support Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/oslo-db-support Spec URL: None Make use of the common DB code from Oslo Project: nova Series: havana Blueprint: oslo-messaging Design: Approved Lifecycle: Started Impl: Beta Available Link: https://blueprints.launchpad.net/nova/+spec/oslo-messaging Spec URL: https://wiki.openstack.org/wiki/Oslo/Messaging#nova The oslo.messaging library is the evolution of the oslo-incubator RPC code into a stable API. This blueprint tracks the work to port Nova to oslo.messaging. The only user-visible change should be that oslo.messaging is a new dependency. Project: ceilometer Series: havana Blueprint: oslo-multi-publisher Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/oslo-multi-publisher Spec URL: None The goal is to provide an oslo.notifier driver that uses the multi-publisher to publish notifications. Project: cinder Series: havana Blueprint: oslo-periodic-tasks Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/oslo-periodic-tasks Spec URL: None Cinder uses an old-style invocation of periodic tasks which is not based on the oslo library. It would be better to use the oslo implementation of periodic_tasks for all services. Project: oslo Series: havana Blueprint: oslo-sqlalchemy-migrate-uc-fixes Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/oslo-sqlalchemy-migrate-uc-fixes Spec URL: None Most OpenStack projects use the SQLAlchemy-migrate library for applying DB schema migrations. The library contains a few bugs related to the handling of unique constraints in SQLite: 1. Adding a new unique constraint to a table deletes all existing ones. 2. Dropping of unique constraints is not supported at all (due to limited support of ALTER in SQLite). Of course, we don't use SQLite in production and are interested in applying DB schema migrations mainly on MySQL and PostgreSQL. Nevertheless, support for migrations on SQLite is vital for running unit tests in Nova (they are used to obtain an initial DB schema to run tests on; this cannot be done using the model definitions as they are incomplete) and other projects using common Oslo DB code. Unfortunately, SQLAlchemy-migrate seems to be a dead project, so these bugs cannot be fixed in upstream code. The other way to fix them is to monkey-patch the library. Oslo seems to be a proper place to contain this patch, as it can be reused by other OpenStack projects that rely on SQLAlchemy-migrate for DB schema migrations. The long-term goal is to put this code into Alembic (currently, it doesn't support ALTER in SQLite on purpose, but pull requests are welcome) and migrate OpenStack projects to it.
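A minimal sketch of the general SQLite workaround being referred to (table and column names are invented, and this is not the actual Oslo monkey patch): since SQLite cannot ALTER a table to drop a unique constraint, the table is recreated without the constraint and the rows are copied over.

# Illustrative sketch of dropping a unique constraint on SQLite by
# recreating the table; names are invented and this is not the Oslo patch.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT, "
            "CONSTRAINT uniq_name UNIQUE (name))")
cur.execute("INSERT INTO widgets (name) VALUES ('a')")
cur.execute("INSERT INTO widgets (name) VALUES ('b')")

# SQLite cannot drop the constraint in place, so rebuild the table.
cur.execute("ALTER TABLE widgets RENAME TO widgets_old")
cur.execute("CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO widgets (id, name) SELECT id, name FROM widgets_old")
cur.execute("DROP TABLE widgets_old")
conn.commit()

# Duplicate names are now allowed.
cur.execute("INSERT INTO widgets (name) VALUES ('a')")
print(cur.execute("SELECT COUNT(*) FROM widgets WHERE name='a'").fetchone())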
Project: oslo Series: havana Blueprint: oslo-sqlalchemy-utils Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/oslo-sqlalchemy-utils Spec URL: None There are a lot of utils in nova.db.sqlalchemy.utils that help us work with db archiving and db unique constraints, as well as utils that allow us to write migrations that add unique constraints (they will be useful in glance and cinder). Project: oslo Series: havana Blueprint: oslo.sphinx Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/oslo.sphinx Spec URL: None This is about extracting the OpenStack theme from the various projects so we only need to have one copy of it. We may also move the code that auto-generates the API doc stub files into the library, since that is also copied into several projects. The new library will be called oslo.sphinx and its API will be used by adding "oslo.sphinx" to the list of extensions in the sphinx conf.py file for the doc project. There is one template, with a couple of blocks that can be overridden (those need to be documented). There is no importable code, yet, and we don't anticipate any. There will be a configuration setting to turn the API stub generation on and off, when that code is brought in. Project: neutron Series: havana Blueprint: ovs-tunnel-partial-mesh Design: New Lifecycle: Started Impl: Slow progress Link: https://blueprints.launchpad.net/neutron/+spec/ovs-tunnel-partial-mesh Spec URL: https://wiki.openstack.org/wiki/Ovs-tunnel-partial-mesh When using OVS GRE encapsulation, when a broadcast packet is sent, every tunnel endpoint receives it, even if the tunnel endpoint has no device on the network concerned. The network is burdened by unnecessary broadcast traffic. This could be improved by populating a relationship between endpoints and networks as soon as a port of the network is created on the hypervisor/tunnel endpoint. The flow in br-tun will also be changed so that traffic from br-int will still have the action "set-tunnel" but will have the action "output:tunnel_1,...,tunnel_n" instead of NORMAL. Project: neutron Series: havana Blueprint: ovs-vxlan-lisp-tunnel Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/ovs-vxlan-lisp-tunnel Spec URL: None This blueprint tracks the addition of VXLAN and LISP tunnel support into the Open vSwitch plugin. VXLAN and LISP support will be a part of Open vSwitch 1.10.0 when it is released, so adding the option to have the OVS Quantum plugin support VXLAN and LISP in addition to GRE tunnels will be nice for people wanting to make use of additional tunneling protocols, especially a pure L3 tunneling protocol like LISP. Configuration for VXLAN on OVS can be found here (search for VXLAN): http://git.openvswitch.org/cgi-bin/gitweb.cgi?p=openvswitch;a=blob_plain;f=FAQ;hb=HEAD Configuration is the same as for GRE, except the VNI for VXLAN is 24-bit. Project: ceilometer Series: havana Blueprint: paginate-db-search Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/ceilometer/+spec/paginate-db-search Spec URL: None Instead of returning a large amount of data as a whole from the DB, we need to paginate the returned objects. The SQLAlchemy backend in oslo already has this kind of capability, so we need to rebase our SQLAlchemy implementation on oslo's SQLAlchemy to leverage it. We also need to add this feature for MongoDB and HBase.
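A minimal sketch of the marker/limit style of pagination this implies, written directly against SQLAlchemy; the Sample model and its columns are invented for illustration, and the real work would lean on oslo's SQLAlchemy pagination support rather than hand-rolled queries.

# Illustrative marker/limit pagination sketch with plain SQLAlchemy;
# the Sample model and columns are invented for the example.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Sample(Base):
    __tablename__ = "samples"
    id = Column(Integer, primary_key=True)
    meter = Column(String(64))


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all(Sample(meter="cpu_util") for _ in range(25))
session.commit()


def get_samples(session, limit=10, marker=None):
    """Return one page of samples, keyed by the id of the previous page's
    last row (the marker) instead of loading everything at once."""
    query = session.query(Sample).order_by(Sample.id)
    if marker is not None:
        query = query.filter(Sample.id > marker)
    return query.limit(limit).all()


page = get_samples(session)
while page:
    print([s.id for s in page])
    page = get_samples(session, marker=page[-1].id)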
Project: keystone Series: havana Blueprint: pagination-backend-support Design: Review Lifecycle: Started Impl: Slow progress Link: https://blueprints.launchpad.net/keystone/+spec/pagination-backend-support Spec URL: https://etherpad.openstack.org/pagination In v3, our identity APIs support pagination. However, such pagination support is only skin deep, in that the controller re-fetches the entire list upon each "get next page" request from the client. We should therefore extend the pagination into the backends (e.g. SQLAlchemy supports such a concept) to improve the scaling and performance of keystone. Project: heat Series: havana Blueprint: parallel-delete Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/heat/+spec/parallel-delete Spec URL: None Stack delete should use stack_task so it can be run in parallel, like suspend and resume. This would also mean: - rewriting volume and instance delete to implement check_delete_complete instead of directly using TaskRunner - implementing some simple check_delete_complete 404 checking for resources that demonstrate races on delete (most involving neutron) Project: cinder Series: havana Blueprint: pass-ratelimit-info-to-nova Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/cinder/+spec/pass-ratelimit-info-to-nova Spec URL: None As a prerequisite of the Nova-side volume rate-limit feature, Cinder needs to pass rate-limit control information to Nova when attaching. Project: nova Series: havana Blueprint: pci-passthrough-base Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/pci-passthrough-base Spec URL: None PCI passthrough is used to give VMs access to PCI devices without any virtualization. This blueprint will provide base integration of PCI passthrough with OpenStack based on PCI labels. It will include: 1) A mechanism to specify the PCI devices available on a host through nova.conf 2) A mechanism to store PCI device state in the DB 3) A specific InstanceTypeExtraSpec that contains the list of PCI labels required by an InstanceType 4) Scheduling based on PCI labels. A PCI label is a high-level abstract name for a device; all operations such as: a) scheduling, b) specifying the devices required by instance_types, c) attaching a PCI device to an instance (meaning the work in the DB, not the virt layer) are done with PCI labels, not PCI addresses. Using PCI labels makes this approach very flexible. Why labels and not device addresses? For example, if you have 10 nodes, each with SR-IOV, then each node will have 7 PCI addresses. All of these addresses on all nodes can then share one logical name, for example "with_eth".
It is then pretty simple to organize: 1) scheduling // get all nodes that have available devices with this label 2) specifying which devices you need 3) attaching devices How to use it (base use case): 1) On all compute nodes, specify the PCI devices available for passthrough by adding to nova.conf: pci_passthrough_devices=[{"label": "some_name", "address": "xxxx:xx:xx.x"}] # This will create/update DB records for the PCI devices on the host (when nova-compute runs) 2) Add to any InstanceType the extra spec {'pci_passthrough:labels': '["some_name"]'} 3) Create an instance with this InstanceType; it will create a VM with the device at address "xxxx:xx:xx.x" Project: nova Series: havana Blueprint: pci-passthrough-libvirt Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/pci-passthrough-libvirt Spec URL: None Implement PCI passthrough for the libvirt driver. Supports all operations with VMs except live migration. Project: nova Series: havana Blueprint: per-aggregate-resource-ratio Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/per-aggregate-resource-ratio Spec URL: None The current RamFilter / CoreFilter / DiskFilter enforce a single resource allocation ratio globally, i.e. ram_allocation_ratio, cpu_allocation_ratio and disk_allocation_ratio. However, for cloud providers it makes sense to differentiate resource commitment levels according to SLA or physical resource management needs. Hence, this blueprint proposes adding per-aggregate resource allocation ratios to address this requirement. With these kinds of AggregateRamFilter / AggregateCoreFilter / AggregateDiskFilter, the default global setting will be used if per-aggregate settings are not found. Project: nova Series: havana Blueprint: per-user-quotas Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/per-user-quotas Spec URL: http://wiki.openstack.org/PerUserQuotas#preview Currently nova supports per-project quotas, that is, users under the same project (tenant) share exactly the same quota set. This is not enough when we want to limit a user's use of cloud resources, so we present per-user quotas. A user's quotas are defined in association with a specific project. The sum of all users' quotas under the same project should not exceed the quotas of the project. The following aspects will be covered: - A user quota is a sub-division of a project quota - If a user quota is not set, the usage of a resource by all users will be limited by the project quotas, i.e. this is compatible with the current project quotas - The usage of a resource will be limited by the user quota under a specific project quota, if it is set. Project: nova Series: havana Blueprint: periodic-tasks-to-db-slave Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/periodic-tasks-to-db-slave Spec URL: None Periodic tasks are some of the most consistent load that any deployment will experience. As such, and because of the nature of most periodic tasks, their reads are prime targets for DB slaves. Project: ceilometer Series: havana Blueprint: pipeline-configuration-cleanup Design: Approved Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/ceilometer/+spec/pipeline-configuration-cleanup Spec URL: None Pipelines are configured with the names of the counters they should produce, but they don't have a real list.
Instead they have a set of wildcard patterns and negation options that can be used to test an individual counter by name. We should resolve that list fully when the pipeline is configured, so it is possible for the pipeline manager to ask the pipeline for the list of counters it supports. That will allow us to have the PublisherTask ask the pipeline which counters to collect, instead of having to work it out in advance. Project: ceilometer Series: havana Blueprint: pipeline-publisher-url Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/pipeline-publisher-url Spec URL: None Use a URL scheme to define the publishing destination in the pipeline, rather than simply a publisher name. That would make it easier to use the same publisher several times with different destinations. Project: keystone Series: havana Blueprint: pluggable-remote-user Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/pluggable-remote-user Spec URL: https://etherpad.openstack.org/havana-external-auth Keystone's handling of REMOTE_USER is hardcoded and should be made pluggable to generically support external authentication methods. Project: keystone Series: havana Blueprint: pluggable-token-format Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/pluggable-token-format Spec URL: None keystone.conf's token_format currently has two options, either 'UUID' or 'PKI'. These two options represent slightly different code paths, each with their own token generation and validation logic. Both should be made pluggable, and the existing UUID and PKI code paths should be extracted into plugins. token_generator = keystone.token.uuid.generator token_validator = keystone.token.uuid.validator token_generator = keystone.token.pki.generator token_validator = keystone.token.pki.validator Backwards compatibility should be maintained for overriding token_format such that if 'UUID' is specified, then the default UUID token generator & validator callables should be used, etc. Additionally, the PKI token_validator should consume keystoneclient. See the related Havana summit etherpad: https://etherpad.openstack.org/havana-external-auth Project: neutron Series: havana Blueprint: plumgrid-plugin-rest-access Design: Approved Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/neutron/+spec/plumgrid-plugin-rest-access Spec URL: None In the current version of the PLUMgrid plugin, the REST Management Console, known as the Director, does not have admin credential checks enabled for configuration management. This blueprint implements the mechanisms to check for admin credentials.
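Purely as an illustration of the kind of check meant here (the Director endpoint, credential source and authentication scheme are assumptions, not the PLUMgrid plugin's actual code), a REST call to the management console would carry admin credentials and refuse to proceed without them:

# Hypothetical sketch: sending admin credentials with a request to a REST
# management console and refusing to proceed without them. Endpoint and
# option names are invented for illustration.
import requests

DIRECTOR_URL = "https://director.example.org:8080/0/connectivity/domain"
ADMIN_USER = "plumgrid-admin"      # would come from the plugin's config file
ADMIN_PASSWORD = "secret"


def get_domains():
    if not (ADMIN_USER and ADMIN_PASSWORD):
        raise RuntimeError("admin credentials are required for Director access")
    resp = requests.get(DIRECTOR_URL,
                        auth=(ADMIN_USER, ADMIN_PASSWORD),  # HTTP basic auth
                        verify=False)
    if resp.status_code in (401, 403):
        raise RuntimeError("Director rejected the admin credentials")
    resp.raise_for_status()
    return resp.json()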
Project: neutron Series: havana Blueprint: plumgrid-plugin-unit-tests Design: Approved Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/neutron/+spec/plumgrid-plugin-unit-tests Spec URL: None Increase the unit test coverage for the PLUMgrid plugin Project: neutron Series: havana Blueprint: plumgrid-plugin-v2 Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/plumgrid-plugin-v2 Spec URL: None Includes support for the following neutron extensions: router, binding and ext-gw-mode. Substitutes Quantum with Neutron wherever possible. Renames some of the internal components. Implements get_network and get_networks. Project: keystone Series: havana Blueprint: policy-on-api-target Design: Review Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/keystone/+spec/policy-on-api-target Spec URL: https://etherpad.openstack.org/api_policy_on_target Today we support policy enforcement on any items that are passed into an api call, even the individual fields of an object. However, there are times when you want to enforce policy on the object the api is operating on (for example on DELETE). A classic example would be having a domain admin that has the responsibility for managing users in a given domain. A cloud provider would want to be able to set the policy file so that such a domain admin could ONLY manage users in the appropriate domain. Today this works for create user since we pass the whole object into the call (and domain_id is a field of the user object), but won't work for update/delete, since the whole user object isn't passed into the call. In fact we want to enact the policy on the target of the api call, not on the parameters passed into it. We should also support the protection of role assignments in the same way, e.g. being able to specify that an api caller can only modify a role assignment where the domain_id of the actor (e.g. user or group) of the role assignment is the same as the scope of the caller. This enables the division of administration between, say, a cloud administrator and a domain administrator. This may not require a change to the policy engine, but would require us to change how we call it for our protected apis. Project: ceilometer Series: havana Blueprint: pollster-runtime-configuration Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/ceilometer/+spec/pollster-runtime-configuration Spec URL: None When using ceilometer for monitoring, users sometimes want to enable/disable certain pollsters which are only for testing/debugging purposes at runtime, without modifying the configuration file and restarting the agent. Besides, some users might want to ask a pollster to monitor only part of the resources available to it, e.g. only one specific nova instance; the user needs to pass the instance UUID as a configuration parameter to the pollster at runtime. We might need to design a framework to allow the user to use the "management API" to do the following things at runtime: - enable/disable a pollster - get/set a configuration parameter for a pollster - ask a pollster to start polling immediately, instead of waiting for other pollsters in the same polling task to finish before it can start polling.
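A speculative sketch of what such a management interface might look like; the class and method names are invented here and do not reflect an agreed ceilometer design.

# Speculative sketch of a runtime pollster management interface; names are
# invented for illustration only.
class PollsterManager(object):
    def __init__(self, pollsters):
        # name -> pollster object; all start enabled with empty config
        self._pollsters = dict(pollsters)
        self._enabled = {name: True for name in self._pollsters}
        self._config = {name: {} for name in self._pollsters}

    def enable(self, name, enabled=True):
        self._enabled[name] = enabled

    def set_option(self, name, key, value):
        # e.g. set_option('cpu', 'instance_uuid', '<uuid>')
        self._config[name][key] = value

    def get_option(self, name, key):
        return self._config[name].get(key)

    def poll_now(self, name):
        """Run one pollster immediately instead of waiting for its task."""
        if not self._enabled[name]:
            return []
        return self._pollsters[name].get_samples(self._config[name])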
Project: neutron Series: havana Blueprint: portbinding-ex-db Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/portbinding-ex-db Spec URL: None If the plugin supports the port binding extension, nova-compute will pass binding:host_id to the port model. https://review.openstack.org/#/c/21141/ The host information is important for the multihost feature and other features whose algorithms depend on the host segment. Project: nova Series: havana Blueprint: powervm-configdrive Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/powervm-configdrive Spec URL: None Today, all virt drivers except PowerVM support ConfigDrive. This blueprint will handle the implementation, including new ISO attach and detach code for PowerVM instances. Assumptions and Constraints: As with other drivers, a config option and API parameter will be checked to enable or disable ConfigDrive. The attached ISO will be dropped after a timeout, also defined as a config option. Instance migrate code will clean up ISO artifacts before moving the instance. Instance delete code will clean up ISO artifacts before deleting the instance. Snapshot code does not snapshot the ISO. Project: neutron Series: havana Blueprint: provider-network-extensions-cisco Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/provider-network-extensions-cisco Spec URL: None Provider network extensions for the Cisco plugin as defined in blueprint: https://blueprints.launchpad.net/quantum/+spec/provider-networks Project: heat Series: havana Blueprint: provider-resource Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/provider-resource Spec URL: None From: http://lists.openstack.org/pipermail/openstack-dev/2013-April/007989.html * Create a Custom resource type that is based on a nested stack but, unlike the AWS::CloudFormation::Stack type, has properties and attributes inferred from the parameters and outputs (respectively) of the template provided. * Modify resource instantiation to create a Custom resource whenever a resource has a non-empty "Provider" attribute Project: heat Series: havana Blueprint: provider-upload Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/provider-upload Spec URL: None From http://lists.openstack.org/pipermail/openstack-dev/2013-April/007989.html * Modify the resource instantiation to search among multiple named definitions (either Custom or Plugin) for a resource type, according to the "Provider" name. * Add an API for posting multiple named implementations of a resource type. Project: cinder Series: havana Blueprint: public-volumes Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/cinder/+spec/public-volumes Spec URL: None Provide the ability to share volumes across tenants, similar to glance's public images. Provide visibility and the ability to attach a volume to a VM for users from other tenants if the volume is marked as public. Update and delete operations remain available only to the volume owner.
Use cases: - it could be useful in the case of shared (https://blueprints.launchpad.net/cinder/+spec/shared-volume) bootable volumes Project: neutron Series: havana Blueprint: pxeboot-ports Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/pxeboot-ports Spec URL: None Nova baremetal needs to send PXE boot instructions to machines. It currently runs its own dnsmasq process to accomplish this, but this is listening on the same broadcast domain as the quantum dhcp-agent's dnsmasq (or equally that of nova-network). This makes booting very unreliable. I discussed this on the list a few times - http://lists.openstack.org/pipermail/openstack-dev/2013-January/004363.html is a decent summary / entry point. What I propose to do is to add a pxeboot extension (extensions?) to quantum that allows nova to specify the PXE boot details when it creates or updates a port. Guidance on exactly what that looks like in code appreciated :) The overall arc looks like: - teach quantum ports to store pxe boot details - teach the dhcp agent to configure that (see the list reference for how that should look on disk) - teach quantumclient to drive this api - teach nova to ask the hypervisor for pxe details as part of network setup We can assume that any pxe details for a port are reachable via the network the port is on (e.g. it is not quantum's problem to ensure a sane setup). Project: cinder Series: havana Blueprint: qemu-assisted-snapshots Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/cinder/+spec/qemu-assisted-snapshots Spec URL: https://wiki.openstack.org/wiki/Cinder/GuestAssistedSnapshotting QEMU-assisted snapshotting Enable snapshotting of volumes on backends such as GlusterFS by storing data as QCOW2 files on these volumes. With Nova support, this can also enable quiescing via the QEMU guest agent. Project: nova Series: havana Blueprint: qemu-assisted-snapshots Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/nova/+spec/qemu-assisted-snapshots Spec URL: https://wiki.openstack.org/wiki/Cinder/GuestAssistedSnapshotting? QEMU-assisted snapshotting Enable snapshotting of volumes on backends such as GlusterFS by storing data as QCOW2 files on these volumes. With Nova support, this can also enable quiescing via the QEMU guest agent. Project: neutron Series: havana Blueprint: qos-ovs-qos Design: New Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/neutron/+spec/qos-ovs-qos Spec URL: None Support the QoS API in the Open vSwitch plugin, with an implementation that marks packets with DSCP values via iptables. Project: neutron Series: havana Blueprint: quantum-common-rootwrap Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-common-rootwrap Spec URL: None Once we have ported oslo-rootwrap's new features to quantum-rootwrap (https://blueprints.launchpad.net/quantum/+spec/quantum-rootwrap-new-features) and the other way around (https://blueprints.launchpad.net/oslo/+spec/rootwrap-quantum-features), they will be feature-equivalent. Now we just need to make quantum-rootwrap use the common rootwrap rather than its forked copy.
Project: neutron Series: havana Blueprint: quantum-fwaas Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-fwaas Spec URL: https://wiki.openstack.org/wiki/Quantum/FWaaS Quantum now has the ability to load multiple service plugins. Firewall features could be managed and exposed via a Firewall service plugin (similar to the LBaaS service plugin). Work items: - Defining the resource abstractions and CRUD operations - SQLAlchemy data model - Backend "fake" driver for testing Google doc: https://docs.google.com/document/d/1PJaKvsX2MzMRlLGfR0fBkrMraHYF0flvl0sqyZ704tA/edit Project: neutron Series: havana Blueprint: quantum-fwaas-agent Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-fwaas-agent Spec URL: None The firewall agent would consume notifications for changes in the logical firewall resources and make calls on the underlying driver to configure the firewall. This agent functionality is likely to be collocated with the L3 agent (probably via a mixin approach). Project: neutron Series: havana Blueprint: quantum-fwaas-iptables-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-fwaas-iptables-driver Spec URL: None This will serve to complete the reference implementation for FWaaS in Havana. This driver will configure the iptables rules on the gateway host(s) to realize the firewall rules. The details of the driver are captured under the section "Reference Implementation->IPTables Driver" in the FWaaS spec https://docs.google.com/document/d/1PJaKvsX2MzMRlLGfR0fBkrMraHYF0flvl0sqyZ704tA Project: neutron Series: havana Blueprint: quantum-fwaas-plugin Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-fwaas-plugin Spec URL: None This service plugin will implement the FWaaS CRUD api and handle communication with the agent that realizes the firewall. Project: neutron Series: havana Blueprint: quantum-l3-routing-plugin Design: Discussion Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/quantum-l3-routing-plugin Spec URL: http://wiki.openstack.org/L3%20mixin%20to%20plugin Currently, the L3 router and floatingip functionality is essentially implemented as two components: a server-side piece, which is essentially the L3_NAT_db_mixin class that is inherited by all core plugins, and an L3 agent (L3NATAgent class) where the actual routing is done (using Linux namespaces, the kernel IP forwarding functionality and iptables). Suppose one would like to replace (or complement) that router implementation with something else, e.g., use a hardware-based router or add additional features like VRRP to the implementation. As long as the changes can be contained within the L3 agent (while honoring the normal interface), it is fairly simple to just replace the default one with the extended L3 agent in the deployment. However, if the desired functionality requires changes to the "server side", i.e., the L3_NAT_db_mixin class, the situation gets much more tricky since the mixin is essentially baked into the core plugin. A way around this problem could be to provide the L3 routing functionality as a separate plugin.
This should be done analogous to how advanced services like LBaaS, FW, will be implemented as separate plugins as targeted in this blueprint: https://blueprints.launchpad.net/quantum/+spec/quantum-service- framework. With L3 routing also as a separate plugin, it would be simpler to provide different such implementations, independent of (L2) core plugin but also to introduce additional L3 specific extensions. Project: neutron Series: havana Blueprint: quantum-multihost Design: Drafting Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/quantum-multihost Spec URL: https://docs.google.com/document/d/1Y41g-POd3DLmtFnD6JfKFDvZJikbUvsL0wIi8LFD35U/edit?usp=sharing Goal here is to have a DHCP implementation that provides the same properties as nova-network's "multi_host" functionality, where the DHCP server for a particular VM runs directly on the same hypervisor as the VM itself (with the exception of when a VM migrates). The main goal of this approach is that you don't need to worry about providing HA for a centralized DHCP agent. Note: the downside of this approach is that each DHCP instance consumes an IP, meaning it can reduce the number of available IPs on a subnet (i.e., it probably makes sense to double or triple the size of the subnet when using multi_host, assuming subnets are using "free" RFC 1918 space). Project: neutron Series: havana Blueprint: quantum-qos-api Design: Drafting Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/neutron/+spec/quantum-qos-api Spec URL: https://wiki.openstack.org/wiki/Quantum/QoS Currently, two plugins (Cisco, Nicira) have methods focusing on quality of service. Quantum should offer an API that exposes quality of service operations for a tenant network. Project: neutron Series: havana Blueprint: quantum-qos-api-db Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/quantum-qos-api-db Spec URL: https://docs.google.com/document/d/1nGUqEb4CEdabbTDyeL2ECVicnBRNrK3amJcNi-D4Ffo/edit This blueprint will cover the QoS API and database models. The "policy" column will store a JSON object that specifies the action to be taken - it's sort of a hack - better suggestions appreciated - but the idea is that the action column could contain all the different types of QoS mechanisms that vendors use. Possible examples for the policy column in the qos table: {"action": {"mark": "af32"}} {"action": {"ratelimit": "100kbps"}} The model has many similarities with the NVP QoS DB models, so the goal will be to refactor the NVP QoS to use these models, and place the NVP QoS specific attributes inside the JSON object. Project: neutron Series: havana Blueprint: quantum-rootwrap-new-features Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-rootwrap-new-features Spec URL: None quantum-rootwrap diverged from the common rootwrap over the Grizzly cycle. A few new features (like logging and path search) were added to oslo-rootwrap. The first step in making them converge is to import those new features to quantum-rootwrap. Project: horizon Series: havana Blueprint: quantum-security-group Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/quantum-security-group Spec URL: None Support advanced features of Quantum security group. Quantum security group support is being implemented in Grizzly-1. 
Before implementing Horizon support, we need to discuss how to use quantum security groups in combination with Nova. Project: heat Series: havana Blueprint: quantum-security-group Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/quantum-security-group Spec URL: None This will make it possible to implement the following properties in AWS::EC2::SecurityGroup - VpcId - SecurityGroupEgress Project: neutron Series: havana Blueprint: quantum-vpnaas-ipsec-ssl Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/quantum-vpnaas-ipsec-ssl Spec URL: https://docs.google.com/document/d/1Jphcvnn7PKxqFEFFZQ1_PYkEx5J4aO5J5Q74R_PwgV8/edit# VPN as a Service in Quantum. This is an advanced service offered by Quantum to insert a VPN service into the Tenant's Network. This blueprint covers the REST API, CLI, Plugin, Agent and the drivers required to deploy and configure a Virtual VPN device onto the Tenant's Network. The configuration template provides options to configure Static, Dynamic Routing and HA for the VPN device for Site-to-Site VPN connections and for Single Site-to-Multiple Site VPN connections. This also supports Remote Users. Project: neutron Series: havana Blueprint: quantum-zvm-plugin Design: Approved Lifecycle: Started Impl: Slow progress Link: https://blueprints.launchpad.net/neutron/+spec/quantum-zvm-plugin Spec URL: None 1. Initial support of zVM virtual networks 2. Support virtual vswitch with user/port based VLAN 3. Support DHCP Project: nova Series: havana Blueprint: query-scheduler Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/nova/+spec/query-scheduler Spec URL: None The current scheduler model of having it proxy requests to computes makes it difficult to reason about the workflow involved in creating/resizing an instance, and makes future state management work unwieldy. Nova, most likely conductor, should query the scheduler for placement decisions and then handle the workflow for create/resize itself. Given that conductor is optional at this point in time, it may start by using the local conductor from the api and spawning a greenthread to handle the operation so the api doesn't have to wait around. Project: heat Series: havana Blueprint: rackspace-cloud-servers-provider Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/rackspace-cloud-servers-provider Spec URL: None Create a resource provider for Rackspace Cloud Servers (nova on Rackspace public cloud). Project: nova Series: havana Blueprint: ratio-resource-virtualization Design: Superseded Lifecycle: Complete Impl: Unknown Link: https://blueprints.launchpad.net/nova/+spec/ratio-resource-virtualization Spec URL: None We can configure the resource over-commit (virtualization) ratios used by the filter scheduler. However, there are times when we would like to set the ratio for a specific compute node, and we cannot configure this per node using 'cpu_allocation_ratio' and 'memory_allocation_ratio'. This blueprint adds a ratio column to the compute_nodes table and a nova command to update the ratio value, so that the vCPU and memory ratios of a specific host can be configured more dynamically.
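As a rough sketch of the precedence this blueprint describes (a per-node ratio, when set, overriding the global allocation ratio); the column and option names here are illustrative, not the actual nova schema:

    # Illustrative only: a per-node ratio, when present, overrides the global
    # cpu_allocation_ratio when computing how many vCPUs a host can offer.
    DEFAULT_CPU_ALLOCATION_RATIO = 16.0  # assumed nova.conf default

    def schedulable_vcpus(physical_cpus, node_ratio=None):
        ratio = node_ratio if node_ratio is not None else DEFAULT_CPU_ALLOCATION_RATIO
        return int(physical_cpus * ratio)

    # A host with 8 cores and a node-specific ratio of 2.0 exposes 16 vCPUs,
    # instead of the 128 it would expose under the global default of 16.0.
    assert schedulable_vcpus(8, node_ratio=2.0) == 16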
Project: horizon Series: havana Blueprint: rbac Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/horizon/+spec/rbac Spec URL: None Horizon should not be defining permissions itself; instead those decisions should be enforced by the policy engines of the individual services (the current plan is to have those roll up through Keystone). Once Keystone supports retrieving this data in the V3 API, Horizon should move to this model ASAP. Step 1 is to simply respect and enforce the RBAC policy; step 2 will be to allow management of it. Project: nova Series: havana Blueprint: rbac-improvements Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/rbac-improvements Spec URL: None The following is a proposal to improve the rule-based API access in Nova. There are currently a number of issues which need to be addressed: 1. In some cases, despite having a rule defined in policy.json, a command can get blocked by the require_admin_context decorator within the sqlalchemy/api.py layer. If a user chooses to define another role or rule for a command which isn't the admin role, the command should succeed or fail based on the user's rule definition. 2. In some cases, a single rule can apply to multiple api calls. A Nova user should be able to define a rule for each api call. The rules need to be granular enough to support this. 3. In some cases, a policy failure does not return an HTTP 403. A policy failure should always return a consistent HTTP 403 error code. Other than the changes described above, current policy definition behavior will remain the same. Project: cinder Series: havana Blueprint: read-only-volumes Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/cinder/+spec/read-only-volumes Spec URL: None Provide the ability to attach volumes in read-only mode. Read-only mode could be ensured by hypervisor configuration during the attachment. Libvirt, Xen, VMware and Hyper-V support R/O volumes. Use cases:   - immutable volumes   - cinder as a backend for glance https://blueprints.launchpad.net/glance/+spec/glance-cinder-driver   - shared volume https://blueprints.launchpad.net/cinder/+spec/shared-volume Project: nova Series: havana Blueprint: record-reason-for-disabling-service Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/record-reason-for-disabling-service Spec URL: None I'd like to be able to define a way to log a reason when I disable a service. This is really useful on large deployments where you need to deal with a lot of nodes and you need a way to track why a specific service has been disabled (for example, maintenance, hw failures, and so on). The idea is to add a column (String 255) in the services table to store a reason field. Then we will add a new API extension to disable a service indicating a reason for that. Then we are going to change the nova client to support the new feature. Project: cinder Series: havana Blueprint: refactor-backup-service Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/refactor-backup-service Spec URL: None This blueprint aims to have a common interface between the volume driver and the backup service. In the current situation there is only one volume driver (lvm) that supports backups and only one backup service (swift).
In order to support more volume drivers and more backup targets, there has to be some rework done on the existing interface. Project: nova Series: havana Blueprint: refactor-iscsi-fc-brick Design: New Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/nova/+spec/refactor-iscsi-fc-brick Spec URL: None Cinder is now using the "brick" library to do all of its volume attaches/detaches for both iSCSI and Fibre Channel. The idea is that brick would eventually make it into Oslo. This blueprint is to: 1) pull the brick code into nova; 2) refactor the libvirt volume drivers for iSCSI and Fibre Channel to use the brick attach code. Cinder's blueprint: https://blueprints.launchpad.net/cinder/+spec/cinder-refactor-attach Project: cinder Series: havana Blueprint: refactor-lvm-and-iscsi-driver Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/cinder/+spec/refactor-lvm-and-iscsi-driver Spec URL: None The current LVM driver is a bit of a mix of iscsi and LVM, and it also only provides the ability to deal with LVM volumes that get an iscsi target associated with them. We should refactor the LVM driver and separate the iscsi code to make it more easily shared/consumed for other uses. Project: glance Series: havana Blueprint: registry-api-v2 Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/registry-api-v2 Spec URL: None In order to address blueprint registry-db-driver, a new API is needed for the registry service. This API must be compliant with the current db_api in terms of input parameters and responses, in order to be capable of wrapping it in a db driver and support legacy deployments. The idea is to make it easier to implement new methods in the database API without having to modify the registry's API. The benefits of doing so are: 1) it reduces the places where things need to be modified when implementing new features on the DB side 2) it reduces duplicated code 3) it will help migrate Glance's API v1 to use the database driver, which will allow users of glance-api v1 to deprecate the registry if they wish to do so. RPC-over-HTTP has been chosen instead of a message broker because it doesn't make sense to add more dependencies (a message broker and everything related to it) just for this feature. Project: glance Series: havana Blueprint: registry-db-driver Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/glance/+spec/registry-db-driver Spec URL: None This change is part of the kill-registry blueprint and is intended to implement a new db driver for the registry service in order to support legacy deployments based on two separate services.
Adding a testing plan for folks willing to give this feature a try:
* Install Glance
* Configure as usual, but:
  * glance-api:
    - Make sure Glance's API v2 is enabled in both glance-api.conf and glance-api-paste.ini
    - Make sure glance-api.conf uses: data_api = glance.db.registry.api
  * glance-registry:
    - Make sure Glance's Registry API v2 is enabled in glance-registry-paste.ini: paste.app_factory = glance.registry.api.v2:API.factory
    - Make sure glance-api.conf uses: data_api = glance.db.sqlalchemy.api
* Use glanceclient normally, but make sure it points to API version 2: glance --os-image-api-version 2 image-list
Project: neutron Series: havana Blueprint: remove-bin-directory Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/remove-bin-directory Spec URL: None We will install binaries via console_scripts in setup.cfg, so the binaries under bin are duplicated. Project: ceilometer Series: havana Blueprint: remove-counter Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/ceilometer/+spec/remove-counter Spec URL: None Remove all counter references in the Ceilometer codebase. Project: neutron Series: havana Blueprint: remove-dhcp-lease Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/remove-dhcp-lease Spec URL: None This blueprint will remove the dhcp-lease stuff from neutron as this is not needed. This will leverage dhcp_release on the quantum dhcp-agent. Project: ceilometer Series: havana Blueprint: remove-disabled-pollsters-option Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/remove-disabled-pollsters-option Spec URL: None It's preferable to disable counters rather than pollsters, so remove that option. Project: ceilometer Series: havana Blueprint: remove-obsolete-storage-driver-methods Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/remove-obsolete-storage-driver-methods Spec URL: None The get_volume_sum(), get_volume_max(), and get_event_interval() methods of the storage API are only used by the V1 API. The values they return can be obtained from the return value of get_meter_statistics(), so we should update the V1 API implementation and then remove the methods from the storage drivers to keep that API clean and tight. Project: nova Series: havana Blueprint: remove-security-group-handler Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/remove-security-group-handler Spec URL: None Now that nova's security groups are more pluggable and decoupled from the database, we should remove the security_group_handler code. The security_group_handler code was added to provide a hook into nova security groups so that one could get security group add/delete/change notifications and proxy them somewhere else. However, there are several transactional issues that one is exposed to by using this. Since there are no security_group_handler drivers in the nova code that use this, I think we should remove this feature.
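For the remove-obsolete-storage-driver-methods blueprint above, a minimal sketch of what the V1 API change amounts to: derive the old per-call values from a single get_meter_statistics() result. The attribute names on the statistics object are assumptions for illustration.

    # Sketch only: previously get_volume_sum() and get_volume_max() were
    # separate storage-driver calls; both values are assumed to already be
    # present on the object returned by get_meter_statistics().
    def legacy_volume_values(stats):
        return {'sum': stats.sum, 'max': stats.max}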
Project: neutron Series: havana Blueprint: remove-use-of-quantum Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/remove-use-of-quantum Spec URL: None An email from Mark Collier: We have to phase out the trademark or attention-getting use of the code name "Quantum" when referring to the OpenStack Networking project, as part of a legal agreement with Quantum Corporation, the owner of the "Quantum" trademark. The Board of Directors and Technical Committee members involved in Networking-related development and documentation were notified so we could start working to remove "Quantum" from public references. We made a lot of progress updating public references during the Grizzly release cycle and will continue that work through Havana as well. The highest priority items to update are locations that are attention-getting and public--our biggest area of work remaining is probably on the wiki, where we could really use everyone's help. In other official communications, we refer to the projects by their functional OpenStack names (Compute, Object Storage, Networking, etc). At the summit we have a session scheduled to talk about project names generally and the path forward for OpenStack Networking specifically. For instance, in places where there is a need for something shorter, such as the CLI, we could come up with a new code name or use something more descriptive like "os-network." This is a question it probably makes sense to look at across projects at the same time. If you have input on this, please come participate in the session Thursday April 18 at 4:10pm: http://openstacksummitapril2013.sched.org/event/95df68f88b519a3e4981ed9da7cd1de5#.UWWOZBnR16A Project: horizon Series: havana Blueprint: resize-server Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/resize-server Spec URL: None Sometimes we need to resize a server after it is launched, for example the number of vCPUs, the size of memory and the size of disk. We can achieve this by changing its flavor. Project: heat Series: havana Blueprint: resource-properties-schema Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/resource-properties-schema Spec URL: None Suggestion from stevebake: It would be useful for clients (especially ui builders) to get the schema properties for each resource type. Not sure if they are best returned in this [/resource_types] call or in /resource_types/{type name} Project: heat Series: havana Blueprint: resource-template Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/resource-template Spec URL: None From: http://lists.openstack.org/pipermail/openstack-dev/2013-April/007989.html * Add an API to get a generic template version of any built-in resource (with all properties/outputs defined) that can be easily customised to make a new provider template. Project: oslo Series: havana Blueprint: rootwrap-quantum-features Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/rootwrap-quantum-features Spec URL: None quantum-rootwrap diverged from the common rootwrap over the Grizzly cycle. It added a few new features, like ExecFilters (chained filters) or specific filters (IPFilter, a specific DnsMasqFilter). Before we can make Quantum use oslo-rootwrap we need to import those features into oslo-rootwrap.
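To give a feel for the "chained filter" idea behind the ExecFilters mentioned above, here is an illustrative sketch of the kind of check they perform; this is not the oslo-rootwrap API, and the whitelist contents are invented for the example.

    # Illustration only: a command such as "ip netns exec <ns> <inner command>"
    # is allowed only when the inner command itself matches another permitted
    # filter, so the wrapper cannot be used to run arbitrary commands as root.
    ALLOWED_INNER = {("dnsmasq",), ("ip", "link", "show")}  # assumed whitelist

    def chained_command_allowed(cmd):
        if cmd[:3] != ["ip", "netns", "exec"] or len(cmd) < 5:
            return False
        inner = tuple(cmd[4:])
        return any(inner[:len(allowed)] == allowed for allowed in ALLOWED_INNER)

    # chained_command_allowed(["ip", "netns", "exec", "qdhcp-x", "dnsmasq"]) -> True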
Project: oslo Series: havana Blueprint: rpc-multi-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/rpc-multi-api Spec URL: None It has become increasingly clear in nova that the ability to expose more than one API over rpc from a single service would be useful. Specifically, we have had a couple of cases now where we would like to add methods that apply to *all* services. There is no good way to do that right now. This blueprint is for adding the ability to expose multiple APIs, that exist within their own namespace, and are versioned independently of each other. Project: oslo Series: havana Blueprint: rpc-object-serialization Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/rpc-object-serialization Spec URL: https://wiki.openstack.org/wiki/ObjectProposal Oslo's RpcProxy and RpcDispatcher classes should support (de-)serialization hooks for subclasses to handle arguments and results. Project: nova Series: havana Blueprint: rpc-support-for-objects Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/rpc-support-for-objects Spec URL: None The parent blueprint of unified-objects will yield a new object type that can self-serialize and provide some lazy-loading abilities over the RPC wire. The first step in introducing this to nova is making sure the RPC layer either in Nova or Oslo can support both Grizzly-style primitive objects as well as these new-world objects. By supporting both (or really, just marshaling new-world objects into and out of primitive form before actually passing to the lower RPC layers), we can gradually convert nova systems to using the new objects. Project: nova Series: havana Blueprint: rpc-version-control Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/rpc-version-control Spec URL: None Right now, a version N+1 node can receive and handle version N messages, but can only send version N+1 messages. Allowing an admin to lock down the RPC version until all nodes have been upgraded allows for graceful cluster-wide upgrades. An example of locking down a version is setting: [rpc_api_client_caps] conductor = 1.48 in nova.conf on compute nodes as part of a Grizzly->Havana upgrade, before you upgrade the compute nodes. This ensures that Havana compute nodes will operate correctly with a Grizzly nova-conductor. Project: oslo Series: havana Blueprint: rpc-version-control Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/rpc-version-control Spec URL: None This blueprint is for the required changes in oslo to support rpc clients being configured with a version cap. This is a requirement for doing rolling upgrades where you need to prevent clients from sending new messages until all nodes have been upgraded to the version that supports them. Project: oslo Series: havana Blueprint: run-tests-script Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/oslo/+spec/run-tests-script Spec URL: None Nova, Glance, and other projects have a `./run_tests.sh` script which is used to run unit tests. These scripts are different, so it would be good to create a common version of `./run_tests.sh` that could be used by all projects.
Project: heat Series: havana Blueprint: scalingpolicy-update-stack Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/scalingpolicy-update-stack Spec URL: None Implement UpdateStack support for ScalingPolicy resources. Project: ceilometer Series: havana Blueprint: scheduler-counter Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/scheduler-counter Spec URL: None Provide a counter for scheduling events based on scheduler.run_instance.scheduled events. Project: cinder Series: havana Blueprint: scheduler-hints Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/scheduler-hints Spec URL: None Cinder already has a filter scheduler and filters to support scheduler hints. It's time to add an API extension to enable scheduler hints so that users can pass hints for the scheduler to make smarter decisions when placing new volumes. Project: nova Series: havana Blueprint: scheduler-hints-api Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/scheduler-hints-api Spec URL: https://wiki.openstack.org/wiki/SchedulerHintsAPI The Nova API for instance creation supports a scheduler_hints mechanism whereby the user can pass additional placement-related information into the Nova scheduler. The implementation of scheduler_hints lies (mostly) in the various scheduler filters, and the set of hints which are supported on any system therefore depends on the filters that have been configured (this could include non-standard filters). It is not currently possible for a user of the system to determine which hints are available. Hints that are not supported will be silently ignored by the scheduler. This API extension will make the list of supported hints available to users by querying each of the configured scheduler filters and weighting functions. Project: horizon Series: havana Blueprint: security-group-rule-templates Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/security-group-rule-templates Spec URL: None In the "Add Rule" dialog for Security Rules it would be a great help to everybody to have a variety of common port mappings available in a dropdown. One option would be to have an additional dropdown named "Common Rules" that simply modified the values of the other fields on change. Another option would be to add those options to the "IP Protocol" dropdown and rename the existing ones to "Custom TCP Rule" or the like. Project: neutron Series: havana Blueprint: security-group-rules-protocol-numbers Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/security-group-rules-protocol-numbers Spec URL: None Currently the only protocols that quantum security-groups support are TCP, ICMP and UDP, and the user has to pass in their human-readable names. We should also allow protocol numbers to be passed in. This change will keep the API backwards compatible, except that integers can now also be passed in for the protocol.
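A hedged sketch of how a caller might use the protocol-number support, assuming the python-neutronclient 2.0 API of the time; the credentials and UUID below are placeholders. The only point is that 'protocol' may now be a number such as "47" (GRE) rather than one of the names tcp/udp/icmp.

    from neutronclient.v2_0 import client

    neutron = client.Client(username="admin", password="secret",
                            tenant_name="demo",
                            auth_url="http://127.0.0.1:5000/v2.0")
    rule = {"security_group_rule": {
        "security_group_id": "SECGROUP_UUID",   # placeholder
        "direction": "ingress",
        "protocol": "47",                       # protocol number instead of a name
        "remote_ip_prefix": "0.0.0.0/0",
    }}
    neutron.create_security_group_rule(rule)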
Project: horizon Series: havana Blueprint: select-zone-when-creating-instance Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/horizon/+spec/select-zone-when-creating-instance Spec URL: None The default nova scheduler's AvailabilityZoneFilter allows users to select a zone when creating a new instance, but currently we can only use this feature through the EC2 interface. In Horizon, the method's parameters already contain "zone", so we can show a zone selection when creating a new instance. Project: nova Series: havana Blueprint: servers-add-volume-list Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/servers-add-volume-list Spec URL: None According to the bug - https://bugs.launchpad.net/cinder/+bug/1112998 - there is a need to include the list of attached volumes with instance info. A corresponding fix was committed under the above bug (https://review.openstack.org/#/c/27067/) but was then reverted, as it should be done as an API extension. Project: oslo Series: havana Blueprint: service-restart Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/service-restart Spec URL: None Sometimes operators might want to change some configuration (i.e. the configuration file) and restart an OpenStack service without going to the console to CTRL+C the running service and start it again. They might just want to send a signal (e.g. SIGUSR1) to trigger the service to restart itself and reload the new configuration. Project: neutron Series: havana Blueprint: service-type-framework-cleanup Design: Drafting Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/service-type-framework-cleanup Spec URL: https://wiki.openstack.org/wiki/Quantum/ServiceTypeFramework Per discussion on the mailing list, there's a need to redefine some terms around the service type framework and propose a new API and logic to utilize this framework for multi-vendor support in services (currently in LBaaS). Project: ceilometer Series: havana Blueprint: setuptools-console-scripts Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/setuptools-console-scripts Spec URL: None We should switch our bin/* scripts to become setuptools console-scripts-provided ones. Project: nova Series: havana Blueprint: shelve-instance Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/shelve-instance Spec URL: None Based on the discussion summarized in https://etherpad.openstack.org/HavanaMothballServer, users would like the ability to stop servers when they don't need them, but retain them in their list of servers and keep all associated data and resources. If an instance is stopped for a long while, an operator may wish to move that instance off of the hypervisor in order to minimize resource usage. In order to address a user who may wish to stop an instance at the end of a work day and resume it the following day/start of new week, we're going to add a new shelved state and API operations to shelve/unshelve an instance. Shelving will be pretty much synonymous with shutting down an instance on the hypervisor, so anything in memory is not maintained. Unshelving will restart the instance.
If an instance has been shelved for some amount of time, say 72 hours (configurable), then a periodic task can begin freeing up hypervisor resources by possibly snapshotting the disk, offloading to 'cold storage', and removing the instance from its host. This has the potential to lengthen the unshelve time, so deployers will need to inform users. Perhaps a new state could be added to indicate that the instance no longer exists on a host, but that discussion can be had later. Project: horizon Series: havana Blueprint: show-zone-for-admin Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/horizon/+spec/show-zone-for-admin Spec URL: None Currently, we can get zone information from GET /v2/tenant-id/os-availability-zone/detail, so we can show that information in the dashboard. Project: cinder Series: havana Blueprint: solidfire-extend-size-support Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/solidfire-extend-size-support Spec URL: None Implement extend volume functionality in the SolidFire driver. Project: ceilometer Series: havana Blueprint: specify-event-api Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/ceilometer/+spec/specify-event-api Spec URL: https://wiki.openstack.org/wiki/Ceilometer/blueprints/Ceilometer-specify-event-api This blueprint is focused on specifying the HTTP event API for the overall blueprint "Bring StackTach functionality into Ceilometer", at https://blueprints.launchpad.net/ceilometer/+spec/stacktach-integration and the related blueprint, "Expose the event data via the HTTP interface", https://blueprints.launchpad.net/ceilometer/+spec/expose-event-data. Project: keystone Series: havana Blueprint: split-identity Design: Drafting Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/split-identity Spec URL: https://etherpad.openstack.org/split-identity Split the identity backend into two separate backends: Identity (users and groups) and Assignment (domains, projects, roles, role assignments). Project: keystone Series: havana Blueprint: sql-query-get Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/sql-query-get Spec URL: None From the SqlAlchemy docs: "get() is special in that it provides direct access to the identity map of the owning Session. If the given primary key identifier is present in the local identity map, the object is returned directly from this collection and no SQL is emitted, unless the object has been marked fully expired. If not present, a SELECT is performed in order to locate the object." Source: http://docs.sqlalchemy.org/en/rel_0_7/orm/query.html Related havana summit etherpad: https://etherpad.openstack.org/havana-keystone-performance Project: heat Series: havana Blueprint: stack-metadata Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/stack-metadata Spec URL: None From: http://lists.openstack.org/pipermail/openstack-dev/2013-April/007989.html Introduce the concept of _stack_ Metadata, and provide a way to access it in a template (pseudo-parameter?)
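For the sql-query-get blueprint above, a minimal SQLAlchemy sketch of why query.get() is attractive: a repeat lookup by primary key can be served from the session's identity map without emitting SQL. The model below is a stand-in for illustration, not Keystone's actual schema.

    from sqlalchemy import Column, String, create_engine
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import sessionmaker

    Base = declarative_base()

    class User(Base):  # stand-in model, not Keystone's schema
        __tablename__ = 'user'
        id = Column(String(64), primary_key=True)
        name = Column(String(255))

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    session = sessionmaker(bind=engine)()
    session.add(User(id='abc', name='alice'))
    session.commit()

    first = session.query(User).get('abc')   # may hit the database
    again = session.query(User).get('abc')   # returned from the identity map
    assert first is again                    # same object, no second filter query needed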
Project: heat Series: havana Blueprint: stack-suspend-resume Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/stack-suspend-resume Spec URL: None User requested feature, add support for suspending a heat stack, then resuming it at some later point. Project: ceilometer Series: havana Blueprint: stacktach-integration Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/ceilometer/+spec/stacktach-integration Spec URL: https://etherpad.openstack.org/stacktach-cm-integration This is the Epic story for related blueprints. Project: ceilometer Series: havana Blueprint: storage-api-models Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/storage-api-models Spec URL: None We should define model classes for the storage drivers to return, instead of depending on dictionaries with basic python types. Project: keystone Series: havana Blueprint: store-quota-data Design: Approved Lifecycle: Started Impl: Slow progress Link: https://blueprints.launchpad.net/keystone/+spec/store-quota-data Spec URL: https://wiki.openstack.org/wiki/KeystoneCentralizedQuotaManagement In order to enable the use of quotas across different OpenStack components we need to store and access them centrally. Keystone can be used as that central datastore. Project: ceilometer Series: havana Blueprint: support-standard-audit-formats Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/ceilometer/+spec/support-standard-audit-formats Spec URL: https://wiki.openstack.org/wiki/Ceilometer/blueprints/support-standard-audit-formats#Provide_support_for_auditing_events_in_standardized_formats It is clear that core project developers appreciate the strengths of Ceilometer in having a reliable, core centralized service with the ability to track usage information towards statistical usage analysis and billing. It seems that many of these same projects are seeing similar “auditing” requirements but now with the emphasis on tracking access to services (by users and other services) for the purposes of security auditing. Leveraging Ceilometer’s design to enable different types of event auditing standards (and to provide for different purposes and views on events) seems to make good sense. Project: neutron Series: havana Blueprint: tailf-ncs Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/neutron/+spec/tailf-ncs Spec URL: https://docs.google.com/document/d/1bRHVTN60oY2y2NIZ0iT9CyLS0TWYTPA0MPhvmYJqQjU This blueprint covers creating a Tail-f NCS (Network Control System) plugin for OpenStack Networking. NCS provides the ability to provision an entire multi-vendor network in a transactional manner using diverse mechanisms such as OpenFlow, NETCONF, SNMP, and CLI. The plugin permits OpenStack Networking to use NCS to automatically provision a multi-vendor network in response to configuration changes. Project: heat Series: havana Blueprint: template-string-function Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/template-string-function Spec URL: None A function Fn::Template which allows heat environment values to be inserted into string blocks like UserData scripts using a template substitution format. There are many templating engines to choose from - probably simpler is better in this situation. 
One option is python Template Strings: http://docs.python.org/2/library/string.html#template-strings Having to escape $ with $$ shouldn't be too difficult for users. For example:

    "/tmp/setup.mysql" : {
        "content" : { "Fn::Template" : { "Fn::Join" : ["\n", [
            "CREATE DATABASE $DBName",
            "GRANT ALL PRIVILEGES ON ${DBName}.* TO ${DBUsername}'@'localhost'",
            "IDENTIFIED BY '${DBPassword};",
            "FLUSH PRIVILEGES;",
            "EXIT"]]}}
    },

For YAML this would make it possible to build strings without Fn::Join, e.g.:

    /tmp/setup.mysql
      content
        Fn::Template: |-
          CREATE DATABASE $DBName
          GRANT ALL PRIVILEGES ON ${DBName}.* TO ${DBUsername}'@'localhost'
          IDENTIFIED BY '${DBPassword}; FLUSH PRIVILEGES;
          EXIT

This is related to https://blueprints.launchpad.net/heat/+spec/bash-environment-function as it also addresses how values might be inserted into UserData scripts. Also, this is similar to the Fn::Replace suggestion in that blueprint but without the requirement to explicitly declare substitutions:

    "/tmp/setup.mysql" : {
        "content" : { "Fn::Replace" :
            {"$DBName$", { "Ref" : "DBName"},
             "$DBPassword$", { "Ref" : "DBPassword"},
             "$DBUsername$", { "Ref" : "DBUsername"}},
            { "Fn::Join" : ["\n", [
                "CREATE DATABASE $DBName$",
                "GRANT ALL PRIVILEGES ON $DBName$.* TO $DBUsername$'@'localhost'",
                "IDENTIFIED BY '$DBPassword$;",
                "FLUSH PRIVILEGES;",
                "EXIT"]]}}
    },

Project: oslo Series: havana Blueprint: test-migrations Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/oslo/+spec/test-migrations Spec URL: None Nova, Glance, and possibly other projects have a `test_migrations.py` which is used to run through the migration upgrades and downgrades to check for problems. These individual versions of test_migrations are starting to diverge, so it would be good to settle on a common version that all projects can reference. The nova version has been updated to fix issues with snake-walk and to add a post_downgrade check, so I'd propose that becomes the reference. Project: ceilometer Series: havana Blueprint: transformer-unit Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/transformer-unit Spec URL: None Build a generic transformer allowing one to transform a value from one unit to another. For example, transforming CPU time consumed into a percentage (useful for alarming). Or transforming Fahrenheit to Celsius. :) Or from cumulative to gauge values via sampling (i.e. retaining the previous value in a local sqlite instance and then reporting [curr-prev] units). Project: oslo Series: havana Blueprint: trusted-messaging Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/oslo/+spec/trusted-messaging Spec URL: https://wiki.openstack.org/wiki/MessageSecurity Add signing and encryption for messages. Project: ceilometer Series: havana Blueprint: udp-publishing Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/ceilometer/+spec/udp-publishing Spec URL: None Build a UDP publisher and a receiver for the collector. Project: keystone Series: havana Blueprint: unified-logging-in-keystone Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/keystone/+spec/unified-logging-in-keystone Spec URL: None Some OpenStack components, such as Nova, Glance and Quantum, already have a unified logger as part of the oslo-incubator project.
It would be great to have the same logger in Keystone. Project: nova Series: havana Blueprint: usage-details-on-instance Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/usage-details-on-instance Spec URL: None Important usage details (launched_at, terminated_at) are left off the instance show response; this feature would provide those details on the show. These details are important as a user may want to track exactly when their instance goes active or is deleted. Project: heat Series: havana Blueprint: use-cloudinit-write Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/use-cloudinit-write Spec URL: None The current part handler is a bit complex and could be simplified by using the write functionality in cloudinit. This wasn't possible with older versions of cloudinit, but is possible with the 0.7 series. See: http://cloudinit.readthedocs.org/en/latest/topics/examples.html#writing-out-arbitrary-files See Joshua's review where this feature is suggested here: https://review.openstack.org/#/c/34476/ Project: cinder Series: havana Blueprint: use-copy-on-write-for-all-volume-cloning Design: Pending Approval Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/cinder/+spec/use-copy-on-write-for-all-volume-cloning Spec URL: None Clone from snapshot currently uses copy-on-write, but clone from volume does not (it currently does a full copy). Let's see if we can find a solution to this. Project: nova Series: havana Blueprint: use-oslo-services Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/use-oslo-services Spec URL: None Refactor Nova to use Oslo's Service infrastructure. This includes the Launchers, Service, and ThreadGroup stuff from openstack.common.service. The code in Oslo is largely copy and pasted from Nova anyway, so it'd be good to get in there and DRY it up. Project: nova Series: havana Blueprint: user-defined-resume Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/user-defined-resume Spec URL: None The configuration option resume_guests_state_on_host_boot defines a global policy on whether or not to restart instances when a host reboots. Because a host may be down for a significant period of time, many cloud applications monitor their own instances and launch replacements, so restarting all instances is rarely the required behavior. This blueprint provides the user with a mechanism to specify at create time which of their instances should be restarted on a host reboot. Initially three values will be accepted: RESUME_DEFAULT - follow the resume policy configured in the system; RESUME_NEVER - don't resume the instance; RESUME_ALWAYS - always resume the instance. Ideally we would allow the user to specify a max_downtime policy where instances are only resumed if the host has been stopped for less than the user-specified duration (i.e. "only resume if the host is down for less than 5 minutes") - but not all ServiceGroup drivers can support this model. Project: cinder Series: havana Blueprint: user-locale-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/user-locale-api Spec URL: None Currently, error messages coming back from the API are translated using the same locale as the system the Cinder API is running on.
Ideally, we would like to have the messages translated to the request sender's locale, which we can support by using the HTTP Accept-Language header to determine the locale before sending back the translated response. Alternatively, there is the possibility of using the tenant/user data from Keystone to store a preferred locale. There is a similar blueprint for Nova that can be used to track the implementation work there: https://blueprints.launchpad.net/nova/+spec/user-locale-api Most of the work in Oslo and Nova should be easily transferable to the Cinder API. Project: keystone Series: havana Blueprint: user-locale-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/keystone/+spec/user-locale-api Spec URL: None Currently, error/exception messages coming back from Keystone are not translated at all. Ideally, we would like to have the messages translated to the request sender's locale, which we can support by using the HTTP Accept-Language header to determine a preferred locale before sending back the translated response. Alternatively, there is the possibility of using the tenant/user data to store a preferred locale for the request. There is a similar blueprint for Nova that can be used to track the implementation work there: https://blueprints.launchpad.net/nova/+spec/user-locale-api Project: nova Series: havana Blueprint: user-locale-api Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/user-locale-api Spec URL: None I18N/l10n support for API messages based on the client locale. Currently, Nova API messages are fixed to the locale of the host OS. However, clients from various countries will use the API, so API messages should be changed based on the locale of the client request. The user locale can be determined from the Accept-Language header per request, or retrieved from the user or tenant records in Keystone, using the authentication context for the request. Related to https://bugs.launchpad.net/nova/+bug/898766 A related blueprint in oslo-incubator will hold the general Message functionality for porting to other projects: https://blueprints.launchpad.net/oslo/+spec/delayed-message-translation Project: heat Series: havana Blueprint: user-locale-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/user-locale-api Spec URL: None Currently, error messages coming back from the API are translated using the same locale as the system the API is running on. Ideally, we would like to have the messages translated to the request sender's locale, which we can support by using the HTTP Accept-Language header to determine the locale before sending back the translated response. Alternatively, there is the possibility of using the tenant/user data from Keystone to store a preferred locale. There is a similar blueprint for Nova that can be used to track the implementation work there: https://blueprints.launchpad.net/nova/+spec/user-locale-api Project: neutron Series: havana Blueprint: user-locale-api Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/user-locale-api Spec URL: None Currently, error messages coming back from the API are translated using the same locale as the system the Neutron API is running on. Ideally, we would like to have the messages translated to the request sender's locale, which we can support by using the HTTP Accept-Language header to determine the locale before sending back the translated response.
Alternatively, there is the possibility of using the tenant/user data from Keystone to store a preferred locale. There is a similar blueprint for Nova that can be used to track the implementation work there: https://blueprints.launchpad.net/nova/+spec/user-locale-api Project: nova Series: havana Blueprint: utilization-aware-scheduling Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling Spec URL: https://wiki.openstack.org/wiki/UtilizationAwareScheduling There are situations where it is desirable to be able to schedule VMs based upon transient resource usage beyond the current reliance on specific metrics like memory usage and CPU utilization. Advanced scheduling decisions can be made based upon enhanced usage statistics encompassing things like memory cache utilization, memory bandwidth utilization, network bandwidth utilization or other, currently undefined metrics that might be available in future platforms. This blueprint will provide an extensible framework that can be used to take advantage of current and future platform utilization metrics. Project: nova Series: havana Blueprint: utilization-based-scheduling Design: Superseded Lifecycle: Complete Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/utilization-based-scheduling Spec URL: http://wiki.openstack.org/UtilizationBasedSchedulingSpec Make up-to-date host utilization data available to the scheduler for additional scheduling capabilities. Project: nova Series: havana Blueprint: v3-api-core-as-extensions Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/v3-api-core-as-extensions Spec URL: None As part of the v2->v3 conversion, convert the core functionality of the v3 API to use the extension framework. This does not change what is considered core, just how it is implemented. Project: nova Series: havana Blueprint: v3-api-extension-framework Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/v3-api-extension-framework Spec URL: https://etherpad.openstack.org/NovaAPIExtensionFramework This is the framework for extensions to be used by the v3 API (and explicitly not the v2 API). Project: nova Series: havana Blueprint: v3-api-remove-project-id Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/v3-api-remove-project-id Spec URL: None As per http://lists.openstack.org/pipermail/openstack-dev/2013-May/008770.html there's strong support for removing the project ID from the Nova API URL structure. This blueprint is to capture the work required to remove the project ID from the URL scheme, instead relying on the context to pass in the correct values (as it already does). Project: neutron Series: havana Blueprint: varmour-fwaas-driver Design: Approved Lifecycle: Not started Impl: Not started Link: https://blueprints.launchpad.net/neutron/+spec/varmour-fwaas-driver Spec URL: None This will serve to complete the vArmour implementation for FWaaS in Havana. This driver will be used to configure the vArmour firewall in an OpenStack environment. Project: nova Series: havana Blueprint: vendor-data Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/vendor-data Spec URL: None There was not previously any way for a vendor to extend the metadata service to include vendor-specific or site-specific information.
A "vendor-data" location in the metadata service can serve a couple purposes: * provide vendors a way to make instances aware of features or locations specific to the cloud where the instance is running. * provide an experimentation zone for metadata. By adding a way to easily add data to an instance metadata can be experimented with more easily, which could eventually lead to to moving of an item from vendor data to metadata. The plan is to * add an entry named 'vendor_data.json' to the metadata service and config-drive rendering. * add a example class that reads json formated content from a configured file. Some examples of things that a vendor might put in 'vendor-data' are: * information on local mirrors * location of a local proxy. * create a one time license registration codes * static networking routes Project: cinder Series: havana Blueprint: violin-memory-iscsi-volume-driver Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/cinder/+spec/violin-memory-iscsi-volume-driver Spec URL: None The Violin v6000 Openstack Storage Driver is a plugin to Openstack that will add support for block storage service via Violin v6000 arrays using iSCSI. It will implement the common set of functionality required for the Havana release (per support by current v6000 systems). The driver will communicate with the array using a separate Violin-specific python library that provides version-independent access to the array's REST API. That library is maintained by Violin and will be downloadable for users from Violin's website. http://www .violin-memory.com/products/6000-flash-memory-array/ Project: nova Series: havana Blueprint: vm-host-quantum Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/vm-host-quantum Spec URL: None This is a small change in Nova's Quantum API to pass the hostname where a VM instance is launched (host picked by the Nova scheduler), to Quantum, using the port binding extensions. The change is made in the allocate_for_instance() API in nova/network/quantumv2/api.py. This would be useful for Quantum to provision the physical networking infrastructure connected to that host. Project: nova Series: havana Blueprint: vmware-configuration-section Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/vmware-configuration-section Spec URL: None Create a section specifically for vmware configuration variables. Project: nova Series: havana Blueprint: vmware-image-clone-strategy Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy Spec URL: None The VMwareAPI driver code currently spawns all virtual machines copying the disks using a linked-clone strategy. The VMwareAPI also offers a "full clone" strategy. 1. We would like to offer an administrator the ability to pick which strategy fits their environment best. (as a default) 2. The best behavior may be different for different classes of instance it would be nice to allow an instance flavor to control its own clone strategy see: https://www.vmware.com/support/ws5/doc/ws_clone_typeofclone.html In math terms, I need to allow instance to override image to override global config. That is i vs g vs c. I for instance. G for glance. C for Config. So, I need i=False vs g=True vs c=True to be False. But I need i=True vs g=False vs c=False to be True So basic boolean math won't cover it. 
Project: nova Series: havana Blueprint: vmware-nova-cinder-support Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support Spec URL: https://wiki.openstack.org/wiki/Nova/VMwareVmdkDriver/blueprint-full-spec This work is to enable the vmware cinder driver targeted for the Havana release. The Cinder BP has been accepted for havana. This code is in support of that, and it involves enabling the following four use cases:   - get_volume_connector,   - attach_volume,   - detach_volume,   - spawn (changes to an existing method to support booting a VM from the volume). Project: cinder Series: havana Blueprint: vmware-vmdk-cinder-driver Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/cinder/+spec/vmware-vmdk-cinder-driver Spec URL: https://wiki.openstack.org/wiki/Cinder/VMwareVmdkDriver/blueprint-full-spec The goal of this blueprint is to implement a VMDK driver for cinder. The driver will allow management of cinder volumes on any VMware vCenter Server or ESX managed datastore. In this project, we are essentially mapping the Cinder Volume construct to VMDK file(s) that form the persistent block storage for virtual machines within the VMware stack. Today, there is no cinder driver implementation for the VMware stack and the nova driver allows only attaching/detaching discovered iSCSI targets as RDM. This driver will allow life cycle management for a cinder volume that is backed by VMDK file(s) within a VMware datastore. This project also positions Cinder to take advantage of features provided by VMFS and upcoming technologies such as vSAN, vVol and others. Because of the design of vCenter, each VMDK needs to be a "child" object of one or more VMs. In this implementation, we use a "shadow" VM to back the Cinder Volume. This ensures that VMware-specific features such as snapshots, fast cloning, vMotion, etc. will continue to work without breaking any of the Cinder Volume constructs or abstractions. This virtual machine backing a volume will never be powered on and is only an abstraction for performing operations such as snapshots or cloning of the cinder volume. By using a virtual machine as a representation for a cinder volume, we can perform any operation on a cinder volume that can be done on the corresponding virtual machine using the public SDK. Project: cinder Series: havana Blueprint: volume-acl Design: Approved Lifecycle: Started Impl: Good progress Link: https://blueprints.launchpad.net/cinder/+spec/volume-acl Spec URL: None A volume can only be accessed by a certain user in a certain project; there is no ACL rule for a cinder volume. Adding ACL configuration can make the volume readable or writable by other users or other projects. The volume creator has the capability to edit the ACL rule. The ACL model can be similar to the one in Amazon S3. Use case: several users can share the data in one volume. Project: nova Series: havana Blueprint: volume-affinity-weighter-function Design: Approved Lifecycle: Started Impl: Slow progress Link: https://blueprints.launchpad.net/nova/+spec/volume-affinity-weighter-function Spec URL: None To reduce latency it may be desirable to try to place an instance on a host which contains a volume associated with this instance. This can be done implicitly with a specific weighting function enabled which takes volume affinity into account. The instance will still be scheduled whether or not the host with the volume is available.
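A hedged sketch of what such a volume-affinity weigher could look like, assuming nova's weigher framework of the time (a BaseHostWeigher subclass with a _weigh_object() hook); the 'volume_host' request-spec key is an assumption for illustration, not the blueprint's actual interface.

    from nova.scheduler import weights

    class VolumeAffinityWeigher(weights.BaseHostWeigher):
        def _weigh_object(self, host_state, weight_properties):
            # Prefer the host that already holds the volume associated with
            # the instance; all other hosts get a neutral weight, so the
            # instance is still scheduled somewhere even if that host is full.
            volume_host = weight_properties.get('volume_host')  # assumed key
            if volume_host and host_state.host == volume_host:
                return 1.0
            return 0.0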
Project: cinder Series: havana Blueprint: volume-host-attaching Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/volume-host-attaching Spec URL: None Allow a client to request, via the API, that a volume be attached to a host rather than only to an instance. This change allows the attach_volume API to accept 'host_name' as an argument, not only 'instance_uuid'. Other blueprints (see dependencies below) require that Cinder allow a volume to be attached to a host running an OpenStack component, such as Glance; those components need to access the volume to read and write data.
Project: nova Series: havana Blueprint: volume-rate-limiting Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/volume-rate-limiting Spec URL: None Volume rate-limiting can be done in the back-end (the Cinder storage back-end) or the front-end (the Nova hypervisor virtual disk driver). Implementing this feature in the front-end has the following benefits: 1) it enables rate-limiting for all back-ends regardless of each back-end's built-in feature set; 2) it allows Cinder to have unified IO control over all back-ends.
Project: cinder Series: havana Blueprint: volume-resize Design: New Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/volume-resize Spec URL: None Add support for resizing volumes; more accurately this would be extend_volume, at least at first. This adds an API call to specifically extend the size of an existing volume that is currently in an available state. The call would be something like: "cinder extend "
Project: heat Series: havana Blueprint: volume-snapshots Design: Drafting Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/heat/+spec/volume-snapshots Spec URL: None Implement volume snapshots similar to what is available with EBS on EC2. Cinder snapshots are subtly different from EC2 ones, in the sense that they require the original volume to still be alive. Thus, backups seem like a better fit for this use case: https://blueprints.launchpad.net/cinder/+spec/volume-backups We need to implement backup creation when a volume resource is deleted, and the ability to create a volume from a backup.
Project: nova Series: havana Blueprint: volume-swap Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/volume-swap Spec URL: None This feature allows a user or administrator to transparently swap out a Cinder volume that is connected to an instance. This may pause the VM while the volume is swapped, but no reads or writes should be lost.
Project: cinder Series: havana Blueprint: volume-transfer Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/cinder/+spec/volume-transfer Spec URL: https://wiki.openstack.org/wiki/VolumeTransfer There is a need to support transferring Cinder volumes from one customer to another. An example might be where a specialty consultancy produces bespoke bootable volumes or volumes with large data sets. Once the volume is created, it can be transferred to the end customer using this process. Similarly, for bulk import of data to the cloud, the data ingress system can create a new Cinder volume, copy the data from the physical device, and then transfer ownership of the device to the end user. A sketch of the transfer flow follows below.
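The following is a minimal sketch of the transfer flow described above, not Cinder's actual implementation: the current owner creates a transfer offer and receives a transfer id plus an auth key to hand over out of band; the recipient presents both to take ownership. All names and data structures here are illustrative.

import hashlib
import os


def create_transfer(volume):
    """Offer a volume for transfer; return (transfer_id, auth_key)."""
    auth_key = hashlib.sha1(os.urandom(16)).hexdigest()[:16]
    transfer_id = 'transfer-%s' % os.urandom(4).hex()
    volume['pending_transfer'] = {'id': transfer_id, 'auth_key': auth_key}
    return transfer_id, auth_key


def accept_transfer(volume, transfer_id, auth_key, new_project_id):
    """Recipient claims the volume; ownership moves to their project."""
    pending = volume.get('pending_transfer')
    if not pending or pending['id'] != transfer_id or pending['auth_key'] != auth_key:
        raise ValueError('invalid transfer id or auth key')
    volume['project_id'] = new_project_id
    del volume['pending_transfer']


# Consultancy builds a volume, then hands it to the end customer.
vol = {'id': 'vol-1', 'project_id': 'consultancy'}
tid, key = create_transfer(vol)
accept_transfer(vol, tid, key, new_project_id='end-customer')
assert vol['project_id'] == 'end-customer'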
Project: neutron Series: havana Blueprint: vpnaas-python-apis Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/vpnaas-python-apis Spec URL: https://wiki.openstack.org/wiki/Quantum/VPNaaS Work items for VPNaaS Python APIs / CRUD operations: - Python plugin API (one-to-one mapping of the WS API) - SQLAlchemy data models - CRUD operations (this should enable use of the API with what is effectively a "null" driver)
Project: heat Series: havana Blueprint: vpnaas-support Design: Approved Lifecycle: Started Impl: Blocked Link: https://blueprints.launchpad.net/heat/+spec/vpnaas-support Spec URL: https://wiki.openstack.org/wiki/Heat/Blueprints/VPaaS_Support The point of this blueprint is to add VPNaaS components to the resources supported by Heat. VPNaaS components to add: - VPNServices - IKEPolicy - IPsecPolicy - VPNConnections
Project: heat Series: havana Blueprint: watch-ceilometer Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/heat/+spec/watch-ceilometer Spec URL: None Move to using Ceilometer as the metric/alarm back-end for our CloudWatch resources - this will require several in-progress new features in Ceilometer, and rework in Heat to make our current metric logic pluggable so it can optionally be replaced by Ceilometer. This blueprint will be used to track which Ceilometer features we require to proceed with this work, and to capture any design discussions around the Heat-side work that is required.
Project: nova Series: havana Blueprint: whole-host-allocation Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/whole-host-allocation Spec URL: https://wiki.openstack.org/wiki/WholeHostAllocation Allow a tenant to allocate all of the capacity of a host for their exclusive use. The host remains part of the Nova configuration, i.e. this is different from bare-metal provisioning in that the tenant does not get access to the host OS - just a dedicated pool of compute capacity. This gives the tenant guaranteed isolation for their instances, at the premium of paying for a whole host. The blueprint achieves this by building on the existing host aggregates and filter scheduler (a sketch of the aggregate-based idea is included below). Extending this further in the future could form the basis of hosted private clouds, i.e. the semantics of having a private cloud without the operational overhead. In effect what we are doing is making host aggregates a user-facing feature (with appropriate controls) and providing an anonymous host allocation mechanism.
Project: cinder Series: havana Blueprint: windows-storage-driver-extended Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/cinder/+spec/windows-storage-driver-extended Spec URL: None In order to be compatible with the minimum required features, the following features will be implemented: 1) Copy volume to image 2) Copy image to volume 3) Clone volumes. I'll also refactor the tests so that the pickle files containing serialized mocks for unit tests are replaced by mox.
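Below is an illustrative sketch of the aggregate-based idea behind whole-host-allocation, not the blueprint's actual filter: hosts reserved for a tenant carry that tenant's id in their aggregate metadata, and a scheduler filter only passes a reserved host for its owning tenant while leaving unreserved hosts available to everyone. The data structures are stand-ins for Nova's host aggregates and filter scheduler plumbing.

# Hypothetical host -> aggregate metadata mapping, for illustration only.
HOST_AGGREGATE_METADATA = {
    'compute-01': {'reserved_for_tenant': 'tenant-a'},  # dedicated host
    'compute-02': {},                                   # shared capacity
}


def host_passes(host, requesting_tenant):
    """Return True if this host may run an instance for requesting_tenant."""
    metadata = HOST_AGGREGATE_METADATA.get(host, {})
    reserved_for = metadata.get('reserved_for_tenant')
    if reserved_for is None:
        return True                      # unreserved hosts stay generally usable
    return reserved_for == requesting_tenant


print(host_passes('compute-01', 'tenant-a'))  # True: the tenant's dedicated host
print(host_passes('compute-01', 'tenant-b'))  # False: isolated from other tenants
print(host_passes('compute-02', 'tenant-b'))  # True: shared pool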
Project: nova Series: havana Blueprint: xen-support-for-hypervisor-versions Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/xen-support-for-hypervisor-versions Spec URL: None Ensure that the image lands on a hypervisor version that is equal to or newer than the Xen tools version. This is to prevent an instance with newer tools from landing on an older host. Approach: - add the XenAPI tools version to the image metadata, which would be copied over to the instance as well - advertise the prominent XenServer version within a cell to the parent through capabilities - add a cell-scheduler filter to select cells with hosts of a compatible Xen version
Project: nova Series: havana Blueprint: xenapi-guest-agent-cloud-init-interop Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/xenapi-guest-agent-cloud-init-interop Spec URL: None For a XenAPI cloud to feel more like a libvirt cloud, cloud-init needs to work well. Ideally XenAPI clouds can work well without the agent and use config drive. Currently, operations such as setting the root password depend on the XenAPI agent. For these cases, it would be good to use both the agent and cloud-init. There are many existing VMs using the agent, and the agent works on more OSes than cloud-init, so the code to use the agent is likely to stay around for some time. Clearly this depends on projects outside of Nova, but the code in the XenAPI driver (for all agent-based actions) ideally needs to also work with cloud-init.
Project: nova Series: havana Blueprint: xenapi-ipxe-iso-boot-support Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/xenapi-ipxe-iso-boot-support Spec URL: None This gives customers (of service providers running XenAPI) a means to roll their own images. The service provider supplies an ISO with iPXE support rolled into it; the customer can then choose that image, boot to an OS of their choosing, and customize the image in any way they want. Two virt-layer modifications are needed. The first is adding configuration options for the iPXE ISO feature (network to use, boot menu, mkisofs_cmd). The second is the ability to inject networking info into the ISO after it is downloaded. This can be accomplished via a new post-image-download hook (fixup_disks) and a new dom0 plugin that knows how to mount an ISO, copy it, inject networking, rebundle it, and place the modified version back into the SR.
Project: nova Series: havana Blueprint: xenapi-large-ephemeral-disk-support Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/xenapi-large-ephemeral-disk-support Spec URL: None The VHD format has a 2TB disk size limit; however, you may wish to add more than 2TB of ephemeral space to a VM. Note: the exact limit (due to overheads, etc.) is around 2043 GB, and certain versions of XenServer do not deal well with disks over that size but under 2TB. To provide more space, you can add several disks to a VM that in total give the user the ephemeral space they want (see the sketch below); in the guest, the user can use LVM or similar to aggregate those disks into a single volume. In addition, with such large disks we should allow admins to configure Nova so that a partition is created on the disk but no filesystem, leaving users free to do what they want with the space (or perhaps leaving cloud-init to configure the disks appropriately). At the moment it is not practical to have root disks bigger than 2TB, so for now we will assume those are smaller than 2TB and only worry about "oversized" ephemeral disks.
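Here is a minimal sketch of the splitting arithmetic described above: carve a requested ephemeral size into several virtual disks, each at or under the practical per-VHD ceiling of about 2043 GB (the figure quoted in the blueprint text), which the guest can then aggregate with LVM. The function is illustrative, not the driver's code.

# Practical per-disk ceiling quoted above; the real limit depends on overheads.
VHD_MAX_GB = 2043


def ephemeral_disk_sizes(total_gb, max_gb=VHD_MAX_GB):
    """Return the list of per-disk sizes (in GB) that add up to total_gb."""
    sizes = []
    remaining = total_gb
    while remaining > 0:
        chunk = min(remaining, max_gb)
        sizes.append(chunk)
        remaining -= chunk
    return sizes


print(ephemeral_disk_sizes(5000))  # [2043, 2043, 914]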
Project: neutron Series: havana Blueprint: xenapi-ovs Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/neutron/+spec/xenapi-ovs Spec URL: None Run the L2 agent in DomU alongside nova-compute. Implementation: - Use root-wrap to pass OVS commands through to Dom0, where OVS is running. - Make this work with DevStack.
Project: nova Series: havana Blueprint: xenapi-server-log Design: Approved Lifecycle: Complete Impl: Implemented Link: https://blueprints.launchpad.net/nova/+spec/xenapi-server-log Spec URL: None xenapi-server-log can be implemented by using the logging that is part of XenServer: https://github.com/jamesbulpin/xcp-xen-4.1.pq/blob/master/log-guest-consoles.patch The above logs also need to be rotated.
Project: nova Series: havana Blueprint: xenapi-supported-image-import-export Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/xenapi-supported-image-import-export Spec URL: None Use supported interfaces for image upload/download to and from XenAPI compute nodes. At the moment XenServer uses tarballs of VHD chains as images, and uses a dom0 plugin to download and upload them. This is an unsupported approach using unsupported tools. This blueprint is about coming up with an image upload/download method that goes through a supported interface.
Project: nova Series: havana Blueprint: xenserver-core Design: Pending Approval Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/nova/+spec/xenserver-core Spec URL: None Xenserver-core (http://xenserver.org/blog/entry/tech-preview-of-xenserver-libvirt-ceph.html) can be installed on an existing CentOS 6.4 base system, and therefore the restrictions about running in dom0 are more relaxed (e.g. python2.6 is installed!). Nova runs in a domU against xenserver-core without any changes, but this blueprint fixes the issues encountered when running Nova in dom0. It is fully restricted to the XenAPI driver with no changes needed in stock Nova code. This initial support is limited to being able to boot VMs from local disk using VHDs, and may not include features such as boot from volume, use of security groups, VLANs, etc. Support for further features will come in later blueprints and bugs.
Project: cinder Series: havana Blueprint: zadara-cinder-driver-update Design: Approved Lifecycle: Started Impl: Needs Code Review Link: https://blueprints.launchpad.net/cinder/+spec/zadara-cinder-driver-update Spec URL: None The Zadara Cinder driver needs to be updated to support snapshot/clone/expand/etc. functionality.
Project: cinder Series: havana Blueprint: zvm-cinder Design: Approved Lifecycle: Started Impl: Started Link: https://blueprints.launchpad.net/cinder/+spec/zvm-cinder Spec URL: None Adapt z/VM volume disks to Cinder: IBM z/VM manages SCSI disks through the xCAT disk pool. OpenStack will call the xCAT SCSI disk management REST API to carve disks from, or return disks to, the disk pool; snapshot, volume-to-image and image-to-volume features are also supported. A rough sketch of the driver shape follows below.
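The following is a rough, hypothetical sketch of the driver shape implied above, not IBM's implementation: a Cinder volume driver that delegates disk operations to an xCAT REST API through a thin HTTP client. The XCATClient class, its endpoint paths and payloads are placeholders and do not reflect xCAT's real API; only the driver method names (create_volume, delete_volume, create_snapshot) follow the standard Cinder volume driver interface.

import requests


class XCATClient(object):
    """Hypothetical thin wrapper around an xCAT disk-pool REST API."""

    def __init__(self, base_url, auth):
        self.base_url = base_url
        self.auth = auth

    def call(self, method, path, payload=None):
        # Placeholder endpoint layout; the real xCAT API will differ.
        return requests.request(method, self.base_url + path,
                                json=payload, auth=self.auth)


class ZVMVolumeDriver(object):
    """Skeleton only: carve and return SCSI disks from an xCAT disk pool."""

    def __init__(self, client):
        self.client = client

    def create_volume(self, volume):
        self.client.call('POST', '/disks', {'size_gb': volume['size']})

    def delete_volume(self, volume):
        self.client.call('DELETE', '/disks/%s' % volume['id'])

    def create_snapshot(self, snapshot):
        self.client.call('POST', '/disks/%s/snapshots' % snapshot['volume_id'])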