Using OS::Heat::WaitCondition as a rolling_update mechanism on OS::Heat::ResourceGroup

Asked by Ivens Zambrano

When trying to use OS::Heat::ResourceGroup to handle rolling upgrades, we are facing the following issue:

The ResourceGroup resource defines an update_policy:

    type: OS::Heat::ResourceGroup
    update_policy:
      rolling_update:
        max_batch_size: 1
        min_in_service: 2
    properties:
      count: 8
      resource_def:
        type: test_detail.yaml

and inside test_detail.yaml we have:

    resources:
      interface1:
        type: OS::Neutron::Port
        # (port properties omitted here)

      wait_condition:          # resource name reconstructed
        type: OS::Heat::WaitCondition
        properties:
          handle: { get_resource: wait_handle }
          timeout: 600         # value assumed

      wait_handle:
        type: OS::Heat::WaitConditionHandle

      server:                  # resource name reconstructed
        type: OS::Nova::Server
        properties:
          networks:
            - port: { get_resource: interface1 }
          user_data_format: RAW
          user_data: { get_resource: user_data }

      user_data:
        type: OS::Heat::MultipartMime
        properties:
          parts:
            - config:
                str_replace:
                  params:
                    wc_notify: { get_attr: [wait_handle, curl_cli] }
                  template: |
                    #cloud-config
                    merge_how: 'list(append)+dict(recurse_array,no_replace)+str()'
                    write_files:
                      - path: /run/cloud-init/
                        owner: root:root
                        permissions: '0777'
                        content: |
                          #!/bin/bash -x
                          wc_notify --data-binary '{"status": "SUCCESS"}'
                    runcmd:
                      - /run/cloud-init/
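For context, Heat's str_replace substitutes the wc_notify token with a pre-signed curl command taken from the handle's curl_cli attribute. A local stand-in (purely illustrative; the real command is generated by Heat and posts to the wait condition's signal URL) shows the shape of the call:

```shell
# Stub that mimics what wc_notify looks like inside the booted instance.
# In the real template, wc_notify expands to roughly
# "curl -i -X POST <pre-signed signal URL>"; this stub only echoes the
# arguments so the payload shape is visible.
wc_notify() { echo "POST $*"; }

wc_notify --data-binary '{"status": "SUCCESS"}'
```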

We want to use the wait condition notifications as the main driver for the rolling_update policy, but the only mechanism we have found that holds back the remaining VMs during an update is setting "pause_time" in the "rolling_update" definition. The problem with this approach is that "pause_time" is not deterministic with respect to the server's status: we should not move on to the next instance in the ResourceGroup just because a timer expired; we need to be sure the instance is ready and providing its service before moving to the next one.
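For reference, the only time-based knob we have found is pause_time; a sketch of where it sits in the update_policy (the 300-second value is arbitrary):

```yaml
update_policy:
  rolling_update:
    max_batch_size: 1
    min_in_service: 2
    pause_time: 300   # seconds to wait between batches; purely time-based
```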

Is there a way to achieve this, perhaps using a different resource type?


