How to set up monitoring of a VNF in Tacker
Hi,
I have a Tacker stable/pike setup deployed through kolla-ansible. The OpenStack VIM is registered and REACHABLE, and I am able to deploy/terminate VNFs from Tacker.
When I apply a monitoring policy, it does not work. I see ICMP packets from the Tacker server to the VIM server (for VIM status monitoring), but there is no equivalent traffic for the VNF.
Below is the VNFD section containing the monitoring policy.
-------
properties:
  image: CentOS-7
  name: Self_Healing
  flavor: Tacker_Flavor
  availabilit
  mgmt_driver: noop
  monitoring_policy:
    name: ping
    parameters:
      count: 3
      interval: 0.2
      timeout: 2
      retry: 6
    actions:
      failure: respawn
-------
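For context, the intended semantics of this policy are: each monitoring cycle probes the VNF's management IP with ICMP, and only after `retry` consecutive failed probes is the `failure` action (`respawn`) triggered. A minimal sketch of that decision logic in plain Python — the names `monitor_once` and `pick_action` are illustrative, not Tacker's actual API:

```python
def monitor_once(probe, retry=6):
    """Run one monitoring cycle: call `probe` up to `retry` times and
    report "failure" only if every attempt fails, loosely mirroring how
    a ping-style health check decides a VNF is down."""
    for _ in range(retry):
        if probe():  # e.g. "did any of `count` ICMP echoes get a reply?"
            return "healthy"
    return "failure"

# The policy maps a detected state to an action, e.g. failure -> respawn.
ACTIONS = {"failure": "respawn"}

def pick_action(state):
    return ACTIONS.get(state, "noop")
```

For example, `pick_action(monitor_once(lambda: False))` yields `"respawn"`, while a probe that ever succeeds short-circuits to `"healthy"`.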
Even if I shut down the instance manually in the VIM, the VNF manager still shows the instance as ACTIVE. There is no trace of this in the tacker-server, tacker-conductor, or Mistral logs.
In the deployed VNF's event tab, the last event has Resource state "Active" and Event type "Monitor" with monitoring details. No workflow is created to monitor the VNF.
The running container services are:
tacker_server, tacker_conductor, barbican_worker, barbican_
Kindly suggest what I should check.
I tried referring to https:/
Question information
- Language: English
- Status: Solved
- For: tacker
- Assignee: Toshiaki Takahashi
- Solved by: Anshul Sehgal
#1
I see that we have to enable a monitoring driver by adding it to setup.py (https:/
My question: doesn't Tacker support ping and http_ping by default?
If so, is this clearly documented anywhere?
#2
This question was expired because it remained in the 'Open' state without activity for the last 15 days.
#3
Sorry for the very late reply...
Tacker supports ping and http_ping by default.
I think you can use them without any monitor_driver setting.
(part of tacker.conf in my env)
-------
[tacker]
#
# From tacker.vnfm.monitor
#
# Monitor driver to communicate with Hosting VNF/logical service instance
# tacker plugin will use (list value)
#monitor_driver = ping,http_ping
-------
Your monitoring policy looks correct.
It works correctly in the latest Tacker. (I show the entire VNFD I used at the end of this comment.)
Only tacker_conductor is required for ping monitoring to work.
I'd like to check a few points.
1. Would you show the whole VNFD?
I'm concerned about the following:
- Is the mgmt IP set correctly?
- Do you set a scaling policy together? Current Tacker has a bug when scaling and ping monitoring are used together.
2. Would you check the network connection between the OpenStack controller and the VM?
Just ping the VM's "Mgmt Ip Address" from the controller.
-------
$ openstack vnf list
+------
| ID | Name | Mgmt Ip Address | Status | VIM ID | VNFD ID |
+------
| 9223cf24-
+------
-------
3. Would you check without the 'retry' setting?
Tacker had a bug with that setting (now fixed).
I'm not sure whether it was fixed in the Pike cycle.
-------
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
description: Demo example
metadata:
  template_name: sample-tosca-vnfd
topology_template:
  node_templates:
    VDU1:
      type: tosca.nodes.nfv.VDU.Tacker
      properties:
        image: cirros-
        name: test_healing
        flavor: m1.tiny
        monitoring_policy:
          name: ping
          parameters:
            count: 3
            retry: 6
          actions:
            failure: respawn
    CP1:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        management: true
        order: 0
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1
    CP2:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        order: 1
      requirements:
        - virtualLink:
            node: VL2
        - virtualBinding:
            node: VDU1
    CP3:
      type: tosca.nodes.nfv.CP.Tacker
      properties:
        order: 2
      requirements:
        - virtualLink:
            node: VL3
        - virtualBinding:
            node: VDU1
    VL1:
      type: tosca.nodes.nfv.VL
      properties:
        vendor: Tacker
    VL2:
      type: tosca.nodes.nfv.VL
      properties:
        vendor: Tacker
    VL3:
      type: tosca.nodes.nfv.VL
      properties:
        vendor: Tacker
-------
#4
Thanks Toshiaki for your answer!
#5
Thanks, Toshiaki, for the information, and sorry for the late reply from my end too.
I will test this with a newer version, Train or Ussuri.
#6
Hi,
I have deployed Tacker stable/train.
I am still not getting monitoring to work.
For the management IP, I am using an external-
So after deploying a VNF, I get an external IP attached to it, and I am able to ping that IP from the tacker-conductor container.
Now, if I shut off the VNF in my OpenStack VIM, Tacker still shows it as ACTIVE and doesn't respawn it.
VNF Catalog
-------
description: Demo example
metadata:
  template_name: sample-tosca-vnfd
topology_template:
  node_templates:
    CP1:
      properties:
        management: true
        order: 0
      requirements:
        - virtualLink:
            node: VL1
        - virtualBinding:
            node: VDU1
      type: tosca.nodes.nfv.CP.Tacker
    VDU1:
      properties:
        flavor: Tacker_Flavor
        image: CentOS-root-login
        monitoring_policy:
          actions:
            failure: respawn
          name: ping
          parameters:
            count: 3
            retry: 6
        name: test_healing
      type: tosca.nodes.nfv.VDU.Tacker
    VL1:
      properties:
        vendor: Tacker
      type: tosca.nodes.nfv.VL
tosca_definitions_version: tosca_simple_profile_for_nfv_1_0_0
-------
tacker vnf-list
===========
(virtual_env) [root@localhost mistral]# tacker vnf-list
Deprecated: tacker command line is deprecated, will be deleted after Rocky is released.
+------
| id | name | mgmt_ip_address | status | vim_id | vnfd_id |
+------
| 7daa4d3a-
+------
tacker vnf-show Monitoring
=======
(virtual_env) [root@localhost mistral]# tacker vnf-show Monitoring
Deprecated: tacker command line is deprecated, will be deleted after Rocky is released.
+------
| Field | Value |
+------
| attributes | {"monitoring_
| created_at | 2020-08-06 14:01:03 |
| description | Demo example |
| error_reason | |
| id | 7daa4d3a-
| instance_id | 81140836-
| mgmt_ip_address | {"VDU1": "10.152.10.141"} |
| name | Monitoring |
| placement_attr | {"vim_name": "Staging_
| status | ACTIVE |
| tenant_id | c7fdb952003c49b
| updated_at | |
| vim_id | a3708b03-
| vnfd_id | a1e57a4a-
+------
#7
tacker-server logs: https:/
tacker-conductor logs: https:/
mistral-server logs: https:/
#8
Hi,
One observation:
If I restart the tacker_server container, then I do see the monitoring in the tacker-server logs.
But if I then delete the VNF, Tacker doesn't stop monitoring it. It keeps logging errors that the VNF IP is not reachable. If I restart the tacker_server container again, that error also goes away.
It seems some service is not being triggered at deployment time; it only comes into action when we restart the Tacker service.
#9
Hi,
Can anyone help with this?
#10
Sorry for the late reply again.
Would you check the setting of the Tacker worker number?
-------
$ grep worker /etc/tacker/
# Number of separate worker processes for service (integer value)
#api_workers = 0
# (``swift_
-------
I haven't fully reproduced your trouble yet.
However, when I change the above "api_workers" setting to 3, ping monitoring does not work in my environment either.
And after I restart tacker-server, monitoring works again.
I think this setting is not working properly.
I will continue to investigate it.
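One plausible mechanism for this symptom (an assumption on my side, not a confirmed analysis of the Tacker bug): with a nonzero `api_workers`, the API service forks multiple worker processes, and anything a monitor keeps in ordinary in-process memory exists only in the worker that registered it. Restarting the service can change which process holds that state. A small Python sketch of the general effect — the `registry` dict and `api_worker` function are hypothetical, purely for illustration:

```python
import multiprocessing as mp

registry = {}  # in-process table of VNFs to monitor (NOT shared between processes)

def api_worker(q):
    # A forked worker registers a VNF in *its own copy* of the registry.
    registry["vnf-1"] = "ping"
    q.put(len(registry))

ctx = mp.get_context("fork")  # fork (POSIX): child inherits a copy of parent memory
q = ctx.Queue()
p = ctx.Process(target=api_worker, args=(q,))
p.start()
seen_in_worker = q.get()
p.join()
print(seen_in_worker, len(registry))  # prints: 1 0 -- the parent's registry stays empty
```

The worker sees one registered VNF, but the parent (and every sibling worker) still sees none, which is consistent with monitoring only "appearing" after a restart reshuffles which process does the work.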
#11
Thanks, Toshiaki, for your reply.
Below are the requested details.
(tacker-
api_workers = 5
Note: This setting is not modified manually.
Complete tacker.conf (I should have provided it at the beginning; sorry for missing that.)
=======
cat /etc/kolla/
[DEFAULT]
debug = True
log_dir = /var/log/
transport_url = rabbit:
bind_host = 10.152.5.75
bind_port = 9890
api_workers = 5
service_plugins = nfvo,vnfm
[nfvo]
vim_drivers = openstack
[openstack_vim]
stack_retries = 60
stack_retry_wait = 10
[vim_keys]
use_barbican = True
[tacker]
monitor_driver = ping,http_ping
alarm_monitor_
[database]
connection = mysql+pymysql:
max_retries = -1
[keystone_authtoken]
www_authenticat
auth_url = http://
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = tacker
password = 7orSBqFSVhrQWCx
memcache_
memcache_secret_key = Hcz0f1rpX6fqdWg
memcached_servers = 10.152.5.75:11211
[alarm_auth]
username = tacker
password = 7orSBqFSVhrQWCx
project_name = service
url = http://
[ceilometer]
host = 10.152.5.75
port = 9890
[oslo_messaging_notifications]
transport_url = rabbit:
driver = messagingv2
topics = notifications
[glance_store]
filesystem_
I will also check after commenting out that parameter (api_workers).
#12
Hi Toshiaki,
Please find below an update after commenting out the parameter (#api_workers = 5).
After commenting it out, I restarted tacker_server and tacker_conductor too.
After this, I am seeing the ping monitor working.
I don't need to restart the tacker_server container anymore. I see ICMP packets for both the VIM and my VNF.
If I terminate the VNF, the monitor also stops.
If I shut off the VM from the OpenStack VIM, Tacker respawns the VM with the same IP.
I am very happy to see this working for the first time since I started working on Tacker. Thank you very much.
I have tested this with a single VDU. I will check with multiple VDUs also.
Could you confirm whether commenting out api_workers is a standard configuration, i.e. that it will not block any major Tacker feature?
#13
Hi,
I have tested multi-VDU monitoring also. This is also working fine. (Shutting off a single instance causes the whole stack to reinitialize.)
I also tested multi-VDU monitoring with the vdu_autoheal policy. (Shutting off a single instance just deletes and re-creates the faulty instance.)
In the docs (https:/
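For readers landing here: the difference between the two behaviours described above comes down to the `failure` action in the monitoring policy. A sketch of the vdu_autoheal variant, assuming the same ping parameters as the templates earlier in this thread (with respawn, the whole stack is recreated; with vdu_autoheal, only the faulty VDU's resources are):

```yaml
monitoring_policy:
  name: ping
  parameters:
    count: 3
    retry: 6
  actions:
    failure: vdu_autoheal   # instead of `respawn`, heal only the affected VDU
```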
#14
Thank you for the useful information!
First, we will fix the documentation.
I also think the problem is that "api_workers" does not work correctly.
I'll write docs explaining that commenting it out is recommended, and will also consider making "api_workers" work correctly.
#15
Thank you very much Toshiaki.
#16
Thank you for such a detailed explanation. I followed your advice and it worked.