Problem: Accessing Horizon without the Fuel node

Asked by Nicolas Deixonne


I've deployed the Mirantis Kilo release on VirtualBox: two controllers (not three, because of resources) and one compute.
When I shut down the master node in the Mirantis Kilo release, I can't ping the public IP to reach Horizon or even the API of each component.

Access comes back only when I bring the master node up again; then I can ping it. I really don't understand: HAProxy seems to work and is well configured, but it's as if the virtual IP cannot exist without the Fuel master node.

I previously had IceHouse, so I tried to compare the configurations, and I moved the IceHouse snapshots to another computer. Surprise, surprise: there too I can no longer access the public IP, as if there were some configuration on the original computer that I'm missing. That doesn't make any sense.

It seems to be related to the Pacemaker component. In IceHouse that's because Pacemaker doesn't start any more, but when I compare the configurations it's very strange. It seems that Mirantis has changed its Pacemaker configuration since IceHouse.

Now I can see that the virtual public IP needs to pass through a gateway, which is the IP of the Fuel master. Why? I'd like to know, because this behaviour makes no sense to me.

Any ideas?


Question information

Language: English
Project: Fuel for OpenStack
Assignee: None
Dmitry Sutyagin (dsutyagin) said:


Check the type of the network used as the public network in VirtualBox. If it is NAT, you will not have direct access to it; this is by design. Unlike libvirt, which assigns the gateway IP of NATed networks to the host (allowing the host to reach those networks), VirtualBox does not, so NAT networks are isolated from the host.
You can either set up port forwarding for this network in the VirtualBox settings, or access the network via a router such as Fuel, which works out of the box.
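For reference, here is a sketch of the port-forwarding option for a per-VM NAT adapter. The VM name, rule name, and port numbers below are assumptions, not values from this deployment; adjust them to your own setup:

```shell
# Forward host port 8443 to port 443 (Horizon over HTTPS) inside the VM.
# "controller-1" and the ports are placeholders -- substitute your own.
VBoxManage modifyvm "controller-1" --natpf1 "horizon,tcp,,8443,,443"

# Confirm the rule was recorded on the NIC:
VBoxManage showvminfo "controller-1" | grep -i 'Rule'
```

The rule format is `name,protocol,hostip,hostport,guestip,guestport`; leaving `hostip`/`guestip` empty binds all host addresses and the guest's NAT address respectively.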

Dmitry Sutyagin (dsutyagin) said:

Additionally, I've come across an issue where NetworkManager (if the host is Linux and uses that service) tries to auto-configure the interfaces created by VirtualBox; when it fails, it removes the IP addresses from those interfaces, making the networks unreachable from the host (and breaking the internet connection in the virtual cloud). The solution was to edit NetworkManager's configuration to mark these interfaces as manually configured.
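A minimal sketch of that workaround, assuming the host-only interfaces follow VirtualBox's default `vboxnet*` naming (check yours with `ip a` first):

```shell
# Add to /etc/NetworkManager/NetworkManager.conf -- tell NetworkManager to
# leave VirtualBox host-only interfaces unmanaged (the vboxnet* glob is an
# assumption; match it to your actual interface names):
#
#   [keyfile]
#   unmanaged-devices=interface-name:vboxnet*

# Apply without rebooting:
sudo systemctl restart NetworkManager
```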

This issue was then added to documentation -

Nicolas Deixonne (powereborn) said:

Hi Dmitry,

Thank you very much for your response.

Actually, I deployed normally with the VirtualBox script provided by Mirantis, as with the previous releases. So there is no NAT configuration in VirtualBox except for the Fuel master node.
It should work the same, shouldn't it? I therefore have three host-only networks: one for SSH access to Fuel, another for the public IP, and a last one I don't care about.

When I read the Pacemaker configuration, it says that the public IP always goes through a gateway that is the Fuel master. Of course I don't want to depend on the Fuel master node; it isn't HA any more if the Fuel master is down.

Go into a controller VM, type "crm configure show" and look for "vip__public ocf:fuel:ns_IPaddr2"; you will find a gateway. In my case it's "", which is actually the IP of the Fuel master.

In the IceHouse release we had gateway="link" and not an IP. That's why we could access it from anywhere: it was done with iptables and masquerading.
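To make the comparison concrete, the resource can be inspected on any controller; the commented fragments below are hypothetical sketches of the two shapes being described (only the `gateway` parameter comes from this thread -- the rest of the primitive definition is a placeholder):

```shell
# On a controller, show just the public VIP resource:
crm configure show vip__public

# Hypothetical IceHouse-style fragment (gateway resolved via the link):
#   primitive vip__public ocf:fuel:ns_IPaddr2 \
#     params gateway="link" ...
#
# Hypothetical Kilo-style fragment (gateway pinned to a specific IP):
#   primitive vip__public ocf:fuel:ns_IPaddr2 \
#     params gateway="<public-gateway-ip>" ...
```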

So my question is: why is the configuration so different now?

I hope you will understand,
Thanks!

Dmitry Sutyagin (dsutyagin) said:


Thank you for the update. I won't focus on the change to the vip__public resource in Pacemaker yet. I have checked inside a 7.0 test environment and, yes, the value of gateway has changed from "link" to the actual IP of the public gateway, but this should not affect connectivity.

Please check that there is a default route inside haproxy namespace on the node which runs the `vip__public` resource:

root@node-5:~# ip netns exec haproxy ip r | grep default
default via dev b_public metric 10
default via dev hapr-ns metric 10000

The gateway IP address for b_public is the address you specified during deployment in the Networks tab; if you want to NAT/route this network via your host, it should be the IP configured in the VirtualBox settings for this network. If it is not (and it looks like it is not), then reconfigure the VirtualBox network (but first remove it from Fuel, or shut Fuel down, to prevent an IP collision) and assign that address as the IP address for the public host-only network.
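That reconfiguration can also be done from the command line; a sketch, where the interface name `vboxnet1` and the address/netmask are placeholders for your own values, not values taken from this deployment:

```shell
# Assign the public gateway address to the host-only interface.
# vboxnet1 and 172.16.0.1/255.255.255.0 are assumptions -- substitute your own.
VBoxManage hostonlyif ipconfig vboxnet1 --ip 172.16.0.1 --netmask 255.255.255.0

# Verify the interface configuration:
VBoxManage list hostonlyifs
```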

The fact that this is the IP of Fuel must come from the way you configured your Fuel server during installation. The fact that your public VIP points to this IP must come from how you configured networks in the Fuel UI/CLI prior to deployment. Usually this is the IP of the "external" gateway, which in a VirtualBox deployment is usually the IP of the VirtualBox bridge on your host.

If that IP is indeed configured in VirtualBox and the Fuel server is offline, but you still cannot access the public network, then please attach:
- screenshots showing your host-only network settings in VirtualBox (both tabs)
- a file with the output of `cibadmin -Q` command executed on any controller
- a file with the output of `ip netns exec haproxy ip r` executed on the host which runs the vip__public resource
- a screenshot of Networks tab for this environment from Fuel UI showing public gateway setting
- a file with the output of the `ip a` command executed on your host if it is Linux-based/OS X, or `ipconfig /all` if using Windows.

Nicolas Deixonne (powereborn) said:

Hi Dmitry,

I finally solved the problem, but it's very strange.
When the cluster is started, I have to run the command "crm_resource --resource vip__public --force-start"; after that, I can ping the public IP indefinitely. Could you explain why it works with that command?

Thank you for all your explanations; that's great and helpful, not just for me but for other people who will run into this kind of problem in the future.


Fabrizio Soppelsa (fsoppelsa) said:

That might be due to the even number of controllers (better to deploy 1 instead of 2).
With 2, you could experience split-brain or quorum conflicts.
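The quorum arithmetic behind that advice can be sketched in shell: a Pacemaker/Corosync cluster needs a strict majority of votes, floor(n/2) + 1, so a 2-node cluster loses quorum the moment either node goes down, while gaining nothing over a single node in fault tolerance:

```shell
# quorum = floor(n/2) + 1; the cluster survives (n - quorum) node failures
for n in 1 2 3; do
  q=$(( n / 2 + 1 ))
  echo "$n controller(s): quorum=$q, tolerates $(( n - q )) failure(s)"
done
# 1 controller(s): quorum=1, tolerates 0 failure(s)
# 2 controller(s): quorum=2, tolerates 0 failure(s)
# 3 controller(s): quorum=2, tolerates 1 failure(s)
```

This is why even-sized clusters are discouraged: two controllers tolerate zero failures, just like one, but with twice the chance of a node failing.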
