Vanilla Plugin with default configuration throws "Connection Refused"

Asked by Marc Solanas


My setup is:

Ubuntu 12.04
OpenStack Havana with Vanilla Plugin

I have deployed a cluster with the following node groups:

1 x master:

  - uses 1 Cinder volume: 2 TB

2 x slaves:

  - each uses 1 Cinder volume: 2 TB

Both node groups use the following flavor:

VCPUs: 32
RAM: 250000 MB
Root disk: 300 GB
Ephemeral: 300 GB
Swap: 0

They also use the default Ubuntu Hadoop Vanilla image downloadable from

The /etc/hosts file on all nodes is:

  localhost
  test-master2T-001.novalocal test-master2T-001
  test-slave2T-001.novalocal test-slave2T-001
  test-slave2T-002.novalocal test-slave2T-002
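For comparison, a well-formed /etc/hosts should map each hostname to a fixed IP address on its own line. The addresses below are placeholders, not the cluster's actual ones:

```
127.0.0.1   localhost
10.0.0.10   test-master2T-001.novalocal test-master2T-001   # placeholder IP
10.0.0.11   test-slave2T-001.novalocal  test-slave2T-001    # placeholder IP
10.0.0.12   test-slave2T-002.novalocal  test-slave2T-002    # placeholder IP
```

If any hostname resolves to the wrong address (or not at all) on any node, Hadoop daemons can intermittently fail to reach each other.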

Without changing any of the default configuration, the cluster boots correctly.

The problem is that, when running a job (for example, a 100 GB teragen), the map tasks fail repeatedly and have to be re-executed, which increases the job time. The failures appear random, hitting one slave or the other depending on the run.

Checking the DataNode logs on the slaves, I can see this error:

 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Call to test-master2T-001/ failed on connection exception: Connection refused

Full error:
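The "Connection refused" suggests the slave cannot open a TCP connection to the NameNode RPC port on the master. A minimal Python sketch for checking reachability from a slave; the hostname is taken from the setup above, and port 8020 is an assumption (the common Hadoop 1.x fs.default.name port; adjust to whatever your cluster's configuration uses):

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and resolution failures.
        return False

# Run from a slave; hostname and port are assumptions from the setup above:
# port_open("test-master2T-001", 8020)
```

A persistent False here narrows the problem to networking (security groups, firewall, hosts file) rather than Hadoop itself.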

The DataNode log on the master gives this error:

WARN org.apache.hadoop.hdfs.server.datanode.DataNode: checkDiskError: exception: Original Exception : Connection reset by peer

Full error:

I have tried changing hadoop.tmp.dir to point to the 2 TB Cinder volume (/volumes/disk1/lib/hadoop/hdfs/tmp), but nothing changed.
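For reference, this is the shape of the override in core-site.xml; the path is the one from the question, and this sketch assumes you are editing the generated Hadoop configuration (or passing the same key through Sahara's cluster configs):

```xml
<property>
  <name>hadoop.tmp.dir</name>
  <value>/volumes/disk1/lib/hadoop/hdfs/tmp</value>
</property>
```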

Thank you in advance.

Question information

Language: English
Project: Sahara
Assignee: None
Dmitry Mescheryakov (dmitrymex) said :
