LinuxDC++ with multiple NICs?

Asked by Elliot Chance

I've been using linuxdc++ for some time now and it's great. In the past I've tried bonding multiple physical Ethernet ports together, which works some of the time, but some hardware doesn't like it.

1. Assuming eth0 to eth3, each with a static IP - does linuxdc++ use the first IP and only operate through that single static IP?
2. Is there a way the up and down traffic could be managed through multiple static IPs? Possibly:
  a) Run 4 instances and somehow tie each linuxdc++ instance to a physical NIC (this would appear on the hub like 4 people sharing the same number of files: Person1, Person2, etc.). However, even if this were possible, I know it's not recommended to run multiple instances of linuxdc++.
  b) Use a few virtual machines in VMware, so I can give each machine a physical NIC, but this is very resource-demanding and I'm not counting on the performance being that great.

Like I said, bonding is ideal, but it can be choppy or not work at all on some hardware, which is unfortunate.

Any ideas? Thanks.

Razzloss (razzloss) said :

I'd recommend bonding the interfaces if at all possible, but

1. If you haven't changed the bind address (defined in the Experts only tab), linuxdcpp will listen for connections on all interfaces/addresses. In that case it is up to the OS to route the upstream traffic. Dividing incoming traffic between interfaces is probably not possible with multiple addresses (at least I can't come up with anything at the moment).
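If each NIC keeps its own address, the OS side of this can be influenced with Linux policy routing, which picks a route based on the packet's source address. A rough sketch, run as root; the addresses, gateway, interface names and table numbers below are made-up examples, not values from this thread:

```shell
# Packets sourced from eth0's address use routing table 10
ip rule add from 192.168.1.10 table 10
ip route add default via 192.168.1.1 dev eth0 table 10

# Packets sourced from eth1's address use routing table 11
ip rule add from 192.168.1.11 table 11
ip route add default via 192.168.1.1 dev eth1 table 11
```

This only helps if each application socket is actually bound to the intended source address; unbound sockets will still follow the main routing table.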

2. a) Running multiple instances is perfectly fine. The only limitation is that they must not share the same profile directory (it should actually prevent any attempt to use the same profile dir for two or more instances). So you can redefine your HOME environment variable so that it points to a different place for each instance.
For example, to start 2 instances one could use something like the following (from an xterm):
HOME=$HOME/Person1 linuxdcpp &
HOME=$HOME/Person2 linuxdcpp &

This will create two different profile directories, one in each $HOME/PersonX/.dc++/, and you can bind each of the instances to a different port/address. This might work for dividing the up and down traffic between interfaces, but I'd guess that is also up to the OS; try it and see how it behaves.
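The two commands above can be wrapped in a small launcher script so every instance gets its own profile directory automatically. A sketch, assuming linuxdcpp is on $PATH; the PersonX names are just placeholders:

```shell
#!/bin/sh
# Launch one linuxdcpp instance per profile name.
# Each instance gets its own HOME, so its profile lands in
# a separate $HOME/PersonX/.dc++/ directory.
BASE="$HOME"
for name in Person1 Person2; do
    mkdir -p "$BASE/$name"            # profile directory for this instance
    HOME="$BASE/$name" linuxdcpp &    # start with the overridden HOME
done
```

After the first start, set a different bind address/port in each instance's settings so they don't all listen on the same socket.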

b) I'd drop this idea, since running multiple instances has far less overhead, particularly with VMware (I haven't been able to get decent I/O speed with it even with 3 RAID0 15k SAS disks).


Elliot Chance (elliotchance) said :

Thanks for the quick response.

Using multiple instances seems like the way to go.

FYI - I tried VMware just out of curiosity and hashing is 90% slower there than in the host's linuxdc++.

So if I run 4 instances of linuxdc++ with different profile directories and change the bind address on each one, it will in fact be completely transparent to the server, as if 4 different people were connected to the hub?

If the above is true, then it's also theoretically possible that a person could download from 2 or more of these instances at the same time, right? And dropping a single linuxdc++ instance shouldn't affect downloaders, because their clients will automatically find the identical TTHs and continue downloading?

Best Razzloss (razzloss) said :

It will be transparent to the hub if the source addresses on outgoing packets are set correctly, meaning that each connection is seen as originating from the bound address. I have no idea whether that is the case, or even whether bind is used for outgoing connections (there's probably a quick and dirty way to ensure it is by adding a few lines of code). I'm afraid you'll have to test and see how it behaves.
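One way to test this, once the instances are running and transferring, is to inspect the local-address column of their established TCP connections. A diagnostic sketch; `ss` comes from the iproute2 package, and `netstat -tnp` is an older alternative:

```shell
# List established TCP sockets with owning process names;
# the "Local Address:Port" column shows which source address
# each linuxdcpp connection actually uses.
ss -tnp | grep linuxdcpp
```

If every instance's connections show its own bound address, the outgoing side is behaving as hoped; if they all show the same address, bind is not being applied to outgoing connections.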

The last two questions are both correct, and would be even if there were only one interface and multiple instances.


Elliot Chance (elliotchance) said :

Thanks Razzloss, that solved my question.