autonfs.sh[855]: Fileserver is down

Asked by ealthuis

This message appears on each of three clients when they attempt to mount the required partitions found on the server.

The setup: four computers are involved, the server, two other desktops, and a laptop. Three have a 120 GB SSD and the laptop has a 250 GB SSD. The SSDs are used for the system and /home, but the server, with two 1 GB hard drives, is used for storage of most files and for weekly backups, which are then available to all four computers.

This system has been working for at least five years, but has now failed with the above message.

The 1 GB drives each have two partitions; these are mounted by /etc/fstab and also mounted on /export for NFS use.
nfs-kernel-server is installed on the server and nfs-common on the clients. The clients also run autonfs.sh to mount the server partitions.
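For context, autonfs.sh is a community script rather than an Ubuntu package; a minimal sketch of what such a watchdog does (hypothetical server address and share list, assuming mount points under /media) is:

```shell
#!/bin/bash
# Illustrative sketch of an autonfs-style watchdog, not the actual script.
# Assumed values: adjust SERVER and SHARES to your environment.
SERVER="192.168.0.103"
SHARES="/export/mass /export/mass0 /export/mass1 /export/mass2"
MOUNTROOT="/media"   # local mount points: /media/mass, /media/mass0, ...

# Ping the fileserver; if it is unreachable, log the failure and stop.
if ! ping -c 1 -W 2 "$SERVER" > /dev/null 2>&1; then
    logger -t autonfs.sh "Fileserver is down"
    exit 1
fi

# Mount every share that is not already mounted.
for share in $SHARES; do
    mp="$MOUNTROOT/$(basename "$share")"
    if ! mountpoint -q "$mp"; then
        mount -t nfs "$SERVER:$share" "$mp"
    fi
done
```

This also shows where a message like "autonfs.sh[855]: Fileserver is down" would come from: the script logs it whenever its reachability check against the server fails.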

I have no doubt that the failure was caused by installing ubuntu-unity7 18.04 LTS, because that is when the refusals started.
After a long way around, I ended up with Bionic Beaver again and rebuilt the NFS arrangement.

I checked permissions; all seem correct.

I checked firewalls, and used firewalld to allow traffic between the clients and the server.

Zone=home, services nfs and nfs3, port 2049 with both tcp and udp. These settings are in permanent storage.
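For reference, the firewalld settings described would typically be applied on the server with commands like these (zone, services, and port as stated above):

```shell
# Allow NFS traffic in the home zone, permanently (requires firewalld).
sudo firewall-cmd --zone=home --add-service=nfs --permanent
sudo firewall-cmd --zone=home --add-service=nfs3 --permanent
sudo firewall-cmd --zone=home --add-port=2049/tcp --permanent
sudo firewall-cmd --zone=home --add-port=2049/udp --permanent
sudo firewall-cmd --reload

# Verify what the zone actually allows:
sudo firewall-cmd --zone=home --list-all
```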

Yet with all of those settings the result is still "Fileserver is down".

I am at wit's end and could use some help.

Question information

Language: English
Status: Answered
For: Ubuntu
Assignee: No assignee

This question was reopened

Revision history for this message
ealthuis (ealthuis) said :
#1

Please also show me how to move this problem from LibreOffice to Ubuntu.

Revision history for this message
Manfred Hampl (m-hampl) said :
#2

I cannot help with the mount problem, but I can tell you how to relocate this question to the right package or project:

Open this question on answers.launchpad.net: https://answers.launchpad.net/ubuntu/+source/libreoffice/+question/677442

Below the original question text there is a block with the heading "Question information".
In the middle of it you can see, among others:

Status:
Open

For:
(logo) Ubuntu libreoffice (pencil)

If you click on the pencil icon, a new window opens where you can enter the target for the question.
If you want Ubuntu globally, without a specific package, select "Ubuntu" as the value for Distribution and empty the field beside Package.
If you want a certain Ubuntu package as the target, select "Ubuntu" as the value for Distribution and put the source package name into the "Package" field (like the current value "libreoffice").

Revision history for this message
ealthuis (ealthuis) said :
#3

Made the change as above, thank you Manfred.

On with the actual problem: I will go over the permissions on all of the files and apps and see what that brings.

Revision history for this message
ealthuis (ealthuis) said :
#4

Tried mounting directly and got this result:

ea@seanix:~$ sudo mount -t nfs -o proto=tcp,port=2049 192.168.0.103:/export/mass /media/mass
[sudo] password for ea:
mount.nfs: access denied by server while mounting 192.168.0.103:/export/mass
ea@seanix:~$

This indicates a block on the server computer.

Revision history for this message
Manfred Hampl (m-hampl) said :
#5

I suggest that you check the "export" status on the NFS server, e.g. by listing the contents of the files /etc/exports (including /etc/exports.d/*.exports) and /var/lib/nfs/etab, as well as running the command "sudo exportfs -v".

Revision history for this message
ealthuis (ealthuis) said :
#6

I have listed all of the files you mentioned; most are similar, but the last one may have a problem, in that the examples indicate I should enter something to identify NFS. I am at a loss with the syntax.

ea@discovery:~$ sudo exportfs -v
/export 192.168.1.0/24(rw,async,wdelay,insecure,root_squash,no_subtree_check,fsid=0,sec=sys,rw,insecure,root_squash,no_all_squash)
/export/mass 192.168.1.0/24(rw,async,wdelay,insecure,root_squash,no_subtree_check,fsid=0,sec=sys,rw,insecure,root_squash,no_all_squash)
/export/mass0 192.168.1.0/24(rw,async,wdelay,insecure,root_squash,no_subtree_check,fsid=0,sec=sys,rw,insecure,root_squash,no_all_squash)
/export/mass1 192.168.1.0/24(rw,async,wdelay,insecure,root_squash,no_subtree_check,fsid=0,sec=sys,rw,insecure,root_squash,no_all_squash)
/export/mass2 192.168.1.0/24(rw,async,wdelay,insecure,root_squash,no_subtree_check,fsid=0,sec=sys,rw,insecure,root_squash,no_all_squash)
ea@discovery:~$

/var/lib/nfs/etab
/export/mass2 192.168.1.0/24(rw,async,wdelay,hide,nocrossmnt,insecure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,insecure,root_squash,no_all_squash)
/export/mass1 192.168.1.0/24(rw,async,wdelay,hide,nocrossmnt,insecure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,insecure,root_squash,no_all_squash)
/export/mass0 192.168.1.0/24(rw,async,wdelay,hide,nocrossmnt,insecure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,insecure,root_squash,no_all_squash)
/export/mass 192.168.1.0/24(rw,async,wdelay,hide,nocrossmnt,insecure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,insecure,root_squash,no_all_squash)
/export 192.168.1.0/24(rw,async,wdelay,hide,nocrossmnt,insecure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,insecure,root_squash,no_all_squash)

/etc/exports.d/*.exports
/export/mass2 192.168.1.0/24(rw,async,wdelay,hide,nocrossmnt,insecure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,insecure,root_squash,no_all_squash)
/export/mass1 192.168.1.0/24(rw,async,wdelay,hide,nocrossmnt,insecure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,insecure,root_squash,no_all_squash)
/export/mass0 192.168.1.0/24(rw,async,wdelay,hide,nocrossmnt,insecure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,insecure,root_squash,no_all_squash)
/export/mass 192.168.1.0/24(rw,async,wdelay,hide,nocrossmnt,insecure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,insecure,root_squash,no_all_squash)
/export 192.168.1.0/24(rw,async,wdelay,hide,nocrossmnt,insecure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=0,anonuid=65534,anongid=65534,sec=sys,rw,insecure,root_squash,no_all_squash)

/etc/exports
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
# exporting to a local network
/export 192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async)
/export/mass 192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async)
/export/mass0 192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async)
/export/mass1 192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async)
/export/mass2 192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async)

Revision history for this message
Manfred Hampl (m-hampl) said :
#7

I am a bit astonished to see references to identical file systems both in /etc/exports and in /etc/exports.d/*.exports.
I do not think that this makes sense.
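In NFSv4, fsid=0 marks the pseudo-root of the exported tree, yet the listings above attach it to every export; exports sharing fsid=0 end up with the same file handle. As an illustrative sketch only (keeping the network value exactly as listed, which may itself need checking), an /etc/exports with fsid=0 on the root alone would look like:

```
/export        192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async)
/export/mass   192.168.1.0/24(rw,insecure,no_subtree_check,async)
/export/mass0  192.168.1.0/24(rw,insecure,no_subtree_check,async)
/export/mass1  192.168.1.0/24(rw,insecure,no_subtree_check,async)
/export/mass2  192.168.1.0/24(rw,insecure,no_subtree_check,async)
```

After editing, `sudo exportfs -ra` re-reads the file; any duplicate entries under /etc/exports.d would need the same treatment.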

Revision history for this message
ealthuis (ealthuis) said :
#8

I also cannot make much sense of that. I must add that I never put anything into /etc/exports.d/*.exports,
or, for that matter, into exportfs.

I just want to know how to make the lines:

# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)

work for me after removing the #

What do I replace "gss/krb5i" with, as well as "/srv/nfs4/homes gss/krb5i"?

Maybe that would solve my problem?

Revision history for this message
Manfred Hampl (m-hampl) said :
#9

The first paragraphs of http://chschneider.eu/linux/server/nfs.shtml may show the root cause of your problem.
If (with the upgrade from Ubuntu 16.04 to 18.04) the NFS version changed from 3 to 4, then you have to reconsider your means of authentication.

I suggest that you read https://help.ubuntu.com/community/NFSv4Howto and/or https://help.ubuntu.com/community/SettingUpNFSHowTo

Revision history for this message
ealthuis (ealthuis) said :
#10

I have spent all morning, from about 6:00 until 11:00, and got nowhere; I have read all the suggested documents and will continue until I make it work.

I did this:
ea@seanix:~$ sudo mount -t nfs -o proto=tcp,port=2049 192.168.0.103:/export/mass /media/mass

it timed out

I am going to call this "solved" as it is getting too complex and long. I will open a new question if needed.

Thank you once again Manfred for your help and insight
Emanuel

Revision history for this message
Manfred Hampl (m-hampl) said :
#11

Just a naive question to make sure that we do not overlook the obvious:

Your server's IP address seems to be 192.168.__0__.103 and it allows access for 192.168.__1__.0/24

What are the IP addresses of the clients, are they in the 192.168.1.* range?
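To make sure the addresses and the export list line up, a quick check from a client might be (assuming the server is 192.168.0.103):

```shell
# On a client: show its own IPv4 addresses, to compare against the
# network allowed in /etc/exports on the server.
ip -4 addr show

# Query the server's export list and which networks may mount it
# (showmount comes with nfs-common; mountd must be reachable).
showmount -e 192.168.0.103
```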

Revision history for this message
ealthuis (ealthuis) said :
#12

I changed all places where I found .1.0 to .0.1

Then I manually mounted: sudo mount -t nfs -o proto=tcp,port=2049 192.168.0.103:export/mass /media/mass

Received this answer: mount.nfs: Connection timed out

Started looking in syslog and found shortly after re-start of system the next lines:

Jan 12 06:17:01 discovery CRON[2414]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Jan 12 06:20:18 discovery rpc.mountd[1256]: /srv/nfs4 and /export have same filehandle for 192.168.0.1/24, using first
Jan 12 06:20:18 discovery rpc.mountd[1256]: /srv/nfs4 and /export/mass have same filehandle for 192.168.0.1/24, using first
Jan 12 06:20:18 discovery rpc.mountd[1256]: /srv/nfs4 and /export/mass0 have same filehandle for 192.168.0.1/24, using first
Jan 12 06:20:18 discovery rpc.mountd[1256]: /srv/nfs4 and /export/mass1 have same filehandle for 192.168.0.1/24, using first
Jan 12 06:20:18 discovery rpc.mountd[1256]: /srv/nfs4 and /export/mass2 have same filehandle for 192.168.0.1/24, using first

After this there are no other references to NFS, only some about /mass/trash*, and no mounts on the clients.

Revision history for this message
Manfred Hampl (m-hampl) said :
#13

Where does this "/srv/nfs4" come from? It looks like the sample in the template files and is probably not applicable to your system!

Revision history for this message
ealthuis (ealthuis) said :
#14

/srv/nfs4 contained /export and /home; both were empty, so I removed the lot.

Need to go out for a while, will find the reason for that directory later.

Revision history for this message
ealthuis (ealthuis) said :
#15

Back at it: I used the "nfs4howto" document and found one problem. The server "nfs-server" does not exist:

ea@seanix:~$ sudo mount -t nfs4 -o proto=tcp,port=2049 nfs-server:/ /mnt
[sudo] password for ea:
mount.nfs4: Failed to resolve server nfs-server: Name or service not known
ea@seanix:~$ sudo mount -t nfs -o proto=tcp,port=2049 nfs-server:/ /mnt
mount.nfs: Failed to resolve server nfs-server: Name or service not known
ea@seanix:~$ sudo apt-get install nfs-common
Reading package lists... Done
Building dependency tree
Reading state information... Done
nfs-common is already the newest version (1:1.3.4-2.1ubuntu5).
0 upgraded, 0 newly installed, 0 to remove and 6 not upgraded.
ea@seanix:~$

nfs-kernel-server is also the newest version.

hosts.deny is set to ALL,
hosts.allow is set to the required IPs.
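Since hosts.deny is set to ALL, the allow file has to cover every RPC daemon involved, not just one; a sketch with an assumed subnet (adjust to your network) is below. Note that the in-kernel NFSv4 server itself does not consult TCP wrappers, so these files mainly affect rpcbind and rpc.mountd:

```
# /etc/hosts.deny
ALL: ALL

# /etc/hosts.allow
rpcbind mountd nfsd statd lockd rquotad: 192.168.0.0/24
```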

As another interesting point: in all the years that I have used NFS, I have had no problems on the server; any that I did have were on the clients.

Revision history for this message
Manfred Hampl (m-hampl) said :
#16

You can use a command like
sudo mount -t nfs4 … nfs-server:/ /mnt
only if all your clients are able to translate the name "nfs-server" to its IP address, e.g. via domain name service.
To test this you can use the ping command:
ping nfs-server

If you do not have such a system in place (which I assume), then you have to use the numeric IP address, e.g.
sudo mount -t nfs4 … 192.168.0.103:/ /mnt
(replace the IP address with the real value in your environment)
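As an aside, the easiest way on a small LAN to make a name like "nfs-server" resolvable, short of running DNS, is an entry in /etc/hosts on each client (illustrative address):

```
# /etc/hosts (on each client)
192.168.0.103   nfs-server
```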

Revision history for this message
ealthuis (ealthuis) said :
#17

Thanks; you too must be curious as to what the problem is. I have followed all of the recommendations, and I still get "timed out" on any mount with the correct IP address.

The problem, as I see it, is with 18.04 LTS and/or NFSv4.

I am just adding this comment as the problem has not been found and resolved. You too, Manfred, have spent a lot of time on this, and for that I thank you.

Is it time to look at a bug?
