Disk checked on every boot after upgrade to Quantal

Asked by Francisco Reverbel on 2012-11-12

After I upgraded to Quantal (12.10), /dev/sda1 is checked on every boot.

The command

sudo tune2fs -l /dev/sda1

tells me that the file system is not clean. If I enter recovery mode, get a root shell and run fsck, then the next reboot does not do a file system check. However, if I reboot once more, the file system check happens again.

It appears that the volume is not being properly unmounted for some reason. The only situation in which I do not get a file system check at boot time is when I get a root shell in recovery mode, then say "fsck" and "reboot". This problem did not happen on Precise (12.04).

/dev/sda1 is an SSD. Perhaps this is relevant? (I already tried removing all the SSD tuning options from the fstab entry for /dev/sda1, but it didn't help.)

Thanks,

Francisco

Question information

Language:
English
Status:
Solved
For:
Ubuntu util-linux
Assignee:
No assignee
Solved by:
Francisco Reverbel
Solved:
2012-11-14
Last query:
2012-11-14
Last reply:
2012-11-12
N1ck 7h0m4d4k15 (nicktux) said : #1

Try setting a positive number with tune2fs and the -c option (max-mount-counts). By default this value is -1, which means the mount count is disregarded. Try increasing the value, e.g.:

~$ sudo tune2fs -c 10 /dev/sda1

and see if your problem is solved.
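To verify that the new value took effect, the counters can be read back from tune2fs. A minimal sketch, parsing a saved copy of the output so it runs without root (the values shown are illustrative, not from the asker's system; on a live system you would pipe `sudo tune2fs -l /dev/sda1` into the same grep):

```shell
#!/bin/sh
# Illustrative sample of the two relevant `tune2fs -l` lines (values assumed).
tune2fs_output='Mount count:              3
Maximum mount count:      10'

# On a live system this would be:
#   sudo tune2fs -l /dev/sda1 | grep -E 'Mount count|Maximum mount count'
printf '%s\n' "$tune2fs_output" | grep -E 'Mount count|Maximum mount count'
```

When "Mount count" reaches "Maximum mount count", fsck runs at the next boot and the counter resets.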

Maybe this is a bug in fsck's behavior with SSDs.

Thanks

Francisco Reverbel (reverbel) said : #2

Hello NikTh,

Setting max-mount-counts didn't solve the problem. Even with max-mount-counts set to 30, there is still a filesystem check on every reboot.

I do not understand how the filesystem enters the "not clean" state... This is what I am doing over and over:

1) I run "sudo tune2fs -l /dev/sda1" and see that the file system state is not clean, so I enter recovery mode and get a root shell.

2) From the root shell I run tune2fs again and verify that the filesystem is still not clean (as expected), so I run fsck.

3) Now (still from the root shell, in recovery mode) tune2fs tells me that the filesystem is clean. Good.

4) At this point I issue the reboot command from the root shell.

5) This time there is no filesystem check at reboot. Good.

6) Now I open a terminal window and run "sudo tune2fs -l /dev/sda1". For some reason, the filesystem is back to the "not clean" state...

Any help is very much appreciated. Thanks,

Francisco

N1ck 7h0m4d4k15 (nicktux) said : #3

Can you post the output of

~$ cat /etc/fstab

~$ sudo blkid

Thanks

Francisco Reverbel (reverbel) said : #4

This is my /etc/fstab:

~$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda1 during installation
UUID=29d97510-32de-489f-8328-550665be6503 / ext4 noatime,nodiratime,nobarrier,discard,data=writeback,errors=remount-ro 0 1
tmpfs /tmp tmpfs nodev,nosuid,exec,mode=1777 0 0

I have tried removing all the SSD tuning options (noatime,nodiratime,nobarrier,discard,data=writeback) and leaving just errors=remount-ro, but there was no change.
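For reference, with all the tuning options stripped, the root entry reduces to (same UUID as above):

```
UUID=29d97510-32de-489f-8328-550665be6503 / ext4 errors=remount-ro 0 1
```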

This is the output from blkid:

~$ sudo blkid
/dev/sda1: UUID="29d97510-32de-489f-8328-550665be6503" TYPE="ext4"

Thanks!

N1ck 7h0m4d4k15 (nicktux) said : #5

How old is the SSD?

Francisco Reverbel (reverbel) said : #6

Almost seven months. I got the SSD in a Samsung Series 9 ultrabook (model NP900X4B-A02US), which I bought brand new last April.

N1ck 7h0m4d4k15 (nicktux) said : #7

Try this

~$ sudo updatedb

~$ locate forcefsck

Does the second command give any results?
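Since locate only sees what the updatedb database has recorded, a direct check of the flag files is more conclusive. A minimal sketch (/forcefsck forces a boot-time check; /fastboot is the companion flag that skips it; verify both paths against your release):

```shell
#!/bin/sh
# Check directly for the boot-time fsck flag files rather than
# relying on the locate database being current.
for f in /forcefsck /fastboot; do
    if [ -e "$f" ]; then
        echo "$f: present"
    else
        echo "$f: absent"
    fi
done
```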

Francisco Reverbel (reverbel) said : #8

Both commands run silently (no output):

reverbel@skinny:~$ sudo updatedb
reverbel@skinny:~$ locate forcefsck
reverbel@skinny:~$

N1ck 7h0m4d4k15 (nicktux) said : #9

Your fstab seems OK (as you said, you already tried removing all the extra options), so I can only suggest one more thing (as a last effort).

Boot from a LiveCD and check the filesystem. If the filesystem is clean, then this is probably a bug and you should convert this question (close/solve) into a bug report. If it is not clean, then something is going on with the filesystem, or with the shutdown of Ubuntu: maybe not all devices are being unmounted properly.

Search for any messages related to fsck in boot.log:

~$ grep -i -A10 -B10 "fsck" /var/log/boot.log

Thanks

Francisco Reverbel (reverbel) said : #10

I did what you suggested. (Sorry for the delay, I had to find an external CD drive.) I booted from a 12.04 LiveCD. /dev/sda1 was not clean:

ubuntu@ubuntu:~$ sudo fsck -V /dev/sda1
fsck from util-linux 2.20.1
[/sbin/fsck.ext4 (1) -- /dev/sda1] fsck.ext4 /dev/sda1
e2fsck 1.42 (29-Nov-2011)
/dev/sda1 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong (22280176, counted=22280160).
Fix<y>? yes

Free inodes count wrong (7397070, counted=7397069).
Fix<y>? yes

/dev/sda1: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sda1: 418099/7815168 files (0.5% non-contiguous), 8978542/31258702 blocks
ubuntu@ubuntu:~$ sudo fsck -V /dev/sda1
fsck from util-linux 2.20.1
[/sbin/fsck.ext4 (1) -- /dev/sda1] fsck.ext4 /dev/sda1
e2fsck 1.42 (29-Nov-2011)
/dev/sda1: clean, 418099/7815168 files, 8978542/31258702 blocks
ubuntu@ubuntu:~$

After fixing the filesystem, I was able to mount it, use it and unmount it:

ubuntu@ubuntu:~$ sudo mount -o rw,noatime,nodiratime,nobarrier,discard,data=writeback,errors=remount-ro /dev/sda1 /mnt
ubuntu@ubuntu:~$ ls /mnt
bin   cdrom  etc   initrd.img      lib    libnss3.so  media  opt   root  sbin     srv  tmp  var  vmlinuz.old
boot  dev    home  initrd.img.old  lib64  lost+found  mnt    proc  run   selinux  sys  usr  vmlinuz
ubuntu@ubuntu:~$ sudo umount /dev/sda1
ubuntu@ubuntu:~$ sudo fsck -V /dev/sda1
fsck from util-linux 2.20.1
[/sbin/fsck.ext4 (1) -- /dev/sda1] fsck.ext4 /dev/sda1
e2fsck 1.42 (29-Nov-2011)
/dev/sda1: clean, 418099/7815168 files, 8978542/31258702 blocks
ubuntu@ubuntu:~$

/dev/sda1 remained clean after unmounting, so I rebooted to Quantal (12.10). This time there was no filesystem check at reboot. However, from a terminal window in Quantal, I see that /dev/sda1 is not clean again:

reverbel@skinny:~$ sudo tune2fs -l /dev/sda1
tune2fs 1.42.5 (29-Jul-2012)
Filesystem volume name: <none>
Last mounted on: /
Filesystem UUID: 29d97510-32de-489f-8328-550665be6503
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: not clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 7815168
Block count: 31258702
Reserved block count: 1562935
Free blocks: 22279820
Free inodes: 7397051
... [snip]

Since /dev/sda1 was clean when 12.10 was booting up and is not clean anymore when 12.10 is up and running, the problem does not seem to be at shutdown time. Should I file a bug?

Francisco Reverbel (reverbel) said : #11

Now I see that just mounting /dev/sda1 makes the filesystem state change from "clean" to "not clean". Unmounting /dev/sda1 brings it back to the clean state. This is a terminal session with the system booted from a 12.04 LiveCD:

ubuntu@ubuntu:~$ sudo fsck -V /dev/sda1
fsck from util-linux 2.20.1
[/sbin/fsck.ext4 (1) -- /dev/sda1] fsck.ext4 /dev/sda1
e2fsck 1.42 (29-Nov-2011)
/dev/sda1: clean, 418110/7815168 files, 8979230/31258702 blocks
ubuntu@ubuntu:~$ sudo tune2fs -l /dev/sda1 | grep "Filesystem state:"
Filesystem state: clean
ubuntu@ubuntu:~$ sudo mount -o rw,noatime,nodiratime,nobarrier,discard,data=writeback,errors=remount-ro /dev/sda1 /mnt
ubuntu@ubuntu:~$ sudo tune2fs -l /dev/sda1 | grep "Filesystem state:"
Filesystem state: not clean
ubuntu@ubuntu:~$ sudo umount /dev/sda1
ubuntu@ubuntu:~$ sudo tune2fs -l /dev/sda1 | grep "Filesystem state:"
Filesystem state: clean
ubuntu@ubuntu:~$ sudo fsck -V /dev/sda1
fsck from util-linux 2.20.1
[/sbin/fsck.ext4 (1) -- /dev/sda1] fsck.ext4 /dev/sda1
e2fsck 1.42 (29-Nov-2011)
/dev/sda1: clean, 418110/7815168 files, 8979230/31258702 blocks

The same thing happens if mount is issued without the SSD tuning options:

ubuntu@ubuntu:~$ sudo tune2fs -l /dev/sda1 | grep "Filesystem state:"
Filesystem state: clean
ubuntu@ubuntu:~$ sudo mount -o errors=remount-ro /dev/sda1 /mnt
ubuntu@ubuntu:~$ sudo tune2fs -l /dev/sda1 | grep "Filesystem state:"
Filesystem state: not clean
ubuntu@ubuntu:~$ sudo umount /dev/sda1
ubuntu@ubuntu:~$ sudo tune2fs -l /dev/sda1 | grep "Filesystem state:"
Filesystem state: clean
ubuntu@ubuntu:~$ sudo fsck -V /dev/sda1
fsck from util-linux 2.20.1
[/sbin/fsck.ext4 (1) -- /dev/sda1] fsck.ext4 /dev/sda1
e2fsck 1.42 (29-Nov-2011)
/dev/sda1: clean, 418110/7815168 files, 8979230/31258702 blocks

I don't see this behavior on other disks. Perhaps this happens because /dev/sda1 is an ext4 volume with no journal?
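The no-journal hypothesis is easy to check: a journalled ext4 filesystem lists has_journal on the "Filesystem features" line, and the tune2fs output pasted in comment #10 does not include it. A sketch of the check, run against that pasted line so it needs no root (on the live system, pipe `sudo tune2fs -l /dev/sda1` into the same grep instead):

```shell
#!/bin/sh
# The features line from the tune2fs -l output posted earlier in this thread.
features='Filesystem features: ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize'

# A journalled ext4 filesystem would include "has_journal" in this list.
if printf '%s\n' "$features" | grep -qw 'has_journal'; then
    echo "journal present"
else
    echo "no journal"    # this branch is taken for the line above
fi
```

Without a journal, the kernel clears the "clean" flag in the superblock while the filesystem is mounted read-write, which would explain the clean/not-clean flip-flop observed above.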

Anyway, the behavior above is consistent with your hypothesis of /dev/sda1 not being properly unmounted at shutdown time. Since the "filesystem check on every boot" symptom started appearing on Quantal, there may be a problem with shutdown in Quantal.

Francisco Reverbel (reverbel) said : #12

It turns out that the bug had already been filed:

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1073433

The bug description ("Ext4 corruption...") is over the top. The comments in the bug report make it clear that "the repairs reported by fsck are not caused by corruption, but are harmless and purely cosmetic fixes." There is also a workaround, which worked perfectly for me: unchecking "Enable Networking" before shutting the system down.