FireWire hard drive is not recognized in Ubuntu 10.04 beta 1

Asked by Screatch

As the title says, my external NTFS hard drive is not recognized by Ubuntu 10.04 when connected via FireWire; when connected via USB 2.0, everything works perfectly.

I started GParted to look for my external hard drive.
It is not visible there either; however, when I launch GParted from a terminal, I see the error
"Error opening /dev/sdc: No such device or address."
/dev/sdc is supposed to be my external hard drive.

Neither fdisk -l nor lsusb even shows that it is connected.
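
(I realise lsusb only lists USB devices, so it probably would not show a FireWire disk even when everything works.) A rough sketch of what I can check on the FireWire side instead, with illustrative grep patterns:

lspci | grep -i -e firewire -e 1394      # is the FireWire controller visible on the PCI bus?
dmesg | grep -i -e ieee1394 -e firewire  # what does the kernel's FireWire stack report?
sudo fdisk -l                            # which block devices has the kernel actually registered?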

Of course I can use the drive via USB, but FireWire is faster and I prefer it. I didn't experience this problem in Karmic Koala, so I suppose it has something to do with Lucid Lynx.

I would greatly appreciate any solution to this problem.
Thank you in advance.

Question information

Language: English
Status: Answered
For: Ubuntu
Assignee: No assignee
actionparsnip (andrew-woodhead666) said (#1):

Can you boot without the device attached, then, once the system has settled, run:

sudo modprobe raw1394 dv1394

Wait 10 seconds, attach the device, wait another 10 seconds, then run:

dmesg | tail -n 20; lsmod
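
If the modules load cleanly, they should also show up in lsmod; a quick way to filter for them (the grep pattern is only a sketch):

lsmod | grep -e raw1394 -e dv1394 -e ieee1394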

Thanks

Screatch (screatch) said (#2):

I got a fatal error on the first one...

vitali@vitali-desktop:~$ sudo modprobe raw1394 dv1394
[sudo] password for vitali:
Sorry, try again.
[sudo] password for vitali:
FATAL: Error inserting raw1394 (/lib/modules/2.6.32-16-generic-pae/kernel/drivers/ieee1394/raw1394.ko): Unknown symbol in module, or unknown parameter (see dmesg)
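
The error message points at dmesg; I suppose something like this would pull out the module-load failure details (the grep pattern is just a guess):

dmesg | grep -i -e raw1394 -e 'unknown symbol'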

But in case you are still interested in the full dmesg output, here you go.

vitali@vitali-desktop:~$ dmesg | tail -n 20; lsmod
[ 116.772539] ieee1394: Node changed: 0-00:1023 -> 0-01:1023
[ 123.811442] ieee1394: Node added: ID:BUS[0-00:1023] GUID[002037020031a232]
[ 123.846379] scsi6 : SBP-2 IEEE-1394
[ 124.939491] ieee1394: sbp2: Logged into SBP-2 device
[ 124.941623] ieee1394: sbp2: Node 0-00:1023: Max speed [S400] - Max payload [2048]
[ 125.152432] scsi 6:0:0:0: Direct-Access Seagate FreeAgent XTreme 4113 PQ: 0 ANSI: 4
[ 125.152636] sd 6:0:0:0: Attached scsi generic sg3 type 0
[ 125.159489] sd 6:0:0:0: [sdc] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
[ 125.164966] sd 6:0:0:0: [sdc] Write Protect is off
[ 125.164971] sd 6:0:0:0: [sdc] Mode Sense: 1c 00 00 00
[ 125.168065] sd 6:0:0:0: [sdc] Cache data unavailable
[ 125.168068] sd 6:0:0:0: [sdc] Assuming drive cache: write through
[ 125.182606] sd 6:0:0:0: [sdc] Cache data unavailable
[ 125.182610] sd 6:0:0:0: [sdc] Assuming drive cache: write through
[ 125.182614] sdc: sdc1
[ 125.288695] sd 6:0:0:0: [sdc] Cache data unavailable
[ 125.288703] sd 6:0:0:0: [sdc] Assuming drive cache: write through
[ 125.288708] sd 6:0:0:0: [sdc] Attached SCSI disk
[ 132.816044] ieee1394: sbp2: aborting sbp2 command
[ 132.816050] sd 6:0:0:0: [sdc] CDB: ATA command pass through(16): 85 08 2e 00 00 00 00 00 00 00 00 00 00 40 ec 00
Module Size Used by
sbp2 20216 1
binfmt_misc 6587 1
snd_hda_codec_idt 51882 1
snd_hda_intel 21813 2
snd_hda_codec 74201 2 snd_hda_codec_idt,snd_hda_intel
snd_hwdep 5412 1 snd_hda_codec
snd_pcm_oss 35244 0
snd_mixer_oss 13746 1 snd_pcm_oss
snd_pcm 71014 3 snd_hda_intel,snd_hda_codec,snd_pcm_oss
snd_seq_dummy 1338 0
snd_seq_oss 26726 0
snd_seq_midi 4557 0
snd_rawmidi 19024 1 snd_seq_midi
snd_seq_midi_event 6003 2 snd_seq_oss,snd_seq_midi
snd_seq 47231 6 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_seq_midi_event
snd_timer 19066 2 snd_pcm,snd_seq
snd_seq_device 5700 5 snd_seq_dummy,snd_seq_oss,snd_seq_midi,snd_rawmidi,snd_seq
nvidia 9933456 38
fbcon 35102 71
tileblit 2031 1 fbcon
font 7557 1 fbcon
bitblit 4675 1 fbcon
ppdev 5259 0
snd 54116 16 snd_hda_codec_idt,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_seq_oss,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
softcursor 1189 1 bitblit
psmouse 62957 0
serio_raw 3978 0
intel_agp 24413 0
lp 7028 0
parport_pc 26378 1
soundcore 6620 1 snd
snd_page_alloc 7268 2 snd_hda_intel,snd_pcm
vga16fb 11321 1
vgastate 8961 1 vga16fb
agpgart 31820 2 nvidia,intel_agp
parport 32603 3 ppdev,lp,parport_pc
ohci1394 27430 1
usbhid 36046 0
hid 67000 1 usbhid
ieee1394 81309 2 sbp2,ohci1394
e1000e 120720 0
=======================
My hard drive is a Seagate FreeAgent XTreme.
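
Since the dmesg output above shows sdc1 being registered just before the aborted command, would it make sense to try mounting the partition by hand? Something along these lines, assuming ntfs-3g is installed (the mount point is just an example):

sudo mkdir -p /mnt/xtreme
sudo mount -t ntfs-3g /dev/sdc1 /mnt/xtreme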

Huygens (huygens-25) said (#3):

I think this is a bug. Lucid still seems to be using the old FireWire stack, and it does not seem to play nicely with the latest kernel.
I also have a FireWire hard drive at home, and with Lucid I cannot mount it because it is not recognised by the system:
[ 721.380350] ieee1394: The root node is not cycle master capable; selecting a new root node and resetting...
[ 721.703926] ieee1394: Node added: ID:BUS[0-00:1023] GUID[0090a991e0107124]
[ 721.704034] ieee1394: Node changed: 0-00:1023 -> 0-01:1023
[ 721.740525] scsi5 : SBP-2 IEEE-1394
[ 723.062713] ieee1394: sbp2: Logged into SBP-2 device
[ 723.063601] ieee1394: sbp2: Node 0-00:1023: Max speed [S400] - Max payload [2048]
[ 744.040223] ieee1394: sbp2: aborting sbp2 command
[ 744.040236] scsi 5:0:1:0: CDB: Inquiry: 12 00 00 00 24 00
[ 754.040203] ieee1394: sbp2: aborting sbp2 command
[ 754.040214] scsi 5:0:1:0: CDB: Test Unit Ready: 00 00 00 00 00 00
[ 754.040823] ieee1394: sbp2: reset requested
[ 754.040828] ieee1394: sbp2: generating sbp2 fetch agent reset
[ 764.040233] ieee1394: sbp2: aborting sbp2 command
[ 764.040242] scsi 5:0:1:0: CDB: Test Unit Ready: 00 00 00 00 00 00
[ 764.040869] scsi 5:0:1:0: Device offlined - not ready after error recovery
[ 764.040943] ieee1394: sbp2: scsi_add_device failed

In addition, one can clearly see that Lucid is still using the old driver stack:
Module Size Used by
sbp2 23299 0
ohci1394 30548 0
ieee1394 94894 2 sbp2,ohci1394

(the new stack consists of firewire_sbp2, firewire_ohci and firewire_core)
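
For anyone wanting to try the switch themselves, a rough sketch of what it involves, following the Juju migration guide (the file name is just an example; the module names are the ones listed above):

# contents of /etc/modprobe.d/blacklist-old-firewire.conf (example file name)
blacklist ieee1394
blacklist ohci1394
blacklist sbp2

followed by a reboot, or loading the new modules by hand:

sudo modprobe firewire-ohci firewire-sbp2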

Huygens (huygens-25) said (#4):

I have tried switching to the new stack following the Juju migration guide (http://ieee1394.wiki.kernel.org/index.php/Juju_Migration); however, it does not work any better.

I can see in lsmod that the new FireWire stack is loaded:
Module Size Used by
firewire_sbp2 14977 0
firewire_ohci 25375 0
firewire_core 51537 2 firewire_sbp2,firewire_ohci
crc_itu_t 1715 1 firewire_core
ohci1394 30548 0
ieee1394 94894 1 ohci1394

However, when plugging in the FireWire hard disk, it is still the old ieee1394 module that responds:
[ 184.830414] ieee1394: The root node is not cycle master capable; selecting a new root node and resetting...
[ 185.154117] ieee1394: Node added: ID:BUS[0-00:1023] GUID[0090a991e0107124]
[ 185.154225] ieee1394: Node changed: 0-00:1023 -> 0-01:1023

But it does not do anything (perhaps because I have blacklisted it, as requested in the Juju migration guide).
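
One way to check which driver actually owns the controller would be something like this (just a sketch; the exact lspci output varies by system):

lspci -k | grep -i -A 3 firewire    # the 'Kernel driver in use:' line should name ohci1394 or firewire_ohci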

This seems to be related to the FireWire stack migration problem that was already present in Karmic: Bug #529524 (https://bugs.launchpad.net/bugs/529524).

However, this is even worse than in Karmic, because the old stack no longer works at all. I will convert this question into a bug report.

Michael Lustfield (michaellustfield) said (#5):

For anyone else interested: Further information should go into the bug report listed above.
