Could not get qemu trace events in /var/log/libvirt/qemu/

Asked by krishnamraju eraparaju

I am debugging a migration issue (migration fails with the message: "load of migration failed: Input/output error").

I could not find any log message in the /var/log/libvirt/qemu/ directory other than the "initiating migration" message.

I added the following two lines to /etc/libvirt/libvirtd.conf and restarted libvirtd:
               log_filters="1:qemu 1:libvirt 1:util"
               log_outputs="1:file:/var/log/libvirt/libvirtd.log"

Source side libvirt logs:
————————————
2022-09-17 18:48:02.620+0000: 19373: info : qemuMonitorJSONIOProcessLine:235 : QEMU_MONITOR_RECV_EVENT: mon=0x7f03f40030e0 event={"timestamp": {"seconds": 1663440482, "microseconds": 619909}, "event": "MIGRATION", "data": {"status": "setup"}}
2022-09-17 18:48:02.764+0000: 19373: info : qemuMonitorJSONIOProcessLine:235 : QEMU_MONITOR_RECV_EVENT: mon=0x7f03f40030e0 event={"timestamp": {"seconds": 1663440482, "microseconds": 763740}, "event": "MIGRATION_PASS", "data": {"pass": 1}}
2022-09-17 18:48:02.764+0000: 19373: info : qemuMonitorJSONIOProcessLine:235 : QEMU_MONITOR_RECV_EVENT: mon=0x7f03f40030e0 event={"timestamp": {"seconds": 1663440482, "microseconds": 763889}, "event": "MIGRATION", "data": {"status": "active"}}
2022-09-17 18:48:12.863+0000: 19373: info : qemuMonitorJSONIOProcessLine:235 : QEMU_MONITOR_RECV_EVENT: mon=0x7f03f40030e0 event={"timestamp": {"seconds": 1663440492, "microseconds": 863038}, "event": "MIGRATION_PASS", "data": {"pass": 2}}
2022-09-17 18:48:12.865+0000: 19373: info : qemuMonitorJSONIOProcessLine:235 : QEMU_MONITOR_RECV_EVENT: mon=0x7f03f40030e0 event={"timestamp": {"seconds": 1663440492, "microseconds": 865130}, "event": "STOP"}
2022-09-17 18:48:12.866+0000: 19373: info : qemuMonitorJSONIOProcessLine:235 : QEMU_MONITOR_RECV_EVENT: mon=0x7f03f40030e0 event={"timestamp": {"seconds": 1663440492, "microseconds": 865800}, "event": "MIGRATION", "data": {"status": "failed"}}
2022-09-17 18:48:12.867+0000: 19373: info : qemuMonitorJSONIOProcessLine:235 : QEMU_MONITOR_RECV_EVENT: mon=0x7f03f40030e0 event={"timestamp": {"seconds": 1663440492, "microseconds": 865862}, "event": "RESUME"}

Target side libvirt logs:
————————————
2022-09-17 18:48:02.563+0000: 28937: info : qemuMonitorJSONIOProcessLine:235 : QEMU_MONITOR_RECV_EVENT: mon=0x7f827003fe40 event={"timestamp": {"seconds": 1663440482, "microseconds": 562521}, "event": "MIGRATION", "data": {"status": "setup"}}
2022-09-17 18:48:02.643+0000: 28937: info : qemuMonitorJSONIOProcessLine:235 : QEMU_MONITOR_RECV_EVENT: mon=0x7f827003fe40 event={"timestamp": {"seconds": 1663440482, "microseconds": 643001}, "event": "MIGRATION", "data": {"status": "active"}}
2022-09-17 18:48:12.948+0000: 28937: info : qemuMonitorJSONIOProcessLine:235 : QEMU_MONITOR_RECV_EVENT: mon=0x7f827003fe40 event={"timestamp": {"seconds": 1663440492, "microseconds": 947620}, "event": "MIGRATION", "data": {"status": "failed"}}
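(Side note for anyone reading these excerpts: the JSON payload after `event=` on each QEMU_MONITOR_RECV_EVENT line can be pulled out mechanically. A minimal Python sketch, assuming the line format shown above; the two sample lines are copied from the source-side log:)

```python
import json
import re

# Two sample lines copied from the libvirtd.log excerpts above.
lines = [
    '2022-09-17 18:48:02.620+0000: 19373: info : qemuMonitorJSONIOProcessLine:235 : '
    'QEMU_MONITOR_RECV_EVENT: mon=0x7f03f40030e0 event={"timestamp": {"seconds": 1663440482, '
    '"microseconds": 619909}, "event": "MIGRATION", "data": {"status": "setup"}}',
    '2022-09-17 18:48:12.866+0000: 19373: info : qemuMonitorJSONIOProcessLine:235 : '
    'QEMU_MONITOR_RECV_EVENT: mon=0x7f03f40030e0 event={"timestamp": {"seconds": 1663440492, '
    '"microseconds": 865800}, "event": "MIGRATION", "data": {"status": "failed"}}',
]

def migration_statuses(lines):
    """Return (unix_seconds, status) pairs for each MIGRATION event line."""
    out = []
    for line in lines:
        # The event payload is the JSON object following "event=" at end of line.
        m = re.search(r'event=(\{.*\})\s*$', line)
        if not m:
            continue
        ev = json.loads(m.group(1))
        if ev.get("event") == "MIGRATION":
            out.append((ev["timestamp"]["seconds"], ev["data"]["status"]))
    return out

print(migration_statuses(lines))
# [(1663440482, 'setup'), (1663440492, 'failed')]
```

Running this over a full log makes the status timeline (setup → active → failed) easy to see at a glance, though as noted below it does not explain *why* the migration failed.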

I could not debug further; libvirtd.log just says "failed", with nothing else on why it failed.
QEMU logs only a single message ("initiating migration"), even though I enabled all the QEMU trace events.

Please help me debug this issue.

Thanks.

Question information

Language: English
Status: Expired
For: Ubuntu libvirt
Assignee: No assignee
Manfred Hampl (m-hampl) said: #1
krishnamraju eraparaju (krishna8ibm) said: #2

Manfred, Thanks for providing the link to the troubleshooting document.

I noticed the following I/O errors on the VM:
[ 1.884465] XFS (dm-0): Mounting V5 Filesystem
[ 1.923753] blk_update_request: I/O error, dev vda, sector 107141655
[ 1.924982] blk_update_request: I/O error, dev vda, sector 107142663
[ 1.926155] blk_update_request: I/O error, dev vda, sector 107143671
[ 1.930234] blk_update_request: I/O error, dev vda, sector 107144679
[ 1.931338] blk_update_request: I/O error, dev vda, sector 107145687
[ 1.932403] XFS (dm-0): xfs_do_force_shutdown(0x1) called from line 1251 of file fs/xfs/xfs_buf.c. Return address = ffffffffc03631a1
[ 1.932407] XFS (dm-0): I/O Error Detected. Shutting down filesystem
[ 1.933433] XFS (dm-0): Please umount the filesystem and rectify the problem(s)
[ 1.934560] XFS (dm-0): metadata I/O error in "xlog_bwrite" at daddr 0x61faa17 len 8192 error 5
[ 1.935998] XFS (dm-0): failed to locate log tail
[ 1.935999] XFS (dm-0): log mount/recovery failed: error -5
[ 1.936062] XFS (dm-0): log mount failed
[ 1.999366] tsc: Refined TSC clocksource calibration: 2593.922 MHz
[14207154.728704] XFS (dm-0): Mounting V5 Filesystem
[14207154.769608] blk_update_request: I/O error, dev vda, sector 107141655
[14207154.771481] blk_update_request: I/O error, dev vda, sector 107142663
[14207154.773104] blk_update_request: I/O error, dev vda, sector 107143671
[14207154.774761] blk_update_request: I/O error, dev vda, sector 107144679
[14207154.776345] blk_update_request: I/O error, dev vda, sector 107145687
[14207154.777955] XFS (dm-0): xfs_do_force_shutdown(0x1) called from line 1251 of file fs/xfs/xfs_buf.c. Return address = ffffffffc03631a1
[14207154.777959] XFS (dm-0): I/O Error Detected. Shutting down filesystem
[14207154.779574] XFS (dm-0): Please umount the filesystem and rectify the problem(s)
[14207154.781369] XFS (dm-0): metadata I/O error in "xlog_bwrite" at daddr 0x61faa17 len 8192 error 5
[14207154.783643] XFS (dm-0): failed to locate log tail
[14207154.783649] XFS (dm-0): log mount/recovery failed: error -5
[14207154.783773] XFS (dm-0): log mount failed
[14210145.019464] random: crng init done

It looks like a bad storage device is causing this issue.
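(For context on where those errors land in the backing storage: the kernel's blk_update_request messages report sector numbers in 512-byte units, so the byte offsets can be computed directly. A small sketch; the sector numbers are taken from the dmesg output above:)

```python
# blk_update_request reports sector numbers in 512-byte units,
# regardless of the device's physical sector size.
SECTOR_SIZE = 512

# Failing sectors from the dmesg output above.
sectors = [107141655, 107142663, 107143671, 107144679, 107145687]

for s in sectors:
    off = s * SECTOR_SIZE
    print(f"sector {s} -> byte offset {off} ({off / 2**30:.2f} GiB)")

# The first failing sector sits at byte offset 54856527360 (~51.09 GiB
# into the device), and all five errors fall within about 2 MiB of each
# other -- consistent with one localized bad region on the backing storage.
```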

But I still do not understand why trace events are not being logged in the /var/log/libvirt/qemu/ directory; I could only see a single trace message, namely "initiating migration". On other hosts I apply the same settings in /etc/libvirt/libvirtd.conf (i.e., log_filters="1:qemu 1:libvirt 1:util") and traces do get logged in /var/log/libvirt/qemu/.

Launchpad Janitor (janitor) said: #3

This question was expired because it remained in the 'Open' state without activity for the last 15 days.