I have a Xen guest running RHEL6 with a LUN presented from the Dom0. The LUN contains an LVM volume group called vg_ALHINT (ALH is an abbreviation of the Oracle database name and INT stands for Integration). The data is an Oracle 11g database. The VG was imported and activated, and udev created the maps for each logical volume.
However, device mapper did not create a mapping for one of the logical volumes (LV); for the LV in question it created /dev/dm-2 with a different major:minor number from the rest of the LVs.
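(To be explicit about the import step, the sequence is roughly this, reconstructed from my script, so treat the exact commands as an approximation:)
vgimport vg_ALHINT       # the VG was exported before the snapshot was taken
vgchange -ay vg_ALHINT   # activate all LVs in the VG
udevadm settle           # wait for udev to create the /dev/mapper links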
# dmsetup table
vg_ALHINT-arch: 0 4300800 linear 202:16 46139392
vg0-lv6: 0 20971520 linear 202:2 30869504
vg_ALHINT-safeset2: 0 4194304 linear 202:16 35653632
vg0-lv5: 0 2097152 linear 202:2 28772352
vg_ALHINT-safeset1: 0 4186112 linear 202:16 54528000
vg0-lv4: 0 524288 linear 202:2 28248064
vg0-lv3: 0 4194304 linear 202:2 24053760
vg_ALHINT-oradata: **
vg0-lv2: 0 4194304 linear 202:2 19859456
vg0-lv1: 0 2097152 linear 202:2 17762304
vg0-lv0: 0 17760256 linear 202:2 2048
vg_ALHINT-admin: 0 4194304 linear 202:16 41945088
** You can see above that the table for vg_ALHINT-oradata is empty.
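(Annotation for readers: each line of the dmsetup table output above follows the linear target's layout,
<logical start sector> <length in sectors> linear <backing device major:minor> <offset on backing device>
so vg_ALHINT-arch, for example, maps 4300800 sectors onto xvdb (202:16) starting at sector 46139392.)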
# ls -l /dev/mapper/
total 0
crw-rw---- 1 root root 10, 58 Apr 3 13:43 control
lrwxrwxrwx 1 root root 7 Apr 3 13:43 vg0-lv0 -> ../dm-0
lrwxrwxrwx 1 root root 7 Apr 3 13:43 vg0-lv1 -> ../dm-1
lrwxrwxrwx 1 root root 7 Apr 3 14:35 vg0-lv2 -> ../dm-2
lrwxrwxrwx 1 root root 7 Apr 3 13:43 vg0-lv3 -> ../dm-3
lrwxrwxrwx 1 root root 7 Apr 3 13:43 vg0-lv4 -> ../dm-4
lrwxrwxrwx 1 root root 7 Apr 3 13:43 vg0-lv5 -> ../dm-5
lrwxrwxrwx 1 root root 7 Apr 3 13:43 vg0-lv6 -> ../dm-6
lrwxrwxrwx 1 root root 7 Apr 3 13:59 vg_ALHINT-admin -> ../dm-8
lrwxrwxrwx 1 root root 7 Apr 3 13:59 vg_ALHINT-arch -> ../dm-9
brw-rw---- 1 root disk 253, 7 Apr 3 14:37 vg_ALHINT-oradata
lrwxrwxrwx 1 root root 8 Apr 3 13:59 vg_ALHINT-safeset1 -> ../dm-10
lrwxrwxrwx 1 root root 8 Apr 3 13:59 vg_ALHINT-safeset2 -> ../dm-11
vg_ALHINT-oradata was not created until I ran dmsetup mknodes.
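(For anyone reproducing this: dmsetup mknodes rebuilds /dev/mapper nodes that udev failed to create; run with no argument it covers every dm device.)
dmsetup mknodes                       # recreate any missing /dev/mapper nodes
ls -l /dev/mapper/vg_ALHINT-oradata   # now a plain block node rather than a udev symlink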
# cat /proc/partitions
major minor #blocks name
202 0 26214400 xvda
202 1 262144 xvda1
202 2 25951232 xvda2
253 0 8880128 dm-0
253 1 1048576 dm-1
253 2 2097152 dm-2
253 3 2097152 dm-3
253 4 262144 dm-4
253 5 1048576 dm-5
253 6 10485760 dm-6
202 16 29360128 xvdb
253 8 2097152 dm-8
253 9 2150400 dm-9
253 10 2093056 dm-10
253 11 2097152 dm-11
dm-7 would have been vg_ALHINT-oradata, and it is missing. I ran dmsetup mknodes and /dev/dm-7 was created, yet it is still absent from /proc/partitions.
# ls -l /dev/dm-7
brw-rw---- 1 root disk 253, 7 Apr 3 13:59 /dev/dm-7
Its major and minor numbers are 253:7, yet the backing devices referenced by the tables of the other LVs in its VG are 202:nn.
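(Annotation, not from the original output: 253 is the kernel's dynamically assigned device-mapper major and 202 is the Xen virtual block (xvd) major, so the 202:nn pairs in the tables refer to the backing PV rather than to sibling LVs. Both can be cross-checked with:)
grep device-mapper /proc/devices   # expect: 253 device-mapper
ls -l /dev/xvdb                    # expect major 202
dmsetup ls                         # the kernel's dm device list with device numbers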
lvs tells me this LV is suspended (the 's' in the Attr field):
# lvs
Logging initialised at Thu Apr 3 14:44:19 2014
Set umask from 0022 to 0077
Finding all logical volumes
LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
lv0 vg0 -wi-ao---- 8.47g
lv1 vg0 -wi-ao---- 1.00g
lv2 vg0 -wi-ao---- 2.00g
lv3 vg0 -wi-ao---- 2.00g
lv4 vg0 -wi-ao---- 256.00m
lv5 vg0 -wi-ao---- 1.00g
lv6 vg0 -wi-ao---- 10.00g
admin vg_ALHINT -wi-a----- 2.00g
arch vg_ALHINT -wi-a----- 2.05g
oradata vg_ALHINT -wi-s----- 39.95g
safeset1 vg_ALHINT -wi-a----- 2.00g
safeset2 vg_ALHINT -wi-a----- 2.00g
Wiping internal VG cache
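(The suspended state can also be confirmed outside lvm; dmsetup info reports the device State and whether LIVE/INACTIVE tables are present:)
dmsetup info vg_ALHINT-oradata   # expect State: SUSPENDED with no live table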
The disk was created from a snapshot of our production database's LUN. Oracle was shut down and the VG had been exported prior to the snapshot. I should note that I perform this same task for hundreds of databases weekly via a script. Because this was a snapshot, I have the device-mapper table from the original, and I used it to try to recreate the missing table:
0 35651584 linear 202:16 2048
35651584 4087808 linear 202:16 50440192
39739392 2097152 linear 202:16 39847936
41836544 41943040 linear 202:16 58714112
After suspending the device with dmsetup suspend /dev/dm-7, I ran dmsetup load /dev/dm-7 table.txt. Next I tried to resume the device:
# dmsetup resume /dev/dm-7
device-mapper: resume ioctl on vg_ALHINT-oradata failed: Invalid argument
Command failed
#
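(When resume fails with EINVAL like this, the real reason usually lands in the kernel log, and the staged table can be inspected before it goes live; the --inactive flag is a newer dmsetup option, so it may not exist on every EL6 build:)
dmesg | tail -n 5                            # the device-mapper message explaining the EINVAL
dmsetup table --inactive vg_ALHINT-oradata   # show the loaded-but-not-live table, where supported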
Any ideas? I'm really lost. (Yes, I've rebooted and re-snapshotted this many times and always hit the same problem. I've even reinstalled the server and run yum update.)
//EDIT
I forgot to add that this is the original dmsetup table from our production environment; I tried to load the oradata layout onto our integration server as noted above.
# dmsetup table
vg_ALHPRD-safeset2: 0 4194304 linear 202:32 35653632
vg_ALHPRD-safeset1: 0 4186112 linear 202:32 54528000
vg_ALHPRD-oradata: 0 35651584 linear 202:32 2048
vg_ALHPRD-oradata: 35651584 4087808 linear 202:32 50440192
vg_ALHPRD-oradata: 39739392 2097152 linear 202:32 39847936
vg_ALHPRD-oradata: 41836544 41943040 linear 202:32 58714112
vg_ALHPRD-admin: 0 4194304 linear 202:32 41945088
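(Spelling out what loading the layout meant in practice; this is a sketch, and the sed rewrite of 202:32 to 202:16 is my reconstruction to account for the different PV device number on the integration guest:)
dmsetup table vg_ALHPRD-oradata > oradata.table    # capture the layout on production
sed 's/202:32/202:16/' oradata.table > table.txt   # re-point the segments at the integration PV
dmsetup suspend vg_ALHINT-oradata
dmsetup load vg_ALHINT-oradata table.txt           # stage the table
dmsetup resume vg_ALHINT-oradata                   # fails as shown above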
//EDIT
I ran vgscan --mknodes and got:
The link /dev/vg_ALHINT/oradata should have been created by udev but it was not found. Falling back to direct link creation.
# ls -l /dev/vg_ALHINT/oradata
lrwxrwxrwx 1 root root 29 Apr 3 14:50 /dev/vg_ALHINT/oradata -> /dev/mapper/vg_ALHINT-oradata
Still cannot activate this LV; the attempt fails with:
device-mapper: resume ioctl on vg_ALHINT-oradata failed: Invalid argument
Unable to resume vg_ALHINT-oradata (253:7)
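(For completeness, the activation attempt that produces this is a plain vgchange; with maximum verbosity lvm prints the failing resume ioctl:)
vgchange -ay vg_ALHINT
vgchange -vvvv -ay vg_ALHINT   # same attempt with full debug output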
//EDIT
I also see hung-task stack traces and device-mapper table errors in /var/log/messages:
Apr 3 13:58:09 iui-alhdb01 kernel: blkfront: xvdb: barriers disabled
Apr 3 13:58:09 iui-alhdb01 kernel: xvdb: unknown partition table
Apr 3 13:59:35 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 14:02:31 iui-alhdb01 ntpd[1093]: 0.0.0.0 c612 02 freq_set kernel 5.242 PPM
Apr 3 14:02:31 iui-alhdb01 ntpd[1093]: 0.0.0.0 c615 05 clock_sync
Apr 3 14:30:13 iui-alhdb01 kernel: device-mapper: table: 253:2: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 14:33:34 iui-alhdb01 kernel: INFO: task vi:1394 blocked for more than 120 seconds.
Apr 3 14:33:34 iui-alhdb01 kernel: Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr 3 14:33:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 3 14:33:34 iui-alhdb01 kernel: vi D 0000000000000000 0 1394 1271 0x00000084
Apr 3 14:33:34 iui-alhdb01 kernel: ffff88007aef19b8 0000000000000082 ffff88007aef1978 ffffffffa000443c
Apr 3 14:33:34 iui-alhdb01 kernel: ffff88007d208d80 ffff880037cabc08 ffff880037cda0c8 ffff8800022168a8
Apr 3 14:33:34 iui-alhdb01 kernel: ffff880037da45f8 ffff88007aef1fd8 000000000000fbc8 ffff880037da45f8
Apr 3 14:33:34 iui-alhdb01 kernel: Call Trace:
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf230>] sync_buffer+0x40/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8152918f>] __wait_on_bit+0x5f/0x90
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1f0>] ? sync_buffer+0x0/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff81529238>] out_of_line_wait_on_bit+0x78/0x90
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8109b310>] ? wake_bit_function+0x0/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811bf1e6>] __wait_on_buffer+0x26/0x30
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffffa0085875>] __ext4_get_inode_loc+0x1e5/0x3b0 [ext4]
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffffa0088006>] ext4_iget+0x86/0x7d0 [ext4]
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffffa008ec35>] ext4_lookup+0xa5/0x140 [ext4]
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff81198b05>] do_lookup+0x1a5/0x230
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff81198e90>] __link_path_walk+0x200/0xff0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8114a667>] ? handle_pte_fault+0xf7/0xb00
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811a3c6a>] ? dput+0x9a/0x150
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff81199f3a>] path_walk+0x6a/0xe0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119a14b>] filename_lookup+0x6b/0xc0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119b277>] user_path_at+0x57/0xa0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8104a98c>] ? __do_page_fault+0x1ec/0x480
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8119707b>] ? putname+0x2b/0x40
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118eac0>] vfs_fstatat+0x50/0xa0
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff811c4645>] ? nr_blockdev_pages+0x15/0x70
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8115c4ad>] ? si_swapinfo+0x1d/0x90
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118ec3b>] vfs_stat+0x1b/0x20
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8118ec64>] sys_newstat+0x24/0x50
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff810e2057>] ? audit_syscall_entry+0x1d7/0x200
Apr 3 14:33:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
(The identical hung-task trace for vi:1394 repeats at 14:35:34, followed by this one for vgdisplay:)
Apr 3 14:35:34 iui-alhdb01 kernel: INFO: task vgdisplay:1437 blocked for more than 120 seconds.
Apr 3 14:35:34 iui-alhdb01 kernel: Not tainted 2.6.32-431.11.2.el6.x86_64 #1
Apr 3 14:35:34 iui-alhdb01 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Apr 3 14:35:34 iui-alhdb01 kernel: vgdisplay D 0000000000000000 0 1437 1423 0x00000080
Apr 3 14:35:34 iui-alhdb01 kernel: ffff88007da35a18 0000000000000086 ffff88007da359d8 ffffffffa000443c
Apr 3 14:35:34 iui-alhdb01 kernel: 000000000007fff0 0000000000010000 ffff88007da359d8 ffff88007d24d380
Apr 3 14:35:34 iui-alhdb01 kernel: ffff880037c8c5f8 ffff88007da35fd8 000000000000fbc8 ffff880037c8c5f8
Apr 3 14:35:34 iui-alhdb01 kernel: Call Trace:
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffffa000443c>] ? dm_table_unplug_all+0x5c/0x100 [dm_mod]
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff810a7091>] ? ktime_get_ts+0xb1/0xf0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff815286c3>] io_schedule+0x73/0xc0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c8a9d>] __blockdev_direct_IO_newtrunc+0xb7d/0x1270
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c9207>] __blockdev_direct_IO+0x77/0xe0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5487>] blkdev_direct_IO+0x57/0x60
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4400>] ? blkdev_get_block+0x0/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811217bb>] generic_file_aio_read+0x6bb/0x700
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5fd0>] ? blkdev_get+0x10/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c5fe0>] ? blkdev_open+0x0/0xc0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8118617f>] ? __dentry_open+0x23f/0x360
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c4841>] blkdev_aio_read+0x51/0x80
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81188e8a>] do_sync_read+0xfa/0x140
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff810ec3f6>] ? rcu_process_dyntick+0xd6/0x120
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8109b290>] ? autoremove_wake_function+0x0/0x40
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811c479c>] ? block_ioctl+0x3c/0x40
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119dc12>] ? vfs_ioctl+0x22/0xa0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8119ddb4>] ? do_vfs_ioctl+0x84/0x580
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81226496>] ? security_file_permission+0x16/0x20
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff81189775>] vfs_read+0xb5/0x1a0
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff811898b1>] sys_read+0x51/0x90
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff810e1e4e>] ? __audit_syscall_exit+0x25e/0x290
Apr 3 14:35:34 iui-alhdb01 kernel: [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Apr 3 14:39:19 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 14:53:57 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 15:02:42 iui-alhdb01 yum[1544]: Installed: sos-2.2-47.el6.noarch
Apr 3 15:52:29 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
Apr 3 15:59:08 iui-alhdb01 kernel: device-mapper: table: 253:7: xvdb too small for target: start=58714112, len=41943040, dev_size=58720256
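(Doing the arithmetic on that repeated kernel message: the final segment of the table I loaded starts at sector 58714112 on xvdb and runs for 41943040 sectors, so it needs a device of at least 100657152 sectors, while dev_size is only 58720256, matching the 29360128 one-KiB blocks reported for xvdb in /proc/partitions:)
echo $((58714112 + 41943040))   # 100657152 sectors required by the last segment
echo $((29360128 * 2))          # 58720256 sectors actually on xvdb (2 sectors per KiB)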