It looks like multipath on the hypervisor refuses to update its map when a LUN's size changes.

This LUN was originally 28GB and was later grown to 48GB on the storage array.

The VG information says it's 48G, and the disc really is 48G, but multipath won't update and still thinks it's 28G.
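
One way to see where the stale size actually lives is to compare what the kernel reports for an individual path with what the device-mapper table says for the multipath map. A quick sketch, using sdt (one of the paths in the output below) and the map name purely as examples:

# blockdev --getsize64 /dev/sdt
# dmsetup table 350002acf962421ba

If the path already reports the new size in bytes but the second field of the dmsetup table (the map length in 512-byte sectors) still corresponds to 28G, the stale value is purely in the multipath/device-mapper layer.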

Multipath clinging to 28G:

# multipath -l 350002acf962421ba
350002acf962421ba dm-17 3PARdata,VV
size=28G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 8:0:0:22   sdt   65:48    active undef running
  |- 10:0:0:22  sdbh  67:176   active undef running
  |- 7:0:0:22   sddq  71:128   active undef running
  |- 9:0:0:22   sdfb  129:208  active undef running
  |- 8:0:1:22   sdmz  70:432   active undef running
  |- 7:0:1:22   sdoj  128:496  active undef running
  |- 10:0:1:22  sdop  129:336  active undef running
  |- 9:0:1:22   sdqm  132:352  active undef running
  |- 7:0:2:22   sdxh  71:624   active undef running
  |- 8:0:2:22   sdzy  131:704  active undef running
  |- 10:0:2:22  sdaab 131:752  active undef running
  |- 9:0:2:22   sdaed 66:912   active undef running
  |- 7:0:3:22   sdakm 132:992  active undef running
  |- 10:0:3:22  sdall 134:880  active undef running
  |- 8:0:3:22   sdamx 8:1232   active undef running
  `- 9:0:3:22   sdaqa 69:1248  active undef running

Real disc size on storage:

# showvv ALHIDB_SNP_001
                                                                          -Rsvd(MB)-- -(MB)-
  Id Name           Prov Type  CopyOf            BsId Rd -Detailed_State- Adm Snp Usr  VSize
4098 ALHIDB_SNP_001 snp  vcopy ALHIDB_SNP_001.ro 5650 RW normal            --  --  --  49152
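
(The VSize column from showvv is in MB, so the array-side arithmetic checks out: 49152 MB / 1024 = 48 GiB, matching the size the volume was grown to.)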

Just to be sure I have the right disc:

# showvlun -showcols VVName,VV_WWN| grep -i  0002acf962421ba
ALHIDB_SNP_001          50002ACF962421BA 

And the VG thinks it's 48G:

  --- Volume group ---
  VG Name               vg_ALHINT
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  30
  VG Access             read/write
  VG Status             exported/resizable
  MAX LV                0
  Cur LV                5
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               48.00 GiB
  PE Size               4.00 MiB
  Total PE              12287
  Alloc PE / Size       12287 / 48.00 GiB
  Free  PE / Size       0 / 0
  VG UUID               qqZ9Vi-5Ob1-R6zb-YeWa-jDfg-9wc7-E2wsem
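
The extent arithmetic agrees: 12287 extents x 4 MiB = 49148 MiB, which vgdisplay rounds to 48.00 GiB (a little is lost to the LVM metadata area). To read the PV size straight off the multipath device, something like this should do it (a sketch; it assumes the PV sits directly on the map shown above):

# pvdisplay /dev/mapper/350002acf962421ba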

When I rescan the HBAs for new discs and reconfigure multipathing, the disc still shows 28G, so I tried this and it made no difference:

# multipathd -k'resize map 350002acf962421ba'
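
An HBA rescan only discovers new LUNs; it does not normally re-read the capacity of SCSI devices that already exist, which may be why the map resize never takes. The sequence that usually has to come first is a per-path rescan, so the kernel itself picks up the new size, and only then the map resize. A sketch of that, using the path names from the 28G map above (untested here):

# for dev in sdt sdbh sddq sdfb sdmz sdoj sdop sdqm sdxh sdzy sdaab sdaed sdakm sdall sdamx sdaqa; do echo 1 > /sys/block/$dev/device/rescan; done
# multipathd -k'resize map 350002acf962421ba'
# pvresize /dev/mapper/350002acf962421ba

The pvresize step would only be needed if the PV itself still showed the old size; in this case the VG already reports 48G.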

Because I cannot think of a better way around this for the moment, I suppose I could remove this disc and present another 48G one in its place, but I cannot see why this should be so problematic.

Versions:

lvm2-2.02.56-8.100.3.el5
device-mapper-multipath-libs-0.4.9-46.100.5.el5

Workaround

Because I could not find a solution I did the following. I did not mention earlier that I run OVM 3.2 on top of this, so part of the workaround involves OVM:

i) Shut down the guests on Xen via OVM.
ii) Remove the discs.
iii) Delete the LUNs from OVM.
iv) Unpresent the LUNs from the hypervisors.
v) Rescan storage in OVM.
vi) Wait for 30 minutes ;)
vii) Present the discs to the hypervisors with different LUN IDs.
viii) Rescan storage in OVM.

And now, fantastically, I see 48G discs:

# multipath -l 350002acf962421ba
350002acf962421ba dm-18 3PARdata,VV
size=48G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 9:0:0:127  sdt   65:48    active undef running
  |- 9:0:1:127  sdbh  67:176   active undef running
  |- 9:0:2:127  sddo  71:96    active undef running
  |- 9:0:3:127  sdfb  129:208  active undef running
  |- 10:0:3:127 sdmz  70:432   active undef running
  |- 10:0:0:127 sdoh  128:464  active undef running
  |- 10:0:1:127 sdop  129:336  active undef running
  |- 10:0:2:127 sdqm  132:352  active undef running
  |- 7:0:1:127  sdzu  131:640  active undef running
  |- 7:0:0:127  sdxh  71:624   active undef running
  |- 7:0:3:127  sdaed 66:912   active undef running
  |- 7:0:2:127  sdaab 131:752  active undef running
  |- 8:0:0:127  sdakm 132:992  active undef running
  |- 8:0:1:127  sdall 134:880  active undef running
  |- 8:0:2:127  sdamx 8:1232   active undef running
  `- 8:0:3:127  sdaqa 69:1248  active undef running
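
For completeness, a quick end-to-end check that the kernel, multipath and LVM all agree on the new size would look something like this (a sketch; the last command needs the VG to be imported and active):

# multipath -ll 350002acf962421ba
# blockdev --getsize64 /dev/mapper/350002acf962421ba
# vgdisplay vg_ALHINT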