Because I cannot think of any better way around this for the moment, I figure that if I remove this disc and present another 48G one in its place I'll at least have a usable disc, though I cannot see why this should be so problematic in the first place.
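For what it's worth, this is roughly what I would run on each hypervisor to get rid of the stale map cleanly before re-presenting anything. It's only a sketch: the WWID is just the example that shows up in my output further down (substitute whichever map is misbehaving), and the grep for the path names is deliberately crude:

# capture the underlying sdX paths of the stale map before flushing it
paths=$(multipath -ll 350002acf962421ba | grep -oE 'sd[a-z]+')
# remove the multipath map itself
multipath -f 350002acf962421ba
# flush and delete each SCSI path so the kernel forgets the old geometry
for p in $paths; do
    blockdev --flushbufs /dev/$p
    echo 1 > /sys/block/$p/device/delete
done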
Versions:
lvm2-2.02.56-8.100.3.el5
device-mapper-multipath-libs-0.4.9-46.100.5.el5
Workaround

Because I could not think of anything better, this is what I did. I did not mention earlier that I run OVM 3.2 on top of Xen, so part of the workaround goes through OVM:

i) Shut down the guests on Xen via OVM.
ii) Remove the discs from the guests.
iii) Delete the LUNs from OVM.
iv) Unpresent the LUNs from the hypervisors.
v) Rescan storage in OVM.
vi) Wait for 30 minutes ;)
vii) Present my discs to the hypervisors again with different LUN IDs.
viii) Rescan storage in OVM (a rough sketch of the host-side equivalent follows the list).
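Step viii) is driven from the OVM manager, but on the hypervisor itself it boils down to the usual SCSI rescan plus a multipath reload. If you would rather do it by hand, something like this should be equivalent (I'm assuming the plain sysfs rescan here, nothing OVM-specific):

# ask every SCSI/FC host to scan for newly presented LUNs
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > $host/scan
done
# reload the multipath maps and check what came back
multipath -r
multipath -ll | grep 3PARdata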
And now, fantastically, I see 48G discs:
# multipath -l 350002acf962421ba
350002acf962421ba dm-18 3PARdata,VV
size=48G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  |- 9:0:0:127 sdt 65:48 active undef running
  |- 9:0:1:127 sdbh 67:176 active undef running
  |- 9:0:2:127 sddo 71:96 active undef running
  |- 9:0:3:127 sdfb 129:208 active undef running
  |- 10:0:3:127 sdmz 70:432 active undef running
  |- 10:0:0:127 sdoh 128:464 active undef running
  |- 10:0:1:127 sdop 129:336 active undef running
  |- 10:0:2:127 sdqm 132:352 active undef running
  |- 7:0:1:127 sdzu 131:640 active undef running
  |- 7:0:0:127 sdxh 71:624 active undef running
  |- 7:0:3:127 sdaed 66:912 active undef running
  |- 7:0:2:127 sdaab 131:752 active undef running
  |- 8:0:0:127 sdakm 132:992 active undef running
  |- 8:0:1:127 sdall 134:880 active undef running
  |- 8:0:2:127 sdamx 8:1232 active undef running
  `- 8:0:3:127 sdaqa 69:1248 active undef running
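If you want to double-check that every path really agrees on the new size, something along these lines should do it (again using the WWID from above; blockdev reports the size in bytes):

# size reported by the assembled multipath device
blockdev --getsize64 /dev/mapper/350002acf962421ba
# size reported by each underlying path; they should all match
for p in $(multipath -ll 350002acf962421ba | grep -oE 'sd[a-z]+'); do
    printf '%s: ' $p
    blockdev --getsize64 /dev/$p
done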