When creating the backing array for (e.g.) a collection, you do not really care about the exact size of the array you create; it only needs to be at least as large as the size you calculated.
But because of memory-allocation granularity and the VM's array header, it would in some cases be possible to create a somewhat larger array without consuming any more memory. For the Oracle 32-bit VM (at least, that's what several sources on the internet claim), memory granularity is 8 bytes (meaning any memory allocation is rounded up to the next 8-byte boundary), and the array header overhead is 12 bytes.
That means allocating an Object[2] should consume 20 bytes (12 + 2 * 4), but it will actually take 24 bytes thanks to granularity. It would be possible to create an Object[3] for exactly the same memory cost, meaning a collection would have to resize its backing array a little later. The same principle could be applied to primitive arrays, e.g. byte[] used for I/O buffers, char[] in StringBuilder, etc.
While such an optimization won't have a really noticeable effect except under the most extreme circumstances, it wouldn't be much trouble to call a static method to "optimize" an array size.
Problem is, there is no such "round array size up to memory granularity" method in the JDK. And writing such a method myself would require determining some crucial parameters of the VM: the memory granularity, the array header overhead, and finally the size of each element type (mainly a problem for references, since their size can vary with architecture and VM options).
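For illustration, here is roughly what I have in mind — a minimal sketch that hard-codes the 32-bit figures quoted above as assumptions rather than querying them from the VM; the class and method names are just hypothetical:

```java
// Sketch of the hypothetical "round up to memory granularity" helper.
// HEADER_BYTES and GRANULARITY are assumptions (the 32-bit VM figures
// quoted above), not values determined from the running VM.
public class ArraySizeRounder {
    static final int HEADER_BYTES = 12; // assumed array header overhead
    static final int GRANULARITY = 8;   // assumed allocation granularity

    /**
     * Returns the largest array length that fits in the same rounded-up
     * allocation as an array of the requested length.
     */
    static int roundUpLength(int requestedLength, int elementBytes) {
        long raw = HEADER_BYTES + (long) requestedLength * elementBytes;
        // Round the raw size up to the next multiple of the granularity.
        long allocated = ((raw + GRANULARITY - 1) / GRANULARITY) * GRANULARITY;
        // How many elements fit in that allocation after the header?
        return (int) ((allocated - HEADER_BYTES) / elementBytes);
    }

    public static void main(String[] args) {
        // Object[2] with 4-byte references: 12 + 2*4 = 20 bytes,
        // rounded up to 24, which holds (24 - 12) / 4 = 3 references.
        System.out.println(roundUpLength(2, 4)); // prints 3
    }
}
```

The real difficulty, as stated above, is obtaining the two constants and the reference size for the actual VM instead of hard-coding them.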
So is there a method to determine these parameters, or achieve the desired "round up" by other means?
You seem to be conflating dynamic structures (like ArrayList) with static structures (like arrays). Specifically, Java arrays are not dynamically sized. So the rounding you speak of is how you might estimate the memory usage of an array (and perhaps there's an optimization around alignment), but the array itself still has only the precisely requested size.