I have a function in an embedded system which translates an image encoded in a 32-bit integer into a 3x9 matrix (an array of arrays):
pub fn image_to_preformated_vector(image: u32) -> [[u8; 9]; 3] {
    [
        [
            1 & (image >> 0) as u8,
            1 & (image >> 1) as u8,
            1 & (image >> 2) as u8,
            1 & (image >> 3) as u8,
            1 & (image >> 4) as u8,
            1 & (image >> 5) as u8,
            1 & (image >> 6) as u8,
            1 & (image >> 7) as u8,
            1 & (image >> 8) as u8,
        ],
        [
            1 & (image >> 9) as u8,
            1 & (image >> 10) as u8,
            1 & (image >> 11) as u8,
            1 & (image >> 12) as u8,
            1 & (image >> 13) as u8,
            1 & (image >> 14) as u8,
            1 & (image >> 15) as u8,
            1 & (image >> 16) as u8,
            1 & (image >> 17) as u8,
        ],
        [
            1 & (image >> 18) as u8,
            1 & (image >> 19) as u8,
            1 & (image >> 20) as u8,
            1 & (image >> 21) as u8,
            1 & (image >> 22) as u8,
            1 & (image >> 23) as u8,
            1 & (image >> 24) as u8,
            1 & (image >> 25) as u8,
            1 & (image >> 26) as u8,
        ],
    ]
}
Alternative implementation:
pub fn image_to_preformated_vector(image: u32) -> [[u8; 9]; 3] {
    [
        [0, 1, 2, 3, 4, 5, 6, 7, 8],
        [9, 10, 11, 12, 13, 14, 15, 16, 17],
        [18, 19, 20, 21, 22, 23, 24, 25, 26],
    ]
    .map(|v| v.map(|i| 1_u8 & (image >> i) as u8))
}
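Both versions are intended to behave identically: bit i of the input ends up at row i / 9, column i % 9, least significant bit first. For reference, a minimal sanity check of the behaviour I expect (the test value and expected rows below are just an illustrative assumption, not part of the real image data):

fn main() {
    // Bits 0, 2 and 26 set: bit 0 should land in the first element of the
    // first row, bit 26 in the last element of the last row.
    let image: u32 = 0b100_0000_0000_0000_0000_0000_0101;
    let rows = image_to_preformated_vector(image);
    assert_eq!(rows[0], [1, 0, 1, 0, 0, 0, 0, 0, 0]);
    assert_eq!(rows[1], [0; 9]);
    assert_eq!(rows[2], [0, 0, 0, 0, 0, 0, 0, 0, 1]);
}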
Which is the preferred implementation, and why? Is there an even better way of writing it?