I have a program that needs to take in 4 bytes and convert them to an IEEE-754 float. The bytes are transferred out of order, but I can put them back in order just fine. My problem is reading them back as a float. The relevant parts of the code:
//Union to store bytes and float on top of each other
typedef union {
    unsigned char b[4];
    float f;
} bfloat;

//Create an instance of the union (Temperature is a member of the MMI struct)
bfloat Temperature;

//Add float data using the transmitted bytes
MMI.Temperature.b[2] = 0xD1; //MMIResponseMsg[7];
MMI.Temperature.b[3] = 0xE1; //MMIResponseMsg[8];
MMI.Temperature.b[0] = 0x41; //MMIResponseMsg[9];
MMI.Temperature.b[1] = 0xD7; //MMIResponseMsg[10];

//Attempting to read the float value
lWhole = (long)MMI.Temperature.f;

//DEBUGGING
stevenFloat = MMI.Temperature.f;
lWhole is a long and stevenFloat is a float. When debugging I can see that the values I assign to the byte array are stored correctly; however, the values of stevenFloat and lWhole are incorrect. They hover either close to 0 or close to the maximum float/long values. A long and a float are both 32 bits with my compiler.
Does anyone know why this isn't working? It looked correct to me when I received the code to work on, and it appears to be a common solution online; I am just stumped.
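If it helps, here is a stripped-down, self-contained version that reproduces the problem (same union and byte values as above; it only prints the float, to keep it minimal):

#include <stdio.h>

typedef union {
    unsigned char b[4];
    float f;
} bfloat;

int main(void)
{
    bfloat Temperature;

    //Same byte values and positions as in the real code
    Temperature.b[2] = 0xD1;
    Temperature.b[3] = 0xE1;
    Temperature.b[0] = 0x41;
    Temperature.b[1] = 0xD7;

    //Prints a huge or tiny value instead of the expected ~27
    printf("f = %f\n", Temperature.f);
    return 0;
}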
Comments:

"When I put 0x41D7D1E1 into a hex-to-float converter, I got ~27, which seems like a nice number. What about 0xD1E141D7?"

"Create a bFloat and explicitly assign the bFloat::f value to be the expected value. Then examine bFloat::b to see what the actual bytes come out to be, and it may give you the clue you need to fix your problem."
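The second suggestion is easy to try. A minimal sketch of that diagnostic, assuming the same union as above (26.98f is just a stand-in value near the expected ~27):

#include <stdio.h>

typedef union {
    unsigned char b[4];
    float f;
} bfloat;

int main(void)
{
    bfloat t;
    t.f = 26.98f; //stand-in near the expected reading

    //Print the bytes as they sit in memory. On a little-endian
    //machine the 0x41 high byte of the encoding lands in b[3],
    //not b[0], which shows how the received bytes must be ordered.
    for (int i = 0; i < 4; i++)
        printf("b[%d] = 0x%02X\n", i, t.b[i]);

    return 0;
}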