We are converting a C++ math library to C#. The library mixes floats and doubles (sometimes casting between them), and we are mirroring that in the C# code in order to get exactly the same results in C# that we had in C++, but this is proving very difficult, if not impossible.
I think the problem is one or more of the following, but I am not an expert:
1. Converting float to double and double to float is handled differently by C++ and C#, which causes unpredictable results (a minimal sketch of this kind of effect is shown right after this list).
2. C++ and C# handle float precision differently, and one can't mimic the other.
3. There is a setting somewhere in .NET that makes it behave like C++, but I can't find it (both builds are 32-bit).
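For illustration, here is a minimal, made-up sketch of the effect I mean in point 1 (this is not our actual code, and the values are arbitrary): the same multiplication of two floats gives a different answer depending on whether the intermediate result is rounded back to float or kept in double, and as far as I can tell the VC6 code generator and the .NET JIT are free to make that choice differently.

```cpp
#include <stdio.h>

int main()
{
    /* 1/3 cannot be represented exactly; rounding it to a 32-bit float
       leaves a tiny error in the last bit of the mantissa. */
    float a = 1.0f / 3.0f;
    float b = 3.0f;

    /* Multiply with the operands promoted to double: the intermediate
       keeps its extra bits, so the rounding error in 'a' survives. */
    double kept_wide = (double)a * (double)b;

    /* Multiply in float: the product is rounded back to a 24-bit
       mantissa and the error happens to vanish completely. */
    float rounded_narrow = a * b;

    /* On a build that rounds to float at every step this prints:
         kept wide      = 1.0000000298023224
         rounded narrow = 1                                        */
    printf("kept wide      = %.17g\n", kept_wide);
    printf("rounded narrow = %.17g\n", (double)rounded_narrow);
    return 0;
}
```

The difference here is only one bit in the last place, but our calculations chain thousands of operations like this together.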
Can somebody explain the possible problems to me, and maybe link me to some authoritative documentation from Microsoft that I can use to explain the situation and the reason for the differences?
EDIT
We are using VC6 and .NET 4.0.
I can't give examples of the calculations because of an NDA, but I can show some of the numeric differences. They are probably not very useful by themselves:
8.085004000000000 (C#) vs. 8.084980000000000 (C++)
8.848165000000000 (C#) vs. 8.848170000000000 (C++)
0.015263214111328 (C#) vs. 0.015263900756836 (C++)
It should be noted that these numbers are the end results of whole calculations, so the differences shown have already compounded through many operations.
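To give an idea of what I mean by "compounded", here is another made-up sketch (again, nothing from the actual code base): the only difference between the two sides is whether the running total is rounded to float at every step, and although each step differs by less than one float bit, the final totals clearly diverge.

```cpp
#include <stdio.h>

int main()
{
    /* Accumulate the same series two ways: one rounds the running total
       to float after every addition, the other keeps it in double.
       Each individual step differs by less than one bit of a float,
       but over a million steps the totals visibly drift apart. */
    float  sum_narrow = 0.0f;
    double sum_wide   = 0.0;

    for (int i = 1; i <= 1000000; ++i)
    {
        float term = 1.0f / (float)i;   /* same float term on both sides */
        sum_narrow += term;             /* rounded to float every step   */
        sum_wide   += term;             /* kept wide between steps       */
    }

    printf("narrow = %.9f\n", (double)sum_narrow);
    printf("wide   = %.9f\n", sum_wide);
    return 0;
}
```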