Our library has many chained functions that are called thousands of times per time step while solving an engineering problem on a mesh. In these functions, we must create arrays whose sizes are only known at runtime and depend on the application. We have tried three approaches so far, shown below:
void compute_something( const int& n )
{
double fields1[n]; // Option 1.
auto *fields2 = new double[n]; // Option 2 (needs a matching delete[] fields2 before returning).
std::vector<double> fields3(n); // Option 3.
// .... a lot more operations on the field variables ....
}
Of these, Option 1 has worked with our current compiler, but we know it is not safe: the array may overflow the stack, and variable-length arrays are non-standard C++. Options 2 and 3 are safer, but allocating this frequently hurts performance to the point that our code runs roughly six times slower than with Option 1.
What other options are there for handling memory allocation efficiently for dynamically sized arrays in C++? We have considered constraining the parameter n so that we can give the compiler an upper bound on the array size (and let it optimize accordingly); however, in some functions n is essentially arbitrary and it is hard to come up with a tight upper bound. Is there a way to avoid the overhead of dynamic memory allocation? Any advice would be greatly appreciated.
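For reference, the upper-bound idea we considered would look roughly like the sketch below; MAX_N and compute_something_capped are purely illustrative names and values, not part of our actual code:

#include <array>
#include <cassert>
#include <cstddef>

constexpr std::size_t MAX_N = 1024; // hypothetical cap, chosen per application

void compute_something_capped( const int& n )
{
    assert(n >= 0 && static_cast<std::size_t>(n) <= MAX_N);
    std::array<double, MAX_N> fields{}; // fixed size, lives on the stack, no heap allocation
    // .... operations restricted to fields[0] .. fields[n-1] ....
    (void)fields; // placeholder for the real work
}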
1. std::vector. 2. std::vector. 3. std::vector.
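If the cost comes from the repeated heap allocation rather than from std::vector itself, one common remedy is to keep a single vector alive across calls so its capacity is reused. A minimal sketch, assuming the call sites can pass in a workspace (the workspace parameter is illustrative, not from the original code):

#include <vector>

// The caller owns one workspace vector and passes it in, so the buffer is
// only reallocated when n grows past the current capacity.
void compute_something( const int& n, std::vector<double>& workspace )
{
    workspace.resize(static_cast<std::size_t>(n)); // reuses existing capacity when possible
    double *fields3 = workspace.data();            // raw pointer access is still available
    // .... a lot more operations on fields3[0] .. fields3[n-1] ....
    (void)fields3; // placeholder for the real work
}

// Typical usage in the simulation driver (commented out):
// std::vector<double> workspace;
// for each time step: compute_something(n, workspace);

If changing the call sites is not possible, a thread_local std::vector inside the function achieves the same reuse, at the cost of the buffer staying allocated for the lifetime of the thread.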