Our library contains many chained functions that are called thousands of times per time step while solving an engineering problem on a mesh. In these functions we must create arrays whose sizes are only known at runtime and depend on the application. We have tried three approaches so far, shown below:

#include <vector>

void compute_something( const int& n )
{
    double fields1[n];               // Option 1: VLA on the stack (non-standard C++).
    auto *fields2 = new double[n];   // Option 2: raw heap array (needs delete[] later).
    std::vector<double> fields3(n);  // Option 3: std::vector.

    // .... a lot more operations on the field variables ....
}

Of these, Option 1 works with our current compiler, but we know it is not safe because we may overflow the stack (and variable-length arrays are non-standard C++). Options 2 and 3 are safer, but at the frequency we call these functions the repeated allocations hurt performance: the code runs roughly six times slower than with Option 1.

What are other options to handle memory allocation efficiently for dynamic-sized arrays in C++? We have considered constraining the parameter n so that we can give the compiler an upper bound on the array size (and let it optimize accordingly); however, in some functions n can be essentially arbitrary, and it is hard to come up with a tight upper bound. Is there a way to circumvent the overhead of dynamic memory allocation? Any advice would be greatly appreciated.
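For reference, the bounded-size idea we considered looks roughly like the sketch below (MAX_N is a made-up bound; the whole difficulty is that no tight value exists for some of our functions):

constexpr int MAX_N = 1024;          // hypothetical upper bound on n

void compute_something_bounded( int n )
{
    // Plain fixed-size stack array: standard C++, no allocation,
    // but only safe while n <= MAX_N.
    double fields[MAX_N];

    // .... operations on fields[0] .. fields[n-1] ....

    // For n > MAX_N we would still need a heap fallback (e.g. std::vector).
}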

Comments:

  • Choose a language: C or C++, not both. The answers are quite different depending on which you choose! The code is C++, not C, so presumably the C tag should go.
  • There are three viable options. 1. std::vector. 2. std::vector. 3. std::vector.
  • Option 1 is valid in C (mandatory in C99; optional in C11 and C18; C23 requires support for VLA types but not for VLAs with automatic storage). It does run the risk of overflowing the stack, so you'd have to be careful about stack depth.
  • One way to reduce the cost of dynamic memory allocation is to do less of it. Can you allocate more carefully, and reuse rather than release? That might be easier in C than in C++.
  • Option 1 is not valid in standard C++. Option 2 is valid but highly error-prone (potential leaks, etc.). Option 3 is valid and mitigates several errors associated with Option 2. If performance matters, focus on algorithms, and minimise how often a dynamically allocated array is allocated/deallocated (i.e. take care with changing the size and capacity of a vector).

1 Answer
  1. Create a cache at startup and pre-allocate it with a reasonable size.
  2. Pass the cache to your compute function, or make it a member of your class if compute() is a method.
  3. Resize the cache inside the function, as shown below.

std::vector<double> fields;
fields.reserve( reasonable_size );  // one up-front allocation
...
void compute( int n, std::vector<double>& fields ) {
    fields.resize(n);  // no reallocation while n <= fields.capacity()
    // .... a lot more operations on the field variables ....
}
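If compute() is a method, the cache can simply live in the class. A minimal sketch of that variant (the Solver class and its names are made up for illustration):

#include <cstddef>
#include <vector>

// Hypothetical solver: the buffer is allocated once and then reused
// on every time step instead of being reallocated per call.
class Solver {
public:
    explicit Solver( std::size_t reasonable_size ) {
        fields_.reserve( reasonable_size );  // single up-front allocation
    }

    void compute( int n ) {
        fields_.resize( n );  // reuses existing storage when n <= capacity()
        // .... a lot more operations on fields_[0] .. fields_[n-1] ....
    }

private:
    std::vector<double> fields_;  // cached between calls, never freed mid-run
};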

This has a few benefits.

  1. First, most of the time the size of the vector will change but no allocation will take place: resize only allocates when n exceeds the current capacity, and std::vector grows its capacity geometrically, so reallocations quickly become rare (see the snippet after this list).

  2. Second, you will be reusing the same memory, so it is likely to stay in the CPU cache.
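To illustrate the first point, a small self-contained check (illustrative only, not from the original answer): pointers to the elements stay valid across resize as long as the requested size stays within the reserved capacity, which shows no reallocation happened.

#include <cassert>
#include <vector>

int main() {
    std::vector<double> fields;
    fields.reserve( 1000 );           // capacity is now at least 1000
    fields.resize( 10 );
    const double* before = fields.data();

    fields.resize( 600 );             // within capacity: no reallocation
    fields.resize( 250 );             // shrinking never reallocates
    fields.resize( 1000 );            // still within capacity
    assert( fields.data() == before );

    fields.resize( 2000 );            // exceeds capacity: reallocation occurs
    assert( fields.capacity() >= 2000 );
}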
