I agree with everything G. Sliepen said in their answer, especially the advice to create an actual 3D image rather than an array of 2D images (ideally you'd create an image object that can have an arbitrary number of dimensions), and to use an array type to hold coordinates and sizes.
I would like to add a question: what is the use case for this function? I imagine you might want to add a Gaussian blob to an existing image. With your function, you'd create a new image containing the Gaussian blob, and then add it to the other image. Why not take an existing image as input and add the Gaussian to it directly?
Another use case could be to create a kernel for a Gaussian filter. But the Gaussian filter is separable: an n-dimensional Gaussian convolution can be computed as n successive 1D convolutions, which is far cheaper than convolving with a full 3D kernel. It should be implemented that way, so this is not a good use case either.
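To make the separability argument concrete, here is a rough sketch (the helper name and its truncation default are my own, not from your code or from DIPlib): you build a 1D kernel once, then convolve the image with it along each axis in turn.

    #include <cmath>
    #include <vector>

    // Build a 1D Gaussian kernel, truncated at `truncation` sigma (hypothetical helper).
    std::vector<double> gaussianKernel1D(double sigma, double truncation = 3.0) {
        int radius = static_cast<int>(std::ceil(truncation * sigma));
        std::vector<double> kernel(2 * radius + 1);
        double sum = 0.0;
        for (int i = -radius; i <= radius; ++i) {
            kernel[i + radius] = std::exp(-0.5 * (i * i) / (sigma * sigma));
            sum += kernel[i + radius];
        }
        for (double& k : kernel) {
            k /= sum;  // normalize so the kernel sums to 1
        }
        return kernel;
    }

    // Applying this kernel along x, then y, then z costs 3 * (2r + 1) multiplications
    // per pixel, versus (2r + 1)^3 for a convolution with the full 3D kernel.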
You could improve efficiency by computing the Gaussian only where it is not approximately zero. Typically we truncate it at 3 sigma, but depending on the application a different cutoff may be appropriate. Pixels outside that (hyper-)cuboid you'd set to zero, saving you some quite expensive computation. This is of course most impactful when drawing small Gaussian blobs in a large image.
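A minimal sketch of that bounding-box idea (the function name, buffer layout and parameters are hypothetical, just for illustration, assuming a flat buffer with x-fastest ordering); note that it also takes an existing image and adds to it, as suggested above:

    #include <algorithm>
    #include <array>
    #include <cmath>
    #include <vector>

    // Add a 3D Gaussian blob to an existing image, touching only the pixels within
    // `truncation * sigma` of the center. The buffer is indexed as
    // x + sizes[0] * (y + sizes[1] * z).
    void addGaussianBlob(std::vector<float>& image, std::array<int, 3> sizes,
                         std::array<double, 3> center, std::array<double, 3> sigma,
                         double amplitude = 1.0, double truncation = 3.0) {
        // Compute the bounding box of the blob, clamped to the image bounds.
        std::array<int, 3> lo, hi;
        for (int d = 0; d < 3; ++d) {
            lo[d] = std::max(0, static_cast<int>(std::floor(center[d] - truncation * sigma[d])));
            hi[d] = std::min(sizes[d] - 1, static_cast<int>(std::ceil(center[d] + truncation * sigma[d])));
        }
        // Loop over the bounding box only; everything outside stays untouched.
        for (int z = lo[2]; z <= hi[2]; ++z) {
            for (int y = lo[1]; y <= hi[1]; ++y) {
                for (int x = lo[0]; x <= hi[0]; ++x) {
                    double dx = (x - center[0]) / sigma[0];
                    double dy = (y - center[1]) / sigma[1];
                    double dz = (z - center[2]) / sigma[2];
                    std::size_t index = x + static_cast<std::size_t>(sizes[0]) * (y + static_cast<std::size_t>(sizes[1]) * z);
                    image[index] += static_cast<float>(amplitude * std::exp(-0.5 * (dx * dx + dy * dy + dz * dz)));
                }
            }
        }
    }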
I encourage you to take a look at how I implemented an arbitrary-dimensional image in DIPlib: https://github.com/DIPlib/diplib/blob/master/include/diplib/library/image.h
The DIPlib function that adds a Gaussian blob to an existing image has the following signature:
void DrawBandlimitedPoint(
    Image& out,
    FloatArray origin,
    Image::Pixel const& value = { 1 },
    FloatArray sigmas = { 1.0 },
    dfloat truncation = 3.0
);
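For context, calling it looks roughly like this (a quick sketch from memory; check the DIPlib documentation for the exact headers and defaults):

    #include "diplib.h"
    #include "diplib/generation.h"

    int main() {
        // A 3D, single-channel, single-precision float image, filled with zeros.
        dip::Image img( { 256, 256, 128 }, 1, dip::DT_SFLOAT );
        img.Fill( 0 );
        // Add a Gaussian blob centered at (128, 128, 64), with sigma 3 along each axis.
        dip::DrawBandlimitedPoint( img, { 128.0, 128.0, 64.0 }, { 255 }, { 3.0, 3.0, 3.0 } );
    }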