As far as I know, the default Linux stack size limit is 8 MB, but on 64-bit machines there's no reason it couldn't be massively increased, e.g. to 4 GB. This would let programmers mostly stop worrying about storing large values on the stack, or about using recursive algorithms instead of iterative ones.
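(I know the limit is only a soft limit and can already be raised per process with `ulimit -s` or `setrlimit`; my question is about why the default isn't far larger. A minimal sketch of raising it at runtime, assuming Linux/glibc:)

```cpp
#include <sys/resource.h>
#include <cstdio>

int main() {
    rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0) {   // read the current limits
        perror("getrlimit");
        return 1;
    }
    std::printf("soft: %llu, hard: %llu\n",
                (unsigned long long)rl.rlim_cur,
                (unsigned long long)rl.rlim_max);

    // Raise the soft limit to 1 GB; this must not exceed rl.rlim_max
    // unless the process is privileged.
    rl.rlim_cur = 1ull << 30;
    if (setrlimit(RLIMIT_STACK, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}
```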
It should also be faster, because stack allocation is far cheaper than heap allocation (bumping the stack pointer versus a call into the allocator). We could see a whole new class of stack-allocated data structures; imagine a hypothetical std::stack_vector<T> that allocates on the stack (see the sketch below).
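(To make the idea concrete, here is a sketch of what such a container could look like: a fixed-capacity vector whose element storage is inline, so it lives on the stack whenever the container object does. The name and the `Capacity` parameter are my own invention; nothing like this exists in the standard library.)

```cpp
#include <cstddef>
#include <new>
#include <stdexcept>
#include <utility>

template <typename T, std::size_t Capacity>
class stack_vector {
    alignas(T) unsigned char storage_[Capacity * sizeof(T)];  // inline buffer, no heap
    std::size_t size_ = 0;

public:
    ~stack_vector() {
        for (std::size_t i = 0; i < size_; ++i)
            data()[i].~T();  // destroy constructed elements in place
    }

    std::size_t size() const { return size_; }
    T* data() { return reinterpret_cast<T*>(storage_); }  // production code would std::launder
    T& operator[](std::size_t i) { return data()[i]; }

    void push_back(T value) {
        if (size_ == Capacity)
            throw std::length_error("stack_vector capacity exhausted");
        // Placement-new into the inline buffer: no allocator call at all.
        ::new (static_cast<void*>(storage_ + size_ * sizeof(T))) T(std::move(value));
        ++size_;
    }
};
```

(This is essentially what boost::container::static_vector already offers, and llvm::SmallVector takes a hybrid approach, spilling to the heap once its inline capacity is exceeded, so the pattern is usable today without touching the 8 MB default.)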
Is there a downside I'm not seeing? Or is it just that nobody has cared enough to make the change?
std::vector<> always stores its elements on the heap. Only a pointer to the data and the size/capacity bookkeeping are stored on the stack (or wherever the vector object itself lives).
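A quick way to see this (the exact layout is implementation-defined, but on typical 64-bit implementations the vector object is just three pointers, i.e. 24 bytes):

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> v(1'000'000);  // the vector object lives on the stack...
    std::printf("sizeof(vector): %zu bytes\n", sizeof v);
    std::printf("object at %p, elements at %p\n",
                (void*)&v, (void*)v.data());  // ...its ~4 MB of elements do not
    return 0;
}
```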