It's called an ArrayList
Contrary to what @trutheality says, they don't require a fixed capacity, and the nodes don't store an index to the next item. To get around the fixed size of a plain array, an ArrayList automatically resizes its internal array when the element count crosses a capacity threshold.
Resizing the internal array is expensive: it involves allocating a new array and copying the data from the old array into the new one. As such, it's beneficial to limit the number of resize operations.
One common approach is to double the array's capacity when the list fills it, and halve the capacity when the list shrinks to one-quarter full.
The reason the array isn't shrunk at half capacity is to avoid thrashing. Thrashing is when a workload hovers right at the capacity boundary, so alternating adds and removes trigger a resize on nearly every operation while barely changing the internal data.
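Here's a minimal sketch of that policy (illustrative only, not the JDK's actual ArrayList source; the class and starting capacity are made up):

```java
// DynArray: a toy dynamic array demonstrating the grow-at-full /
// shrink-at-one-quarter resize policy described above.
public class DynArray {
    private Object[] items = new Object[4]; // arbitrary starting capacity
    private int size = 0;

    public void add(Object x) {
        if (size == items.length) resize(items.length * 2); // full: double
        items[size++] = x;
    }

    public Object removeLast() {
        Object x = items[--size];
        items[size] = null; // drop the reference so the GC can reclaim it
        // Shrink at 1/4 full rather than 1/2 to avoid thrashing at the boundary.
        if (size > 0 && size == items.length / 4) resize(items.length / 2);
        return x;
    }

    private void resize(int capacity) {
        Object[] copy = new Object[capacity];        // the expensive step:
        System.arraycopy(items, 0, copy, 0, size);   // allocate + copy everything
        items = copy;
    }

    public int size() { return size; }
    public int capacity() { return items.length; }
}
```

Note the asymmetry: the array doubles the moment it's full, but only halves once usage falls to a quarter, leaving slack in both directions.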
Despite the expense of resizing, it only happens when the dataset doubles, so over N appends there are only O(log N) resize operations and O(N) total copy work. That makes appending amortized constant O(1), and retrieval by index is constant O(1) as well.
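You can check the amortized claim by simulation. This sketch (names are mine) counts how many element copies N doubling-appends perform; the total stays under 2N, so the per-append cost averages out to O(1):

```java
// AmortizedCount: tally the element copies done by resizes while
// appending n items to a doubling array that starts at capacity 1.
public class AmortizedCount {
    public static long copiesFor(int n) {
        long copies = 0;
        int capacity = 1, size = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {  // array is full:
                copies += size;      // copy every existing element...
                capacity *= 2;       // ...into an array twice the size
            }
            size++;
        }
        return copies;
    }
}
```

The copy counts form the series 1 + 2 + 4 + ... which sums to less than 2N, which is where the amortized O(1) bound comes from.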
There is one major weakness of ArrayLists: if you add or remove items at arbitrary positions, the array contents must be shifted to accommodate the change, an operation that costs linear O(N) time.
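The shift is why the cost is linear: every element after the affected index has to move one slot. A sketch (again illustrative, not the JDK source):

```java
import java.util.Arrays;

// ShiftDemo: removing at an arbitrary index forces the tail of the
// array to shift left by one slot, which is O(N) in the worst case.
public class ShiftDemo {
    public static int[] removeAt(int[] a, int index) {
        int[] result = new int[a.length - 1];
        System.arraycopy(a, 0, result, 0, index);  // prefix is unchanged
        System.arraycopy(a, index + 1, result, index,
                         a.length - index - 1);    // shift the tail left
        return result;
    }
}
```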
Even though inserting or removing at a known node in a traditional LinkedList is cheap (i.e. constant O(1) time), the operation first requires a traversal to find the position in the chain, which costs linear O(N) time. Unless you're building a Queue, where both ends of the list are mutated frequently, an ArrayList is probably the better foundation for a list.
Source: Currently taking an Algorithms course and just finished implementing both an ArrayList and a LinkedList from scratch.