
Is the runtime complexity of common Array functions like push, pop, shift, slice or splice defined by the JS standard? In particular, I'm interested in removing and inserting entries at arbitrary positions. If the complexity is not defined, what can I expect, e.g. in V8?

(This question was inspired by this. Also, this benchmark, posted here, makes me curious, but maybe that is some unrelated phenomenon.)

(A very related question is here. However, one of the comments on the accepted answer says that it is wrong now. Also, the accepted answer does not give any reference showing that the standard really defines it that way.)


3 Answers


The ECMA specification does not specify a bounding complexity, however, you can derive one from the specification's algorithms.

push is O(1); however, in practice it will encounter an O(N) copy cost at engine-defined boundaries as the slot array needs to be reallocated. Because these boundaries are typically spaced geometrically (capacity grows by a constant factor), reallocation happens only O(log N) times, keeping the amortized cost O(1).

pop is O(1) with a similar caveat to push, but the O(N) copy is rarely encountered, as it is often folded into garbage collection (e.g. a copying collector could copy only the used part of an array).

shift is at worst O(N); however it can, in special cases, be implemented as O(1) at the cost of slowing down indexing, so your mileage may vary.
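A rough timing sketch (not a rigorous benchmark, and engine optimizations such as V8's internal trimming of the front of an array can blur the picture) of the pop-vs-shift difference:

```javascript
// Drain an array from the back (pop) or the front (shift) and time it.
// shift must conceptually move every remaining element down one slot,
// so it tends toward O(N) per call on large arrays in most engines.
function drain(arr, fromFront) {
  const t0 = Date.now();
  while (arr.length > 0) {
    fromFront ? arr.shift() : arr.pop();
  }
  return Date.now() - t0;
}

const N = 50000;
const a = Array.from({ length: N }, (_, i) => i);
const b = Array.from({ length: N }, (_, i) => i);

const popMs = drain(a, false);
const shiftMs = drain(b, true);
console.log(`pop: ${popMs} ms, shift: ${shiftMs} ms`);
```

The absolute numbers depend entirely on the engine and N; the point is only to measure rather than guess.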

slice is O(N) where N is end - start. Not a tremendous amount of optimization opportunity here without significantly slowing down writes to both arrays.

splice is, worst case, O(N). There are array storage techniques that divide N by a constant but they significantly slow down indexing. If an engine uses such techniques you might notice unusually slow operations as it switches between storage techniques triggered by access pattern changes.
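Concretely, for the case asked about (removing and inserting entries at arbitrary positions), every element after the affected index has to be moved, which is where the O(N) comes from:

```javascript
const arr = [10, 20, 30, 40, 50];

// Remove one entry at index 2: elements 40 and 50 shift left.
const removed = arr.splice(2, 1); // removed = [30], arr = [10, 20, 40, 50]

// Insert two entries at index 1: everything from index 1 on shifts right.
arr.splice(1, 0, 15, 17);         // arr = [10, 15, 17, 20, 40, 50]

console.log(removed, arr);
```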

One you didn't mention is sort. It is, in the average case, O(N log N). However, depending on the algorithm chosen by the engine, you could get O(N^2) in some cases. For example, if the engine uses QuickSort (even with a late cut-over to InsertionSort), it has well-known O(N^2) inputs. This could be a source of DoS for your application. If this is a concern, either limit the size of the arrays you sort (perhaps merging the sub-arrays afterwards) or bail out to HeapSort.
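A hedged sketch of the "limit the size of the arrays you sort" mitigation: the engine's built-in sort is only ever invoked on bounded chunks, which are then merged pairwise. The names (chunkedSort, mergeTwo) are illustrative, not a standard API:

```javascript
// Sort fixed-size chunks with the built-in sort, then merge them, so a
// pathological input only ever hits the engine's sort on CHUNK elements.
const CHUNK = 4; // unrealistically small, just for illustration

function mergeTwo(a, b, cmp) {
  const out = [];
  let i = 0, j = 0;
  while (i < a.length && j < b.length) {
    out.push(cmp(a[i], b[j]) <= 0 ? a[i++] : b[j++]);
  }
  return out.concat(a.slice(i), b.slice(j));
}

function chunkedSort(arr, cmp) {
  let runs = [];
  for (let i = 0; i < arr.length; i += CHUNK) {
    runs.push(arr.slice(i, i + CHUNK).sort(cmp)); // bounded built-in sort
  }
  while (runs.length > 1) {             // merge runs pairwise, O(N log N) total
    const next = [];
    for (let i = 0; i < runs.length; i += 2) {
      next.push(i + 1 < runs.length ? mergeTwo(runs[i], runs[i + 1], cmp) : runs[i]);
    }
    runs = next;
  }
  return runs.length ? runs[0] : [];
}

console.log(chunkedSort([5, 3, 8, 1, 9, 2, 7], (x, y) => x - y));
// → [1, 2, 3, 5, 7, 8, 9]
```

Worth noting: since ES2019 the spec requires sort to be stable, and modern engines use algorithms (e.g. TimSort in V8) with O(N log N) worst case, so this concern applies mainly to older engines.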


8 Comments

This is great, but without a source specific to JavaScript, it's hard to verify. For instance, I know that sort can be O(N log N), but is it in every language? No. (Also, what about filter, reverse, etc.?)
These are hard to verify but you can rely on them being no worse complexity than the algorithm in the spec. As for filter and reverse they are O(N).
What about concat?
Please mention unshift(), I assume that's O(n) as well.
concat and unshift are almost certainly O(n)

In very simple words:

push -> O(1)

pop -> O(1)

shift -> O(N)

slice -> O(N)

splice -> O(N)
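The list above in runnable form, with each call annotated with its typical cost in current engines:

```javascript
const a = [1, 2, 3];
a.push(4);              // amortized O(1) → [1, 2, 3, 4]
a.pop();                // O(1)           → [1, 2, 3]
a.shift();              // O(N)           → [2, 3]
const copy = a.slice(); // O(N) copy      → [2, 3]
a.splice(1, 0, 99);     // O(N)           → [2, 99, 3]
console.log(a, copy);
```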

Here is a complete explanation about time complexity of Arrays in JavaScript.

4 Comments

Yes, exactly! Array.prototype.splice is a universal multifunctional tool, and its complexity depends on what is done with it. For example, if it removes and/or adds the last element (like push or pop), its complexity is O(1); if it behaves like shift/unshift, its complexity is O(n). Other, "middle" situations are also possible.
Yeah, so we can say that in the worst case it's still O(n).
That article hypothesizes a lot but doesn't cite the standard (which also doesn't specify complexities) or any specific engine. Just because it would be smart to implement things that way, doesn't mean any specific implementation is forced to do so.
@SarwarSateer - push() is not O(1), it's amortized O(1), as the top-rated answer explains: it will encounter an O(N) copy cost at engine-defined boundaries as the array needs to be reallocated.

Just a side note: it is possible to implement the shift/unshift and push/pop methods in O(1) using a ring buffer (in other words, a circular queue/circular buffer) data structure. It is O(1) even in the worst case, as long as the circular buffer is not required to grow. Did anyone actually measure the performance of these operations? It's always better to know than to guess...
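A minimal fixed-capacity sketch of that idea (illustrative only: it throws rather than growing when full, and it clears slots on removal so the buffer does not pin removed objects for the garbage collector):

```javascript
// All four operations are O(1): the buffer never moves elements, it only
// moves the head index modulo the capacity.
class RingBuffer {
  constructor(capacity) {
    this.buf = new Array(capacity);
    this.head = 0;  // index of the first element
    this.size = 0;
    this.cap = capacity;
  }
  push(x) {         // append at the back
    if (this.size === this.cap) throw new Error('full');
    this.buf[(this.head + this.size) % this.cap] = x;
    this.size++;
  }
  pop() {           // remove from the back
    if (this.size === 0) return undefined;
    const i = (this.head + --this.size) % this.cap;
    const x = this.buf[i];
    this.buf[i] = undefined; // drop the reference for GC
    return x;
  }
  unshift(x) {      // prepend at the front
    if (this.size === this.cap) throw new Error('full');
    this.head = (this.head + this.cap - 1) % this.cap;
    this.buf[this.head] = x;
    this.size++;
  }
  shift() {         // remove from the front
    if (this.size === 0) return undefined;
    const x = this.buf[this.head];
    this.buf[this.head] = undefined; // drop the reference for GC
    this.head = (this.head + 1) % this.cap;
    this.size--;
    return x;
  }
}

const q = new RingBuffer(4);
q.push(1); q.push(2); q.unshift(0);
console.log(q.shift(), q.shift(), q.pop()); // → 0 1 2
```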

3 Comments

Yeah, I tried it out a couple of months ago on a jsPerf: shift and unshift did indeed perform closer to O(n) than O(1), and were a lot slower with large n. I made this Queue implementation to avoid shift and unshift: gist.github.com/tbjgolden/142f2e0b2c1670812959e3588c4fa8a2
@TomGolden - Sorry, not to be too abrupt here, but that implementation needs several caveats. It never lets anything go: it accumulates not only unused array slots, it also holds references to objects in those unused slots. If you push large objects into that queue, it will hold them forever, causing memory usage to grow as O(n) in the total size of the items added over the queue's lifetime. At the very least, please set the array slots to undefined after removing them with dequeue.
Folks need to remember that big-O covers time AND SPACE. These dequeue implementations in JavaScript will never be O(1) overall: at minimum they increase space considerably by keeping duplications, bookkeeping structures, etc. And the time won't be constant-free either, as retrieval always requires extra computation to go through that bookkeeping and those duplications.
