If that output is really what you want, you are working way too hard for it.
In your original solution you work from the back and, in each step, sum up all the needed elements again to generate the new value. This is inefficient: working from the back, you can reuse the sum from the previous step and just add the current element on top of it.
def parts_sums_from_the_back(ls):
    nls = []
    sum_ = 0
    for element in reversed(ls):
        sum_ += element       # reuse the previous sum, add one element
        nls.insert(0, sum_)   # prepend so the result ends up in original order
    return nls
You can also work from the front. Then you start with the sum over all elements, and subtract the elements as you go along:
def parts_sums_from_the_front(ls):
    nls = [sum(ls)]                   # start with the total over all elements
    for i in range(len(ls) - 1):
        nls.append(nls[i] - ls[i])    # remove one element from the previous sum
    return nls
Sidenote: As you can see, sum is already taken by a built-in function in Python, so you should not use it to name your own variables.
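Shadowing the built-in leads to confusing errors further down the line:

sum = 10          # shadows the built-in function
sum([1, 2, 3])    # TypeError: 'int' object is not callable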
Also, since what you are doing is not uncommon, it even has a special name: cumulative sum. Just think for a second about how your code can be expressed in terms of a cumulative sum. In the meantime, let me introduce you to NumPy. NumPy is a specialized library for numeric computations, and as such it also has an implementation of the cumulative sum.
import numpy as np

def parts_sums_np(ls):
    # cumulative sum of the reversed list, reversed back into original order
    return np.cumsum(ls[::-1]).tolist()[::-1]
Spoiler: As you can see, your result is the cumulative sum of the reversed list, reversed again.
Addendum: Since I usually work with Python in a scientific context, I'm quick to use NumPy for quite a lot of tasks. Fortunately there are experienced people like Maarten Fabré who know a thing or two about Python. He gave the following implementation in a comment below, which only uses tools from Python's standard library:
from itertools import accumulate

def parts_sums_itertools(ls):
    # running sum of the reversed list, reversed back into original order
    return list(accumulate(reversed(ls)))[::-1]
itertools.accumulate is a very handy function for expressing all kinds of cumulative computations. Used on a sequence with no further arguments it computes the cumulative sum, but it can also be used to implement other algorithms such as a cumulative product.
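For example, passing operator.mul as the second argument turns it into a cumulative product:

from itertools import accumulate
from operator import mul

print(list(accumulate([1, 2, 3, 4])))       # cumulative sum: [1, 3, 6, 10]
print(list(accumulate([1, 2, 3, 4], mul)))  # cumulative product: [1, 2, 6, 24]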
I tested all of them with this little snippet:
ls = [1, 4, 6, 4]
nsl = [15, 14, 10, 4]
print("parts_sums", parts_sums(ls) == nsl)
print("from_the_back", parts_sums_from_the_back(ls) == nsl)
print("from_the_front", parts_sums_from_the_front(ls) == nsl)
print("numpy", parts_sums_np(ls) == nsl)
print("itertools", parts_sums_itertools(ls) == nsl)
which happily prints
parts_sums True
from_the_back True
from_the_front True
numpy True
itertools True
Here parts_sums is your original implementation.
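If you want to run the snippet on its own: I haven't repeated your code in this answer, but a quadratic implementation along the lines described above could look like this (my reconstruction, not necessarily your exact code):

def parts_sums(ls):
    # recomputes the sum of the remaining tail for every position: O(n^2)
    return [sum(ls[i:]) for i in range(len(ls))]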
Timing
I also performed some preliminary timing. With the input given in your question the results are as follows:
parts_sums: 5µs
from_the_back: 1.6µs
from_the_front: 2µs
numpy: 13µs
itertools: 1.15µs
As you can see, itertools takes the lead. Working from the back is the most efficient approach, since it does not require looking at all elements of the list beforehand. NumPy actually performs quite poorly here, which is likely due to the overhead of crossing from Python into NumPy's C backend and back again.
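These numbers will of course vary from machine to machine. If you want to reproduce them, a minimal measurement loop using timeit could look like this (a sketch, not necessarily my exact setup):

import timeit

ls = [1, 4, 6, 4]
for fn in (parts_sums, parts_sums_from_the_back, parts_sums_from_the_front,
           parts_sums_np, parts_sums_itertools):
    per_call = timeit.timeit(lambda: fn(ls), number=100_000) / 100_000
    print(f"{fn.__name__}: {per_call * 1e6:.2f}µs")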
I repeated the timing for an input with 1000 elements. The results are as follows:
parts_sums: 185ms
from_the_back: 690µs
from_the_front: 285µs
numpy: 170µs
itertools: 63.1µs
As you can see, all the repeated summing makes your original implementation scale really badly. Also, from_the_front and from_the_back have switched places. This is likely because .insert(0, ...) has to shift all existing elements and is therefore more expensive than .append(...). You can avoid this shifting, since you know exactly how large the list will be in the end: preallocate it and fill it by index (see code below). With that change the time goes down from over 600µs to around 270µs. At this point you also see NumPy starting to play to its strengths, but itertools still dominates the comparison by a considerable margin.
def parts_sums_from_the_back_pre(ls):
    nls = [None] * len(ls)    # preallocate, so no shifting or growing is needed
    sum_ = 0
    for i, element in enumerate(reversed(ls)):
        sum_ += element
        nls[-(i + 1)] = sum_  # fill the result from the back
    return nls
Appendix: Further timing
I performed some extended timings to generate a visual presentation of how the function runtimes scale with the list length.
[Plot: function runtimes vs. list length]
As expected, the original implementation scales very badly with the size of the input. This becomes even more obvious if you look at the plot with linear axis scale on the left (click for full resolution).
While creating the test routine I also came to the conclusion that NumPy loses against itertools mainly because it has to convert the data from Python's format into its own. If you repeat the test above with input that is already a NumPy array, NumPy steals the lead from the itertools implementation.
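Such a variant, assuming the input is already a NumPy array and the output may stay one, could look like this:

import numpy as np

def parts_sums_np_native(arr):
    # arr is assumed to be a NumPy array; no Python <-> NumPy conversion needed
    return np.cumsum(arr[::-1])[::-1]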
[Plot: function runtimes vs. list length for NumPy array input]