Other answers have provided valuable feedback. Another area for improvement outside of the algorithmic opportunities is to have your function return a generator rather than a list.
Beyond that, I'm going to make only minimal changes.
def better_prime_finder(lim):
    primes = list(range(2, lim + 1))
    for n in range(lim):
        try:
            p = primes[n]
        except IndexError:
            return
        yield p
        for i in range(2, lim):
            if i * p not in primes:
                continue
            primes.remove(i * p)
I have also had the function return when the IndexError is raised: once n has run past the end of the (shrinking) list, there is no point in repeatedly checking primes[0], since no useful work is being done by that.
By making these few small changes, we open up a whole world of flexibility. If, for instance, I want to compute primes up to 1,000 but only care about the first 20 primes greater than 10, I can do that using itertools and then consume the generator to create a list.
>>> import itertools
>>> list(
...     itertools.islice(
...         itertools.dropwhile(
...             lambda x: x <= 10,
...             better_prime_finder(1000)
...         ),
...         20
...     )
... )
[11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89]
The beauty of this, though, is that you don't necessarily need to build a list at all when consuming the generator; you could, for instance, print each prime directly as it is produced.
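To make that concrete, here is a self-contained sketch (it repeats the generator from above so it runs on its own) that prints each qualifying prime as it comes out, with no intermediate list:

```python
import itertools

def better_prime_finder(lim):
    primes = list(range(2, lim + 1))
    for n in range(lim):
        try:
            p = primes[n]
        except IndexError:
            return
        yield p
        for i in range(2, lim):
            if i * p not in primes:
                continue
            primes.remove(i * p)

# Print each prime as it is yielded; no list is ever materialized.
for p in itertools.islice(
    itertools.dropwhile(lambda x: x <= 10, better_prime_finder(1000)), 20
):
    print(p)
```

Because islice stops pulling from the generator after 20 items, the generator is simply abandoned at that point and never finishes sieving the full range.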
Now, because of the algorithmic issues, this still runs very slowly if we specify a limit of something like a million, but once it yields 89 it's not doing any further work.
We can apply this same generator approach even with something dramatically more efficient than your current algorithm, such as the Sieve of Eratosthenes.
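As a sketch of that idea, here is a generator-based Sieve of Eratosthenes exposing the same interface; the name sieve_prime_finder is mine, not from your code:

```python
def sieve_prime_finder(lim):
    """Yield the primes up to and including lim, lazily, using a boolean sieve."""
    if lim < 2:
        return
    is_prime = [True] * (lim + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, lim + 1):
        if is_prime[n]:
            yield n
            # Mark multiples of n as composite; smaller multiples were
            # already marked by smaller primes, so start at n * n.
            for multiple in range(n * n, lim + 1, n):
                is_prime[multiple] = False
```

It drops into the itertools pipeline above unchanged, and a caller that stops consuming early still avoids any work past the last prime it asked for.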