
Suppose I have a numpy array

a = np.array([1, 100, 123, -400, 85, -98])

And I want to limit each value between -100 and 90. So basically, I want the numpy array to be like this:

a = np.array([1, 90, 90, -100, 85, -98])

I know this can be done through iterating over the numpy array, but is there any other efficient method to carry out this task?

5 Comments

  • np.clip / np.ndarray.clip Commented Mar 8, 2019 at 15:24
  • hi! Is there a reason why you do not want to use a list comprehension? Commented Mar 9, 2019 at 0:08
  • @jeannej I think it is inefficient in terms of speed Commented Mar 10, 2019 at 16:16
  • @ѕняєєѕιиgнι in fact it is the opposite; I will post an answer that demonstrates that Commented Mar 11, 2019 at 17:08
  • There are useful comparisons among the methods in this link. Commented Dec 25, 2021 at 23:08

2 Answers


There are several ways of doing this. First, using the numpy function proposed by Sridhar Murali:

import numpy as np

a = np.array([1, 100, 123, -400, 85, -98])
np.clip(a, -100, 90)  # -> array([1, 90, 90, -100, 85, -98])
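A note that is not part of the original answer: np.clip returns a new array and leaves a unchanged; if you want the clipped values written back into a (as the comparison approach below does), it also accepts an out argument:

np.clip(a, -100, 90, out=a)  # clips in place, a becomes [1, 90, 90, -100, 85, -98]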

Second, using numpy array comparison:

a = np.array([1, 100, 123, -400, 85, -98])
a[a > 90] = 90      # cap values above the upper bound, in place
a[a < -100] = -100  # cap values below the lower bound, in place

Third, if a numpy array is not required for the rest of your code, using a list comprehension:

a = [1, 100, 123, -400, 85, -98]
a = [-100 if x < -100 else 90 if x > 90 else x for x in a]
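A small aside that is not part of the original answer: the same comprehension can also be written with the built-in min and max, which some find easier to read:

a = [max(-100, min(90, x)) for x in a]  # clips each x into [-100, 90]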

They all give the same result:

a = [1, 90, 90, -100, 85, -98]

As for coding style, I would prefer numpy comparison or a list comprehension, as they state clearly what is done, but it is really up to you. As for speed, with timeit.repeat on 100000 repetitions I get, on average, from best to worst:

  1. 4.8e-3 sec for list comprehension
  2. 1.8e-1 sec for numpy array comparison
  3. 2.7e-1 sec for np.clip function

Clearly, if an array is not necessary afterwards, a list comprehension is the way to go. And if you need an array, direct comparison is almost twice as efficient as the clip function, while also being more readable.
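For reference, here is a minimal sketch of how such a comparison can be reproduced with timeit.repeat; the exact statements, the number of runs, and the copy taken before the in-place comparison are assumptions, since the answer does not show its benchmark code, and absolute timings will differ per machine:

import timeit

setup = ("import numpy as np; "
         "a = np.array([1, 100, 123, -400, 85, -98]); "
         "lst = [1, 100, 123, -400, 85, -98]")
stmts = {
    "list comprehension": "[-100 if x < -100 else 90 if x > 90 else x for x in lst]",
    "numpy array comparison": "b = a.copy(); b[b > 90] = 90; b[b < -100] = -100",
    "np.clip function": "np.clip(a, -100, 90)",
}
for name, stmt in stmts.items():
    # total time of the best of 3 runs, each executing the statement 100000 times
    best = min(timeit.repeat(stmt, setup=setup, number=100000, repeat=3))
    print(f"{best:.1e} sec for {name}")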


5 Comments

I don't get why this has been chosen as the best answer while being misleading. How is direct comparison "more efficient" than clip if you report that the latter takes less time to execute? And no, list comprehensions are cool syntactic sugar but not the way to go if you have large arrays, where vectorization is key to gaining massive speedups. In your timings you just got lucky because the arrays were very small to begin with.
@BS. I think you missed that 4.8e-3 sec = 0.0048 sec and 1.8e-1 sec = 0.18 sec... So in my test case, list comprehension does take much less time to execute. And of course, time comparison depends on your data, computer and so on, so yes the results can be different.
Yet I see "2.7e-1 sec for numpy array comparison". But anyway, the results do not hold on any modern CPU if you work with large arrays.
@BS. I wouldn't know, as I haven't tested it; I only worked with the array given in the question. Please feel free to ask about alternatives for large arrays in another question! Anyway, thank you for spotting that I had swapped the computation times for the last two; I have updated my post.
@jeannej I too feel that the benchmark is a bit unfair. As you know, it is common to pose questions with small reproducible examples, while the real problem may be of a larger scale. Also, your comparison may mislead future readers of this page who do have larger problem sizes. On my machine, np.clip is 250 times faster than the list comprehension for an array with one million elements. As a suggestion, you could run the benchmark for different array sizes and say for which ranges it is practical to use a list comprehension.

I think the easiest way for you to get the result is to use the clip function from numpy.

import numpy as np

a = np.array([1, 100, 123, -400, 85, -98])
np.clip(a, -100, 90)  # -> array([1, 90, 90, -100, 85, -98])

