The general solution to making a Python function work on both Numpy arrays and scalar values is np.vectorize. There should be practically no performance penalty when vectorizing a function that operates on scalars,1 though if you try to vectorize a function that is already vectorized (or built entirely from vectorized operations), you will definitely see a slowdown, since np.vectorize is essentially a Python-level loop.
Vectorizing also lets you handle higher-dimensional arrays automatically, rather than just 1-d arrays.
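For illustration, here is a minimal sketch (the function names and sample values are made up) of a scalar-only function being vectorized so that it accepts scalars, lists, and arrays of any shape:

import numpy as np

def relu_scalar(x):
    """Works only on a single number: the Python if fails on arrays."""
    return x if x > 0 else 0.0

relu = np.vectorize(relu_scalar)

print(relu(-3.0))                            # scalar input works: 0.0
print(relu([1.0, -2.0, 3.0]))                # [1. 0. 3.]
print(relu(np.arange(6).reshape(2, 3) - 2))  # 2-d input handled automatically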
But if you really don't want to vectorize, I think there are a couple of clean-ish options you could use.
The technique I would recommend in this scenario is similar to what you're already doing, but avoids wrapping Python lists in an extra array layer:
if np.ndim(x) == 0:
    x = np.array([x])
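As a usage sketch (the function and values here are hypothetical), the check typically sits at the top of the function so the rest of the body can assume a 1-d+ array:

import numpy as np

def normalize(x):
    """Scale the values to the range [0, 1]; accepts a scalar or an array."""
    if np.ndim(x) == 0:
        x = np.array([x])       # promote a bare scalar to a 1-element array
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return x if span == 0 else (x - x.min()) / span

print(normalize(5))          # [5.]
print(normalize([2, 4, 6]))  # [0.  0.5 1. ]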
If you're mixing vectorized operations and non-vectorized functions (i.e. you want to convert Python lists into arrays), then something even more like what you're already doing works:
if not isinstance(x, np.ndarray):
    x = np.array(x, ndmin=1)
This is how this sort of type-fixing is normally done in Python: by checking the property/type of the provided value and adjusting it if necessary. If you end up doing the exact same check often enough, you could even make it a function.2
def as_array(x):
    """Convert a value to a Numpy array (1d+) if it isn't one already."""
    return x if isinstance(x, np.ndarray) else np.array(x, ndmin=1)
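A quick usage sketch (the sample values are arbitrary):

import numpy as np

# assumes the as_array helper defined above is in scope
print(as_array(3.5))         # [3.5]
print(as_array([1, 2, 3]))   # [1 2 3]
a = np.arange(4)
print(as_array(a) is a)      # True -- existing arrays pass through untouched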
Another option, if you only want to output 1-d arrays, is to use np.asarray(y).flat or np.ravel(y). These will let you view both 0-dimensional arrays (as created by np.asarray(scalar)) and higher-dimensional arrays as one-dimensional, so you can process them both with a regular for loop.
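A small sketch of that pattern (the function name and values are made up):

import numpy as np

def describe(y):
    """Print every element, whether y is a scalar, a list, or an n-d array."""
    for value in np.ravel(y):   # np.asarray(y).flat works the same way here
        print(value)

describe(7)                # prints 7
describe([1.5, 2.5])       # prints 1.5, then 2.5
describe(np.ones((2, 2)))  # prints 1.0 four times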
As noted in a comment, np.array(y, ndmin=1, copy=False) is also an option. Unlike the flat/ravel approach, this won't flatten higher-dimensional arrays into a 1-d view; it leaves their shape alone, which might be preferable depending on your specific use case.
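A brief sketch of that shape behavior (note that NumPy 2.0 changed the meaning of copy=False, so the copy-avoiding spelling depends on your NumPy version):

import numpy as np

# ndmin=1 promotes scalars but leaves existing shapes alone.
print(np.array(5.0, ndmin=1).shape)              # (1,)
print(np.array([1, 2, 3], ndmin=1).shape)        # (3,)
print(np.array(np.ones((2, 3)), ndmin=1).shape)  # (2, 3) -- not flattened

# To avoid copying arrays that already are arrays, pass copy=False on
# NumPy < 2.0, or copy=None on NumPy >= 2.0 (where copy=False instead
# means "never copy, raise an error if a copy is needed").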
And finally, there's one last option: don't handle scalar values at all. It's marginally less convenient, but in most Python code the same function doesn't work interchangeably on lists and numbers anyway, so it often doesn't make sense to accept scalar inputs in the first place. It takes a little more effort to remember to wrap any scalar inputs in brackets at the call site, but it does keep the code simpler overall.
1 And even less if you avoid unnecessary re-vectorization, such as by using @np.vectorize as a decorator.
2 If you really wanted to, you could even make a decorator that automatically performs this conversion, but that path leads to code that is much more annoying to read.
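If you're curious anyway, a minimal sketch of such a decorator could look like this (the decorator name and example function are made up):

import functools
import numpy as np

def accepts_scalars(func):
    """Promote the first argument to a 1-d+ array before calling func."""
    @functools.wraps(func)
    def wrapper(x, *args, **kwargs):
        if not isinstance(x, np.ndarray):
            x = np.array(x, ndmin=1)
        return func(x, *args, **kwargs)
    return wrapper

@accepts_scalars
def double(x):
    return x * 2

print(double(3))       # [6]
print(double([1, 2]))  # [2 4]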
Comments:

np.atleast_1d? Though I believe the Python code for that includes some sort of if. np.array also takes some sort of ndmin parameter.

np.array(..., ndmin=1, ...) can also include copy=None to avoid unnecessary copies. Read the docs (as I just did) :)

The if lines in your code don't hurt run-time efficiency. It's the loops over large arrays that you want to be careful with. And unfortunately, with quad there isn't much you can do to avoid that.
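For completeness, here is a quick sketch of the np.atleast_1d suggestion from the comments (sample values are arbitrary):

import numpy as np

# np.atleast_1d does the same scalar promotion in a single call,
# and returns 1-d and higher-dimensional arrays unchanged.
print(np.atleast_1d(4.0))                    # [4.]
print(np.atleast_1d([1, 2, 3]))              # [1 2 3]
print(np.atleast_1d(np.ones((2, 2))).shape)  # (2, 2)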