127

I'm trying to multiply each of the terms in a 2D array by the corresponding term in a 1D array. This is very easy if I want to multiply every column by the 1D array, since that is what numpy.multiply does by default. But I want to do the opposite: multiply each term in the row. In other words, I want to multiply:

[1,2,3]   [0]
[4,5,6] * [1]
[7,8,9]   [2]

and get

[0,0,0]
[4,5,6]
[14,16,18]

but instead I get

[0,2,6]
[0,5,12]
[0,8,18]

Does anyone know if there's an elegant way to do that with numpy? Thanks a lot, Alex

3 Comments

  • Ah I figured it out just as I submitted the question. First transpose the square matrix, multiply, then transpose the answer. Commented Aug 29, 2013 at 22:56
  • Better to transpose the row to a column matrix then you don't have to re-transpose the answer. If A * B you'd have to do A * B[...,None] which transposes B by adding a new axis (None). Commented Aug 30, 2013 at 2:16
  • Thanks, that's true. The problem is when you have a 1D array calling .transpose() or .T on it doesn't turn it into a column array, it leaves it as a row, so as far as I know you have to define it as a column right off the bat. Like x = [[1],[2],[3]] or something. Commented Sep 3, 2013 at 19:59
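As the last comment notes, `.T` on a 1-D array is a no-op, so you can't make a column out of it by transposing. A short sketch illustrating the difference:

```python
import numpy as np

b = np.array([0, 1, 2])

# .T on a 1-D array is a no-op: the shape stays (3,)
print(b.T.shape)

# Adding an axis turns it into a column of shape (3, 1)
print(b[:, None].shape)

# Or define it as a column right off the bat
col = np.array([[0], [1], [2]])
print(col.shape)
```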

8 Answers

157

Normal multiplication like you showed:

>>> import numpy as np
>>> m = np.array([[1,2,3],[4,5,6],[7,8,9]])
>>> c = np.array([0,1,2])
>>> m * c
array([[ 0,  2,  6],
       [ 0,  5, 12],
       [ 0,  8, 18]])

If you add an axis, it will multiply the way you want:

>>> m * c[:, np.newaxis]
array([[ 0,  0,  0],
       [ 4,  5,  6],
       [14, 16, 18]])

You could also transpose twice:

>>> (m.T * c).T
array([[ 0,  0,  0],
       [ 4,  5,  6],
       [14, 16, 18]])

1 Comment

With the new-axis method it is possible to multiply two 1D arrays and generate a 2D array, e.g. [a,b] op [c,d] -> [[a*c, b*c], [a*d, b*d]].
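That outer-product behavior can be checked with a short sketch (the variable names here are illustrative):

```python
import numpy as np

r = np.array([1, 2])  # plays the role of [a, b]
c = np.array([3, 4])  # plays the role of [c, d]

# Broadcasting a row against a column yields the full outer product
outer = r * c[:, None]
print(outer)  # [[a*c, b*c], [a*d, b*d]] = [[3, 6], [4, 8]]
```

This is the same result as `np.outer(c, r)`.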
84

I've compared the different options for speed and found that – much to my surprise – all options (except diag) are equally fast. I personally use

A * b[:, None]

(or (A.T * b).T) because it's short.

[Plot: runtime vs. array size for each kernel; all curves except diag_dot roughly coincide]


Code to reproduce the plot:

import numpy
import perfplot


def newaxis(data):
    A, b = data
    return A * b[:, numpy.newaxis]


def none(data):
    A, b = data
    return A * b[:, None]


def double_transpose(data):
    A, b = data
    return (A.T * b).T


def double_transpose_contiguous(data):
    A, b = data
    return numpy.ascontiguousarray((A.T * b).T)


def diag_dot(data):
    A, b = data
    return numpy.dot(numpy.diag(b), A)


def einsum(data):
    A, b = data
    return numpy.einsum("ij,i->ij", A, b)


perfplot.save(
    "p.png",
    setup=lambda n: (numpy.random.rand(n, n), numpy.random.rand(n)),
    kernels=[
        newaxis,
        none,
        double_transpose,
        double_transpose_contiguous,
        diag_dot,
        einsum,
    ],
    n_range=[2 ** k for k in range(13)],
    xlabel="len(A), len(b)",
)
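perfplot is a third-party package; if it isn't installed, the standard-library timeit gives a rough comparison at a single size (a sketch — the size n and loop count are arbitrary choices):

```python
import timeit
import numpy as np

n = 200
A = np.random.rand(n, n)
b = np.random.rand(n)

kernels = {
    "newaxis": lambda: A * b[:, None],
    "double_transpose": lambda: (A.T * b).T,
    "einsum": lambda: np.einsum("ij,i->ij", A, b),
    "diag_dot": lambda: np.dot(np.diag(b), A),
}

for name, fn in kernels.items():
    per_loop = timeit.timeit(fn, number=100) / 100
    print(f"{name:>17}: {per_loop * 1e6:.1f} µs")
```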

1 Comment

(A.T * b).T is equal to b.T * A, you save a transpose.
18

You could also use matrix multiplication (aka dot product):

import numpy

a = [[1,2,3],[4,5,6],[7,8,9]]
b = [0,1,2]
c = numpy.diag(b)  # dense diagonal matrix built from b

numpy.dot(c, a)

Which is more elegant is probably a matter of taste.

3 Comments

dot is really overkill here. You're just doing unnecessary multiplication by 0 and additions to 0.
this might also trigger memory issues in case you want to multipy an nx1 vector to an nxd matrix where d is larger than n.
Downvoting as this is slow and uses a lot of memory when creating the dense diag matrix.
17

Yet another trick (as of v1.6)

import numpy as np

A = np.arange(1, 10).reshape(3, 3)
b = np.arange(3)

np.einsum('ij,i->ij', A, b)

I'm proficient with numpy broadcasting (newaxis), but I'm still finding my way around this new einsum tool, so I had to play around a bit to find this solution.

Timings (using IPython %timeit):

einsum:    4.9 µs
transpose: 8.1 µs
newaxis:   8.35 µs
dot-diag:  10.5 µs

Incidentally, changing the i to a j, np.einsum('ij,j->ij',A,b), produces the matrix that Alex does not want. And np.einsum('ji,j->ji',A,b) does, in effect, the double transpose.
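The three index patterns can be compared side by side (a quick sketch):

```python
import numpy as np

A = np.arange(1, 10).reshape(3, 3)
b = np.arange(3)

rows = np.einsum('ij,i->ij', A, b)  # scales rows, same as A * b[:, None]
cols = np.einsum('ij,j->ij', A, b)  # scales columns, same as plain A * b
alt  = np.einsum('ji,j->ji', A, b)  # relabelled indices, also scales rows

print(rows)  # [[0 0 0], [4 5 6], [14 16 18]]
```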

5 Comments

If you time this on a computer with arrays large enough that it takes at least a few milliseconds, and post the results here along with your relevant system information, it would be much appreciated.
With a larger array (100x100) the relative numbers are about the same. einsum (25 µs) is twice as fast as the others (dot-diag slows down more). This is np 1.7, freshly compiled with 'libatlas3gf-sse2' and 'libatlas-base-dev' (Ubuntu 10.4, single processor). timeit gives the best of 10000 loops.
This is a great answer and I think it is the one that should have been accepted. However, the code written above does, in fact, give the matrix Alex was trying to avoid (on my machine). The one hpaulj said is wrong is actually the right one.
The timings are misleading here. dot-diag really is far worse than the other three options, and einsum isn't faster than the others either.
@NicoSchlömer, my answer is nearly 5 yrs old, and many numpy versions back.
1

For those lost souls on google: using numpy.expand_dims then numpy.repeat will work, and will also work in higher-dimensional cases (e.g. multiplying a shape (10, 12, 3) array by a shape (10, 12) one).

>>> import numpy
>>> a = numpy.array([[1,2,3],[4,5,6],[7,8,9]])
>>> b = numpy.array([0,1,2])
>>> b0 = numpy.expand_dims(b, axis = 0)
>>> b0 = numpy.repeat(b0, a.shape[0], axis = 0)
>>> b1 = numpy.expand_dims(b, axis = 1)
>>> b1 = numpy.repeat(b1, a.shape[1], axis = 1)
>>> a*b0
array([[ 0,  2,  6],
       [ 0,  5, 12],
       [ 0,  8, 18]])
>>> a*b1
array([[ 0,  0,  0],
       [ 4,  5,  6],
       [14, 16, 18]])
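For the higher-dimensional case mentioned above, plain broadcasting with a trailing axis gives the same result without materialising the repeated array (a sketch):

```python
import numpy as np

a = np.random.rand(10, 12, 3)
b = np.random.rand(10, 12)

# b[..., None] has shape (10, 12, 1) and broadcasts over the last axis of a
result = a * b[..., None]
print(result.shape)  # (10, 12, 3)
```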

Comments

1

You need to transform the row array into a column array, which transpose doesn't do for a 1D array. Use reshape instead:

>>> import numpy as np
>>> a = np.array([[1,2,3],[4,5,6],[7,8,9]])
>>> b = np.array([0,1,2])
>>> a * b
array([[ 0,  2,  6],
       [ 0,  5, 12],
       [ 0,  8, 18]])

with reshape:

>>> a * b.reshape(-1,1)
array([[ 0,  0,  0],
       [ 4,  5,  6],
       [14, 16, 18]])

Comments

0

In short

It's easy to scale the rows, or the columns, of a matrix using a diagonal matrix and matrix multiplication.

import numpy as np
M = np.array([[1,2,3],
              [4,5,6],
              [7,8,9]])

# Pre-multiply by a diagonal matrix to scale rows
C = np.diag([0,1,2]) # Create a diagonal matrix
R = C @ M

# For the related scaling of columns, change the order of the product
# C = np.diag([0,1,2])
# R = M @ C

M, C, R values:

[[1 2 3]   [[0 0 0]   [[ 0  0  0]
 [4 5 6]    [0 1 0]    [ 4  5  6]
 [7 8 9]]   [0 0 2]]   [14 16 18]]

There are variants involving the product of a vector by a matrix plus transposes, but the solution above is as simple as it gets for scaling the rows, and by inverting the order of the multiplication (R = M @ C) it scales the columns as well, regardless of whether the matrix is square.
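As noted, the same idea scales the columns of a non-square matrix by post-multiplying; a quick sketch:

```python
import numpy as np

M = np.array([[1, 2, 3],
              [4, 5, 6]])   # 2x3, not square

C = np.diag([0, 1, 2])      # one coefficient per column
R = M @ C                   # post-multiply to scale the columns
print(R)  # rows [0 2 6] and [0 5 12]
```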

Explanation

Refresher: The dot product of two vectors is a single number, the sum of the individual element-wise products:

( a b c ) . ( d e f ) = ad + be + cf

Matrix multiplication involves dot products of a row by a column, and the result depends on the order of the matrices (matrix multiplication is not commutative).

| a b |   | x y |   | ax+bz ay+bt |
| c d | x | z t | = | cx+dz cy+dt |

Note how element (row 2, col 1) of the result is the dot product of row 2 of the first matrix by col 1 of the second matrix.

Thus your problem is to find a matrix of coefficients which performs the correct dot products, and select in which order the matrices must be multiplied.

If you think about the result above, setting elements not on the diagonal to zero in one matrix simplifies the dot products, and results in a scaling of the other matrix either by rows or by columns. E.g

| a 0 |   | x y |   | ax+0z ay+0t |   | ax ay |
| 0 d | x | z t | = | 0x+dz 0y+dt | = | dz dt |

The result is the second matrix rows have been scaled respectively by a and d, which are the values on the diagonal of the first matrix. Similarly:

| x y |   | a 0 |   | ax+0y 0x+dy |   | ax dy |
| z t | x | 0 d | = | az+0t 0z+dt | = | az dt |

The result is the first matrix columns have been scaled by the diagonal elements of the second matrix.

The change in the multiplication order seen above is explained by the fact that you can get the same result by pre-multiplying the transpose of the matrix and taking the transpose of the result, which is linked to your attempt to use transposes. This operation corresponds to R = (C @ M.T).T. On the other hand, the matrix transpose has the property (AB).T = (B.T)(A.T). Hence R = M @ C.T, which equals R = M @ C, because C is a diagonal matrix and its transpose is itself.

Thus back to your problem: You want to scale the rows, thus set a diagonal matrix with your coefficients and pre-multiply your matrix by the diagonal matrix:

Diagonal matrix:

coeffs = [0,1,2]
C = np.diag(coeffs)

Pre-multiply:

M = np.array([[1,2,3],
              [4,5,6],
              [7,8,9]])
R = C @ M # operator @ is a shortcut for matrix multiplication

Comments

-4

Why don't you just do

>>> m = np.array([[1,2,3],[4,5,6],[7,8,9]])
>>> c = np.array([0,1,2])
>>> (m.T * c).T

??

1 Comment

That exact approach is already shown in the accepted answer, I don't see how this adds anything.
