Update: this function is now available as scipy.stats.qmc.discrepancy; it was ported to Cython and parallelised.
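For reference, a minimal usage sketch (assuming SciPy >= 1.7, where the qmc module was introduced; method='CD' selects the centered L2 discrepancy, the quantity computed by hand in the functions below):

import numpy as np
from scipy.stats import qmc

sample = np.random.random_sample((100, 2))
# 'CD' is the centered L2 discrepancy.
c2 = qmc.discrepancy(sample, method='CD')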
I have a function using some for loops and I wanted to improve its speed using NumPy. But this does not seem to do the trick, as the NumPy version appears to be about 2 times slower. Here is the code:
import numpy as np
import itertools
import timeit
def func():
    sample = np.random.random_sample((100, 2))

    disc1 = 0
    disc2 = 0
    n_sample = len(sample)
    dim = sample.shape[1]

    # First sum: one pass over the sample.
    for i in range(n_sample):
        prod = 1
        for k in range(dim):
            sub = np.abs(sample[i, k] - 0.5)
            prod *= 1 + 0.5 * sub - 0.5 * sub ** 2
        disc1 += prod

    # Second sum: every pair (i, j), hence the 100 x 100 loop.
    for i, j in itertools.product(range(n_sample), range(n_sample)):
        prod = 1
        for k in range(dim):
            a = 0.5 * np.abs(sample[i, k] - 0.5)
            b = 0.5 * np.abs(sample[j, k] - 0.5)
            c = 0.5 * np.abs(sample[i, k] - sample[j, k])
            prod *= 1 + a + b - c
        disc2 += prod

    c2 = (13 / 12) ** dim - 2 / n_sample * disc1 + 1 / (n_sample ** 2) * disc2
    return c2

def func_numpy():
    sample = np.random.random_sample((100, 2))

    disc2 = 0
    n_sample = len(sample)
    dim = sample.shape[1]

    # First sum vectorised over the whole sample at once.
    disc1 = np.sum(np.prod(1 + 0.5 * np.abs(sample - 0.5)
                           - 0.5 * np.abs(sample - 0.5) ** 2, axis=1))

    # Second sum still loops over every pair (i, j).
    for i, j in itertools.product(range(n_sample), range(n_sample)):
        disc2 += np.prod(1 + 0.5 * np.abs(sample[i] - 0.5)
                         + 0.5 * np.abs(sample[j] - 0.5)
                         - 0.5 * np.abs(sample[i] - sample[j]))

    c2 = (13 / 12) ** dim - 2 / n_sample * disc1 + 1 / (n_sample ** 2) * disc2
    return c2

print('Normal function time: ', timeit.repeat('func()', number=20, repeat=5, setup="from __main__ import func"))
print('numpy function time: ', timeit.repeat('func_numpy()', number=20, repeat=5, setup="from __main__ import func_numpy"))
The timing output is:
Normal function time:  [2.831496894999873, 2.832342429959681, 2.8009242500411347, 2.8075121529982425, 2.824807019031141]
numpy function time:  [5.154757721000351, 5.2011515340418555, 5.148996959964279, 5.095560318033677, 5.125199959962629]
What am I missing here? I know that the bottleneck is the itertools part because I have a 100x100x2 loop instead of a 100x2 loop before. Do you see another way to do that?
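For reference, here is a sketch of one way to remove the itertools double loop entirely by broadcasting the pairwise terms into an (n_sample, n_sample, dim) array (func_broadcast is just an illustrative name, and this assumes the O(n²) intermediate array fits in memory, which it easily does for n = 100):

import numpy as np

def func_broadcast():
    # Same centered L2 discrepancy, but the pairwise sum is built with
    # broadcasting so NumPy does the 100 x 100 x 2 work in C.
    sample = np.random.random_sample((100, 2))
    n_sample, dim = sample.shape

    abs_half = np.abs(sample - 0.5)  # computed once, reused everywhere
    disc1 = np.sum(np.prod(1 + 0.5 * abs_half - 0.5 * abs_half ** 2, axis=1))

    # Pairwise terms as an (n_sample, n_sample, dim) array.
    pairwise = (1 + 0.5 * abs_half[:, None, :] + 0.5 * abs_half[None, :, :]
                - 0.5 * np.abs(sample[:, None, :] - sample[None, :, :]))
    disc2 = np.sum(np.prod(pairwise, axis=2))

    return (13 / 12) ** dim - 2 / n_sample * disc1 + 1 / (n_sample ** 2) * disc2

The intermediate array here is only 100 x 100 x 2 floats; for much larger samples it might be worth chunking over i to bound memory.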


Suggestions so far: check x.shape along the way to ensure that the operations are done properly and that the output has the expected shape/size, and compute np.abs(sample - 0.5) once and re-use it instead of computing it twice. A sketch applying the second suggestion is below.
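For example, a small variant of func_numpy (a sketch only; func_numpy_cached is an illustrative name and the pair loop is kept) where np.abs(sample - 0.5) is computed once up front and re-used inside the loop:

import itertools
import numpy as np

def func_numpy_cached():
    # Like func_numpy, but abs_half is computed once and indexed inside the
    # loop instead of being recomputed for every pair (i, j).
    sample = np.random.random_sample((100, 2))
    n_sample, dim = sample.shape

    abs_half = np.abs(sample - 0.5)
    disc1 = np.sum(np.prod(1 + 0.5 * abs_half - 0.5 * abs_half ** 2, axis=1))

    disc2 = 0
    for i, j in itertools.product(range(n_sample), range(n_sample)):
        disc2 += np.prod(1 + 0.5 * abs_half[i] + 0.5 * abs_half[j]
                         - 0.5 * np.abs(sample[i] - sample[j]))

    return (13 / 12) ** dim - 2 / n_sample * disc1 + 1 / (n_sample ** 2) * disc2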