I'm trying to make an Abelian sandpile using NumPy arrays in Python. The calculation speed is okay for smaller square matrices, but for larger ones it slows down significantly (a 200x200 matrix with 20000 initial sand particles takes up to 20-30 minutes). Is there any way to speed up / optimize the matrix calculation? The threshold value is 3.
The basic code right now is:
import numpy as np

n = 200
x = np.zeros((n, n))
m = n // 2           # index of the centre cell
x[m][m] = 100000     # drop all the sand on the centre cell
z = int(x[m][m])     # upper bound on the number of sweeps needed

def f(x):
    # One full sweep over the grid: every cell above the threshold of 3
    # topples, losing 4 grains and giving 1 to each of its four
    # neighbours; grains pushed off the edge are lost.
    count = 0
    for i in range(n):
        for j in range(n):
            if x[i][j] > 3:
                x[i][j] -= 4
                if i - 1 >= 0:
                    x[i - 1][j] += 1
                if i + 1 < n:
                    x[i + 1][j] += 1
                if j - 1 >= 0:
                    x[i][j - 1] += 1
                if j + 1 < n:
                    x[i][j + 1] += 1
            else:
                count += 1
    return x, count

for k in range(z):
    y, count = f(x)
    if count == n**2:   # no cell toppled this sweep, so the pile is stable
        break
print(y)
I've tried running a 500x500 matrix with 100,000 initial particles, but that took more than 6 hours.
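One idea I've started sketching is to topple every unstable cell in a single NumPy pass with array slicing, instead of visiting cells one at a time, and to let each cell topple as many times as it can per pass. This is only a rough sketch (the name relax and the variable t are mine), and it relies on the abelian property that the final stable state doesn't depend on the order of topplings. I'm not sure it's fully equivalent or how much faster it is:

import numpy as np

def relax(x):
    # Repeatedly topple all unstable cells at once until the grid is stable.
    # A cell holding x grains topples x // 4 times in one pass; each topple
    # sends one grain to each of the four neighbours, and grains that fall
    # off the edge are discarded (open boundary, as in the loop version).
    x = x.astype(np.int64)   # grain counts are whole numbers
    while True:
        unstable = x > 3
        if not unstable.any():
            return x
        t = np.where(unstable, x // 4, 0)   # topples per cell this pass
        x -= 4 * t
        x[:-1, :] += t[1:, :]    # grains sent to the cell above
        x[1:, :]  += t[:-1, :]   # grains sent to the cell below
        x[:, :-1] += t[:, 1:]    # grains sent to the cell on the left
        x[:, 1:]  += t[:, :-1]   # grains sent to the cell on the right

n = 200
x = np.zeros((n, n), dtype=np.int64)
x[n // 2, n // 2] = 100000
print(relax(x))

The slicing lines are meant to replace the four boundary checks in the loop version: grains leaving row 0, row n-1, column 0, or column n-1 simply aren't added back anywhere, which should match dropping them at the edges.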