Don't get into the bad habit of using np.append to build arrays!
Appending to a numpy array is expensive because there is no way to do it without creating a new copy of the array in memory (the same is true of np.concatenate, np.vstack, etc.). As the array grows, each copy takes longer and longer. A 1700-element 1D vector still isn't that big, but when you are dealing with millions of elements the copying will really hurt performance.
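For reference, the pattern to avoid looks roughly like this (a sketch modelled on the np.append call in the question; the uniform draw is just a stand-in for whatever value you actually compute):
import numpy as np

# the slow pattern: every np.append call allocates a brand-new array
# and copies all of the existing elements into it
newposition = np.array([1.0, 2.0, 3.0])
for _ in range(3, 1700):
    newposition = np.append(newposition, np.random.uniform(0, 0.25))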
A much better way is to create an empty array with the correct final size, then fill in the appropriate indices as you go along. For example:
import numpy as np

# create an empty array of the final size
newposition = np.empty(1700, dtype=float)
# fill in the first three values
newposition[:3] = 1, 2, 3
# fill in the rest
for ii in range(3, 1700):
    newposition[ii] = np.random.uniform(0, 0.25)
    # or whatever...
You haven't shown exactly how you build the rest of your newposition array, but in the silly example above it would be much quicker to use the size= argument to np.random.uniform to fill in all of the remaining values in one go:
newposition[3:] = np.random.uniform(0, 0.25, size=1697)
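If you want to convince yourself of the difference, a rough timeit comparison of the two approaches might look something like this (the exact numbers will depend on your machine):
import timeit
import numpy as np

def with_append(n=1700):
    # grow the array one element at a time (copies on every call)
    a = np.array([1.0, 2.0, 3.0])
    for _ in range(n - 3):
        a = np.append(a, np.random.uniform(0, 0.25))
    return a

def preallocated(n=1700):
    # allocate once, then fill in place
    a = np.empty(n, dtype=float)
    a[:3] = 1, 2, 3
    a[3:] = np.random.uniform(0, 0.25, size=n - 3)
    return a

print(timeit.timeit(with_append, number=100))
print(timeit.timeit(preallocated, number=100))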