I've reached a stage where my arrays have become massive and a single function call takes about two days to compute.
I am working on image processing, using kmeans and Gaussian mixture models via fitgmdist.
I have a workstation with NVIDIA Tesla GPUs, which are on the supported list, and I would like to use their processing power to speed up my work.
From the documentation, I understand that in order to use the GPU-enabled functions, all I have to do is pass the array that is fed to those functions to the GPU first. My current CPU code is:
model_feats = get_feats(all_images);   % feature matrix, one observation per row
cInd = kmeans(model_feats, gaussians, 'EmptyAction','singleton', 'MaxIter',1000);
gmm{i} = fitgmdist(model_feats, 128, 'Options',statset('MaxIter',1000), ...
    'CovarianceType','diagonal', 'SharedCovariance',false, 'RegularizationValue',0.01, 'Start',cInd);
Almost all of my processing time is spent in these two functions. So if I want to use the GPU cores, is all I have to do to pass the input through the gpuArray function? For example, the code above would become:
temp_feats = get_feats(all_images);
model_feats = gpuArray(temp_feats);    % move the feature matrix onto the GPU
cInd = kmeans(model_feats, gaussians, 'EmptyAction','singleton', 'MaxIter',1000);
gmm{i} = fitgmdist(model_feats, 128, 'Options',statset('MaxIter',1000), ...
    'CovarianceType','diagonal', 'SharedCovariance',false, 'RegularizationValue',0.01, 'Start',cInd);
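If that works, I assume I will also need gather to copy results back into host memory before any later CPU-only steps; something like this (my assumption, untested):

idx = gather(cInd);   % copy the cluster assignments back from the GPU to host memory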
Will this work? And more generally, will any function run on the GPU simply by passing its input array through gpuArray first?
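For context, the basic workflow I am going by is the one from the gpuArray documentation, illustrated here with fft, which I know has a gpuArray overload:

A = gpuArray(rand(1e4, 1));   % transfer the data to the GPU
B = fft(A);                   % fft is overloaded for gpuArray, so it executes on the GPU
C = gather(B);                % copy the result back to host memory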
P.S. Sorry I have to ask here rather than just trying it myself, but I do not currently have access to the workstation, although I can request it. Before requesting access, I wanted to make sure my script will work with gpuArray.
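Once I do get access, my plan is to run a quick sanity check first to confirm that MATLAB sees the Tesla cards (just my intended check, assuming the Parallel Computing Toolbox is installed):

gpuDeviceCount   % number of CUDA devices MATLAB can see
d = gpuDevice    % select and display the default GPU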