
AccelerEyes announced in December 2012 that it is working with MathWorks on GPU code for MATLAB and has discontinued its Jacket product:

http://blog.accelereyes.com/blog/2012/12/12/exciting-updates-from-accelereyes/

Unfortunately they do not sell Jacket licences anymore.

As far as I understand, the Jacket GPU Array solution based on ArrayFire was much faster than the gpuArray solution provided by MATLAB.

I started working with gpuArray, but I see that many functions are implemented poorly. For example, a simple

myArray(:) = 0 

is very slow. I have written some custom CUDA kernels, but the poorly implemented standard MATLAB functionality adds a lot of overhead, even when working with gpuArrays consistently throughout the code. I fixed some issues by replacing MATLAB code with hand-written CUDA code, but I do not want to reimplement the standard MATLAB functionality.
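For reference, a rough timing sketch of the pattern in question (the array size and variable names are illustrative, and wait() is used because recent Parallel Computing Toolbox releases execute some GPU operations asynchronously):

    g = gpuDevice;                           % currently selected GPU
    myArray = gpuArray.rand(4096);           % some existing 4096x4096 data on the GPU

    tic; myArray(:) = 0;                 wait(g); tIndexed = toc;  % indexed zero-assignment
    tic; myArray = gpuArray.zeros(4096); wait(g); tFresh   = toc;  % fresh allocation on the GPU

    fprintf('indexed assignment: %.4f s, fresh allocation: %.4f s\n', tIndexed, tFresh);

Where available, gputimeit is the more robust way to time GPU code; the tic/wait/toc pattern just keeps the sketch short.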

Another feature I am missing is sparse GPU matrices.

So my questions are:

How do I speed up the poorly implemented default GPU operations provided by MATLAB? In particular, how do I speed up sparse matrix operations in MATLAB using the GPU?
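For the sparse part of the question, one hedged workaround (until sparse gpuArrays exist) is to store the matrix as triplets and build the matrix-vector product from indexing and accumarray, assuming both are GPU-enabled in your release; the variable names below are illustrative:

    % y = A*x for a sparse A, using (row, column, value) triplets on the GPU
    [ri, ci, vi] = find(A);                  % extract the nonzeros once on the CPU
    ri = gpuArray(ri);  ci = gpuArray(ci);  vi = gpuArray(vi);
    xg = gpuArray(x);

    % multiply each nonzero by the matching entry of x, then sum the products per row
    yg = accumarray(ri, vi .* xg(ci), [size(A,1) 1]);
    y  = gather(yg);                         % copy back to the CPU only when needed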

  • Of course myArray(:) = 0 is slow - it's moving loads of zeros from the CPU to the GPU for no reason. That doesn't mean MATLAB GPU capabilities are implemented poorly, it means you need to know how to use them; try myArray = gpuArray.zeros(size(myArray)) instead. Commented Jun 5, 2013 at 16:48
  • Sam, myArray(:) = 0 should only move one integer from the CPU to the GPU - if implemented optimally. Using gpuArray.zeros() is even slower. For now I am using myArray = myArray - myArray which is faster - but still slow. I hope the Jacket functionality is coming with the next MATLAB release. Commented Jun 10, 2013 at 7:45
  • What size array are you trying to allocate and finding it slow? Note that in recent releases of Parallel Computing Toolbox, some operations execute asynchronously. Also, "a = a - a;" does not necessarily result in an array of all zeros, so I would avoid this pattern (hint: what if 'a' contained NaN or Inf? - see the short demonstration after these comments). (And rather contact The MathWorks with the details of your performance problem). Commented Jul 29, 2013 at 11:40
  • Here's what I've been able to pick up from the web (part 1): In 2011, MathWorks and AccelerEyes sued and counter-sued each other over intellectual property issues. MathWorks alleged patent infringement of their Parallel Computing Toolbox product by AccelerEyes' Jacket product [scribd.com/doc/59765193/MathWorks-v-AccelerEyes-et-al]. Commented Aug 5, 2013 at 17:07
  • I happened to be at a MATLAB seminar yesterday and asked Loren Shure and some other TMW people this question. They refused to comment, and the best I could get was that "there is something in the pipeline"... Commented Oct 11, 2013 at 18:03
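A quick illustration of the point raised above about a = a - a (sketch only; the values are arbitrary):

    a = gpuArray([1 NaN Inf]);
    b = a - a                                % gives [0 NaN NaN], not all zeros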

2 Answers


MATLAB does support CUDA-based GPUs. You have to access this functionality through the Parallel Computing Toolbox. Hope these two links also help:

Parallel Computing Toolbox Features

Key Features

  • Parallel for-loops (parfor) for running task-parallel algorithms on multiple processors
  • Support for CUDA-enabled NVIDIA GPUs
  • Full use of multicore processors on the desktop via workers that run locally
  • Computer cluster and grid support (with MATLAB Distributed Computing Server)
  • Interactive and batch execution of parallel applications
  • Distributed arrays and single program multiple data (spmd) construct for large dataset handling and data-parallel algorithms

MATLAB GPU Computing Support for NVIDIA CUDA-Enabled GPUs

Using MATLAB for GPU computing lets you accelerate your applications with GPUs more easily than by using C or Fortran. With the familiar MATLAB language you can take advantage of the CUDA GPU computing technology without having to learn the intricacies of GPU architectures or low-level GPU computing libraries.

You can use GPUs with MATLAB through Parallel Computing Toolbox, which supports:

  • CUDA-enabled NVIDIA GPUs with compute capability 2.0 or higher. For release R2014a and earlier, compute capability 1.3 is sufficient.
  • GPU use directly from MATLAB
  • Multiple GPUs on the desktop and computer clusters using MATLAB workers in Parallel Computing Toolbox and MATLAB Distributed Computing Server
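A minimal usage sketch of the gpuArray workflow described above (the sizes and operations are arbitrary):

    A = gpuArray(rand(2000));                % transfer a matrix to GPU memory
    B = fft(A) .* 2;                         % many built-ins run directly on gpuArrays
    C = gather(B);                           % copy the result back to host memory

    % element-wise code can also be fused into a single GPU kernel with arrayfun
    f = @(x, y) sqrt(x.^2 + y.^2);
    D = arrayfun(f, A, A);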

1 Comment

While these links may answer the question, you should avoid link-only answers and summarize or quote the articles, because links tend to decay over time.

I had the pleasure of attending a talk by John, the founder of AccelerEyes. They did not get the speedup by simply removing poorly written code and replacing it with code that saved a few bits here and there. Their speedup came mostly from exploiting the availability of cache and doing a lot of operations in GPU memory. MATLAB relied on transferring data between the GPU and CPU, if I remember correctly, and hence the speedup was crazy.
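To make the transfer point concrete, here is an illustrative sketch (the loop count and sizes are arbitrary) comparing a round-trip-per-iteration pattern with keeping the data resident on the GPU:

    g = gpuDevice;
    x = rand(4e6, 1);

    tic;                                     % pattern 1: host <-> GPU copy every step
    for k = 1:50
        xg = gpuArray(x);                    % host -> GPU
        xg = 2*xg + 1;
        x  = gather(xg);                     % GPU -> host
    end
    wait(g); tRoundTrip = toc;

    tic;                                     % pattern 2: transfer once, stay on the GPU
    xg = gpuArray(x);
    for k = 1:50
        xg = 2*xg + 1;                       % intermediates stay in GPU memory
    end
    x = gather(xg);
    wait(g); tResident = toc;

    fprintf('round-trip: %.3f s, GPU-resident: %.3f s\n', tRoundTrip, tResident);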

