Difference between CUDA and MPS?

Why are we using MPS on macOS instead of CUDA?

Wired Wisdom

--

I would like to keep this blog as short and sweet as possible for those who wonder why we use MPS instead of CUDA.

Source — https://techcrunch.com/2020/07/22/adding-an-external-gpu-to-your-mac-is-probably-a-better-upgrade-option-than-getting-a-new-one/

What is the difference between CPUs and GPUs?

CPUs and GPUs are both silicon-based microprocessors: computing engines that process data.

However, they are built for different purposes.

A CPU is made of millions of transistors, has a handful of powerful cores with multiprocessing capabilities, and acts as the brain of the computer. A GPU, on the other hand, has a large number of smaller, specialized cores that work together, so a task that can be broken into pieces gets processed across many cores at once.

Why do we need GPUs in Machine Learning and Deep Learning?

For small computations, GPUs and CPUs make little difference: there is nothing for the extra cores to speed up, and there is overhead in transferring data from the CPU to the GPU.

But in cases like deep learning, where a lot of parallelism can be exploited, GPUs make a big difference. A rough sketch of the kind of workload we mean is shown below.
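This is not a benchmark, just an illustrative sketch: a large matrix multiplication is exactly the sort of work that can be split across thousands of GPU cores, while a tiny one is dominated by overhead. The `device` argument is left as a parameter so the same snippet can be pointed at a GPU once CUDA or MPS is set up, as described below.

```python
import time
import torch

def time_matmul(n: int, device: str = "cpu") -> float:
    """Time one n-by-n matrix multiplication on the given device."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    start = time.perf_counter()
    _ = a @ b
    return time.perf_counter() - start

# A tiny matmul gains nothing from extra cores; a large one is the
# kind of parallel workload where a GPU pulls ahead of a CPU.
print("64 x 64:    ", time_matmul(64))
print("4096 x 4096:", time_matmul(4096))
```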

What about CUDA?

Nvidia is a technology company that designs GPUs. CUDA is Nvidia's software platform (API) that sits between developers and Nvidia GPUs, so developers can write software that takes advantage of the GPU's massive parallelism.

Now, when you have an Nvidia GPU, you can download CUDA from Nvidia and make use of it. You don't have to know how the CUDA API works; libraries like PyTorch handle that for you.
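As a rough sketch of what "PyTorch handles it" looks like in practice (the tensor size here is arbitrary), this is roughly how code picks up CUDA when it is available:

```python
import torch

# Use the Nvidia GPU through CUDA if PyTorch can see one,
# otherwise fall back to the CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("Using GPU:", torch.cuda.get_device_name(0))
else:
    device = torch.device("cpu")
    print("CUDA not available, running on CPU")

# Tensors (and models) are moved to the device; the matmul below
# then runs on the GPU without any explicit CUDA API calls.
x = torch.randn(4096, 4096, device=device)
y = x @ x
print(y.device)
```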

What about MPS now?

PyTorch is now compatible with Apple silicon. So when you install the torch library on a MacBook running macOS 12.3 or later, your system can use the MPS (Metal Performance Shaders) API, just like CUDA, to make use of the GPU in the MacBook.
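The MPS backend is used the same way. A minimal sketch, assuming a recent PyTorch build on Apple silicon with macOS 12.3+, just swaps the device string:

```python
import torch

# Use the Apple GPU through the MPS backend if it is available,
# otherwise fall back to the CPU.
if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

x = torch.randn(1024, 1024, device=device)
y = x @ x          # runs on the MacBook's GPU when MPS is active
print(y.device)    # prints "mps:0" on Apple silicon
```

Everything else in your training code stays the same; only the device string changes from "cuda" to "mps".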

--
