Reddit Change Gives AMD Ryzen CPUs Almost 300% Boost in MathWorks MATLAB

When new hardware comes out, the big hardware companies usually send sites like Legit Reviews a sample to check out along with reviewer materials. For example, Intel typically includes a “what to expect” guide in their reviewer materials with some baseline performance numbers along with access to a few suggested workloads that might be of interest. No pressure is put on us to test with them, but we are given access to the material and can use it if we would like.

One new workload that Intel suggested we run was a MATLAB script that they said was fairly fast to complete. The tested functions include matrix factorizations, linear equation solving, and computation of singular values, which are commonly used in machine learning, geometric modeling, and scientific applications.

We started digging into it a bit and noticed that MATLAB uses the Intel Math Kernel Library (Intel MKL) by default on both AMD and Intel processors. During our search we ran across a post on Reddit showing that AMD performance can be greatly boosted by running a script that tells MATLAB to use AVX2 instructions on AMD processors. By forcing MATLAB onto this faster codepath, the gains from this simple change were said to range from 20% to 300%.
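
The widely reported Reddit fix boils down to setting Intel MKL's undocumented `MKL_DEBUG_CPU_TYPE` environment variable to `5` before MATLAB starts, so that MKL takes its AVX2 codepath regardless of the CPU vendor check. A minimal sketch of the idea in Python (the original fix is a batch file; the `matlab` launch command here is an assumption that depends on the install):

```python
import os
import subprocess  # used only in the commented-out launch below

# Build an environment with the undocumented MKL override set.
# Value "5" forces MKL onto its AVX2 codepath on non-Intel CPUs.
env = dict(os.environ, MKL_DEBUG_CPU_TYPE="5")
print(env["MKL_DEBUG_CPU_TYPE"])  # prints 5

# Launch MATLAB so it inherits the variable (executable name is an
# assumption and varies by platform/install):
# subprocess.run(["matlab"], env=env)
```

The key detail is that the variable must be set before MATLAB (and therefore MKL) loads; setting it from inside an already-running MATLAB session is too late.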

Maybe Intel PR didn’t know about this little detail, but it seemed like something we should steer clear of in the CPU benchmarking for our launch reviews. So, we went hunting for other workloads to run on the 10th Gen Intel Core i9-10980XE ‘Cascade Lake-X’ and 3rd Gen Ryzen Threadripper 3970X processors, as this one seemed a bit too hot to touch. A number of people reached out to us on social media, though, and wanted us to test MATLAB anyway to see what the deal was.

So, we downloaded the Intel-provided workload, as you can see from the screenshot above, and did some testing. The script that Intel provided to the media measures the individual elapsed times it takes MATLAB to complete nine different linear algebra functions, the same operations described above. The functions performed, in no particular order, are:

  • SVD – Singular Value Decomposition
  • Chol – Cholesky factorization
  • QR – QR Factorization
  • LU – LU Matrix Factorization
  • Inv – Matrix inverse
  • Pinv – Moore-Penrose pseudoinverse
  • FFT – Fast Fourier Transform of a vector
  • Solving a symmetric sparse linear system
  • Matrix Multiplication
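
Intel’s actual MATLAB script isn’t reproduced here, but the general shape of such a benchmark, timing each operation on a random matrix tic/toc style, can be sketched with Python/NumPy. The matrix size and the NumPy equivalents below are illustrative assumptions, not Intel’s parameters:

```python
import time
import numpy as np

def bench(name, fn):
    # Time one linear-algebra operation, like MATLAB's tic/toc.
    start = time.perf_counter()
    fn()
    print(f"{name:7s} {time.perf_counter() - start:.3f} s")

n = 500  # illustrative size; the real script likely uses larger matrices
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
spd = A @ A.T + n * np.eye(n)   # symmetric positive definite, for Cholesky
b = rng.standard_normal(n)

bench("svd",    lambda: np.linalg.svd(A))
bench("chol",   lambda: np.linalg.cholesky(spd))
bench("qr",     lambda: np.linalg.qr(A))
bench("inv",    lambda: np.linalg.inv(A))
bench("pinv",   lambda: np.linalg.pinv(A))
bench("fft",    lambda: np.fft.fft(b))
bench("solve",  lambda: np.linalg.solve(spd, b))
bench("matmul", lambda: A @ A)
# LU factorization and the sparse symmetric solve need SciPy
# (scipy.linalg.lu, scipy.sparse.linalg.spsolve) and are omitted here.
```

Since NumPy is also commonly linked against MKL, the same vendor-check behavior, and the same environment-variable workaround, can apply to a script like this one.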
