
Does R memory management slow down vs using Matlab and NVIDIA Fermi Tesla/CUDA/C++?

(Last Updated On: September 9, 2010)


A very interesting discussion on how R’s memory management can slow things down, compared with other people’s experience using Matlab and C++ with NVIDIA’s Fermi Tesla CUDA hardware. Found on LinkedIn.

A few things regarding R’s memory management and garbage collection: although it is not required to preallocate memory before using it, code is substantially faster if you do so by setting the length, nrow, ncol, or dim attributes for an object first. Also, the ‘gc()’ function is there to do two things: (1) cause garbage collection to occur immediately and (2) display statistics on the amount of free memory. ‘object.size()’ and ‘memory.profile()’ can also be useful. Don’t forget to remove big objects that are no longer needed with ‘rm()’. As others have mentioned, there are versions of R (like those from REvolution Analytics) that ship with high-quality math libraries. NOTE: Chapter 11 (High-Performance R) of ‘R in a Nutshell’ contains more information on the topics above.
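For concreteness, here is a minimal base-R sketch of the points above; the helper names grow/prealloc and the size n are just for illustration, and exact timings will vary by machine.

n <- 1e5

# Slow: growing a vector re-allocates and copies it on every iteration
grow <- function(n) {
  x <- numeric(0)
  for (i in 1:n) x <- c(x, i^2)
  x
}

# Fast: preallocate by setting the length first, then fill in place
prealloc <- function(n) {
  x <- numeric(n)
  for (i in 1:n) x[i] <- i^2
  x
}

system.time(grow(n))      # noticeably slower
system.time(prealloc(n))  # much faster

big <- prealloc(n)
object.size(big)    # size of a single object in bytes
memory.profile()    # counts of allocated cells by type

rm(big)  # drop the reference to the large object...
gc()     # ...then force garbage collection and print free-memory statistics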
—-
Unfortunately rm() does not do a very good job of freeing up memory; I have seen it either not work or not work fast enough when the system is under stress.
This is just an observation and I am not able to prove it with a simple example. I have simply found R’s memory management unreliable under stress.

To get back to our discussion: I would guess that careful coding and a lot of simplification can lead to a robust application in R, unless you need a lot of speed, in which case it is simply a bad choice.
—————————
The discussion about selecting Matlab, Mathematica, R, etc. has interested me. As I said before, I’m a fisheries ecologist, so that is where my perspective arises. I must say that R is used (and quite successfully, I’ll add) for some quite computationally demanding work in my field. Since R is getting mixed feedback in this discussion thread, I would like to point out that the ‘Empirical Finance’ CRAN Task View would be a good place to quickly see what capability has already been developed. For specific questions about R performance in given circumstances (e.g. memory management, dealing with real-time data, etc.) it might be worth posting a question to the associated mailing list. You are sure to get an honest answer from someone who will say that this package is really good for this, but R is not the best choice for that…

R has interfaces to many other languages (e.g. C++, Java, Python, Perl, and even Matlab). I’ll also say that I’ve recently found quite significant improvement in R performance on heavy computations when using it on a 64-bit Linux machine. I’m consciously migrating my entire work stream to open source (Linux, R, OpenOffice, LaTeX, etc.) and am VERY happy with this process so far! After some initial learning, I’m rapidly getting much more knowledgeable and don’t intend to ever turn back towards underperforming (Microsoft) or overpriced (Apple) applications.

That’s my two cents, but I continue to be interested in these discussions in the financial world.
—–
• Off the record…
I found the GridR package to be an amazing solution for sending heavy calculations to a remote cluster.
Has anybody tried to use it with commercial clouds like Amazon EC2?
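Not GridR itself, but as a rough illustration of the same idea (shipping a heavy computation from R to remote worker nodes), here is a minimal sketch using the snow package; the host names and the workload are hypothetical, and GridR’s own API differs.

library(snow)

# Hypothetical remote hosts (e.g. EC2 instances reachable over SSH with R installed)
hosts <- c("node1.example.com", "node2.example.com")
cl <- makeCluster(hosts, type = "SOCK")

# A deliberately heavy toy task: simulate and summarise many random paths
heavy_task <- function(seed) {
  set.seed(seed)
  mean(replicate(1000, sum(rnorm(10000))))
}

# Run the task on the workers in parallel, one seed per job
results <- parLapply(cl, 1:20, heavy_task)

stopCluster(cl)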
—————————-
Coding in R and Matlab is on par; if you already know any programming language, picking up either is a snap.

Cost – R WINS – I have yet to come across a function in R that costs money. Free wins every time.

You must learn C++ regardless of your choice between R and Matlab.

R is more geared towards financial analytics, whereas Matlab spans a number of areas, including financial analytics, but Matlab’s toolboxes cost money.

Both seem to run somewhat slow. I know Matlab can take advantage of GPUs (NVIDIA only) through the AccelerEyes accelerator to speed up parallel processing, though again, this costs money.

If you’re dealing with just financial analytics, I would take R over Matlab. If you have the extra time and money, though, buy an NVIDIA Fermi Tesla card and learn all about the CUDA architecture; Fermi GPUs now support C++. If crunching parallel data as fast as you can is your game, there is no match for NVIDIA Fermi compute.
