Tag Archives: accelerate

Accelerate your MATLAB algorithms and applications! Here are some tips for assessing code performance, working with System objects, generating C code, and more:

http://www.mathworks.com/company/newsletters/articles/accelerating-matlab-algorithms-and-applications.html?s_eid=PSM_4828

Learn more about how I accelerate my Matlab processes through my FREE newsletter

NOTE: I now post my TRADING ALERTS to my personal FACEBOOK ACCOUNT and TWITTER. Don't worry, I don't post stupid cat videos or what I eat!

How do you accelerate your MATLAB Algorithms and Applications? Watch this webinar

http://www.mathworks.com/company/newsletters/articles/accelerating-matlab-algorithms-and-applications.html?s_v1=54958434_1-M48KWA

Want to learn more about how I plan to use Matlab for my trading? Join my FREE newsletter

Talk on how to use C++, Matlab, and R to accelerate a CPU using FPGA/CUDA

Some important features of Fermi that are worth highlighting:

– True floating point – IEEE 754-2008 standard compliance, a 16-bit floating-point memory format, and fused multiply-add support
– ECC – a hard requirement these days to stave off SEUs; you can't have a bit flipping when dealing with a million-dollar transaction
– Switching between 32-bit and 64-bit addresses is streamlined
– Scratchpad, GPU, and system memory all reside in the same 64-bit address space, which makes compiling C++ much easier
– Atomic instructions stop other threads from overwriting a memory value mid-update – clean handshaking (see the sketch after this list)
– Kernels are allowed to overlap execution
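
To make the atomics point concrete, here is a minimal CUDA sketch of my own (not from the talk, and the kernel/variable names are just placeholders): a toy histogram where atomicAdd keeps concurrent threads from clobbering each other's updates to the same counter.

```cuda
// Minimal illustration (assumed example, not from the original post):
// atomicAdd serialises the read-modify-write on a shared counter, so two
// threads hitting the same bin cannot overwrite each other's update.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void histogram(const int *data, int n, int *bins, int num_bins)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        atomicAdd(&bins[data[i] % num_bins], 1);  // safe concurrent increment
    }
}

int main()
{
    const int n = 1 << 20, num_bins = 16;
    int *d_data, *d_bins;
    cudaMalloc(&d_data, n * sizeof(int));
    cudaMalloc(&d_bins, num_bins * sizeof(int));
    cudaMemset(d_bins, 0, num_bins * sizeof(int));
    cudaMemset(d_data, 0, n * sizeof(int));  // fill with real values in practice

    histogram<<<(n + 255) / 256, 256>>>(d_data, n, d_bins, num_bins);
    cudaDeviceSynchronize();

    int bins[num_bins];
    cudaMemcpy(bins, d_bins, sizeof(bins), cudaMemcpyDeviceToHost);
    printf("bin 0 holds %d samples\n", bins[0]);

    cudaFree(d_data);
    cudaFree(d_bins);
    return 0;
}
```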

I do agree with Alex about the lack of C++ constructs currently supported, but it's just a matter of time before that changes. This is all relatively new within the past five or so years, and it took FPGAs much longer to finally streamline their process. For those with vision, who can see where this is all heading (and has already headed, for that matter), it's best to be among the first on the train rather than left at the station. So if you have the 'time' and the '$' (I will argue that after all the required libraries you will need with Matlab, if you choose that route, you could have armed yourself with a serious Fermi device instead), it's in your best interest to learn how to program some of your exercises on a GPU, again if you have the time. That will differentiate you when applying for jobs: everyone can use Matlab, R, or code in C++, but not everyone can use the CUDA architecture to offload parallel processing.

For those in doubt, download Folding@home and see for yourself the advantage an NVIDIA compute device gives you over running the same work on a CPU (download both the CPU and GPU versions for comparison purposes):

Heck, code up some Monte Carlo simulations and set the number of simulations into the millions, or better yet, price out every option in the market via Black-Scholes or any binomial/trinomial method, and you will quickly see the performance advantage that massive parallelism gives the GPU over the CPU. It's almost an unfair contest.
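
For anyone who wants a starting point, here is a minimal CUDA sketch (my own illustration with made-up parameters, not code from this conversation) of a Monte Carlo European call pricer: each thread simulates one GBM path, and cranking n_paths into the millions is where the GPU-vs-CPU gap shows up.

```cuda
// Assumed illustration: one-step GBM Monte Carlo pricer for a European call.
// S0, K, r, sigma, T and the seed below are placeholder values.
#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

__global__ void mc_call(float S0, float K, float r, float sigma, float T,
                        int n_paths, unsigned long long seed, float *payoffs)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_paths) return;

    curandState state;
    curand_init(seed, i, 0, &state);

    // One normal draw per path for the terminal price, then the discounted payoff.
    float z  = curand_normal(&state);
    float ST = S0 * expf((r - 0.5f * sigma * sigma) * T + sigma * sqrtf(T) * z);
    payoffs[i] = expf(-r * T) * fmaxf(ST - K, 0.0f);
}

int main()
{
    const int n_paths = 1 << 22;               // ~4 million paths
    float *d_payoffs;
    cudaMalloc(&d_payoffs, n_paths * sizeof(float));

    mc_call<<<(n_paths + 255) / 256, 256>>>(100.0f, 100.0f, 0.05f, 0.2f, 1.0f,
                                            n_paths, 1234ULL, d_payoffs);
    cudaDeviceSynchronize();

    // Average on the host for brevity; a reduction kernel or Thrust would
    // normally do this on the device.
    float *h = new float[n_paths];
    cudaMemcpy(h, d_payoffs, n_paths * sizeof(float), cudaMemcpyDeviceToHost);
    double sum = 0.0;
    for (int i = 0; i < n_paths; ++i) sum += h[i];
    printf("MC call price ~ %f\n", sum / n_paths);

    delete[] h;
    cudaFree(d_payoffs);
    return 0;
}
```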

R is your best choice for the programs you outlined. It's free, open source, and quite powerful.

This was part of a LinkedIn group conversation.
