Tag Archives: parallel computing

Check out this awesome LIVE webinar on Parallel Computing with MATLAB

Learn how you can use Parallel Computing Toolbox and MATLAB Distributed Computing Server to speed up MATLAB applications by using the desktop and cluster computing hardware you already have. You will learn how minimal programming efforts can speed up your applications on widely available desktop systems equipped with multicore processors and GPUs, and how to continue scaling your speed up with a computer cluster.

https://go2.mathworks.com/parallel-computing-with-matlab-lwb-na-63463?s_v1=3773&elq_cid=3006217&elq=d07e472063e04267bafe241b4e0e83d8&elqCampaignId=1080

Join my FREE newsletter to see what other MATLAB webinars are coming

NOTE I now post my TRADING ALERTS into my personal FACEBOOK ACCOUNT and TWITTER. Don't worry as I don't post stupid cat videos or what I eat!

Do you use open source OpenCL with GPU and parallel computing for HFT: Matlab Simulink DotNet FSharp FPGA or GPU

I just entered a discussion with someone on this topic. Here is the latest from someone in my Meetup group:

( I HAVE REMOVED THE EXTRA INFO AS REQUESTED)

https://quantlabs.net/blog/2011/11/wow-opencl-looks-cool-for-hpc-and-parallel-computing-for-quant-development/

Here are a couple quick sites that are along the lines I was thinking:

http://www.tomshardware.com/reviews/best-workstation-graphics-card,3493-26.html

http://www.bittware.com/fpga-dsp-applications/financial

Our options are of course

1. FPGA via Simulink on Matlab thanks to Mr FPGA. Hey, where are you by the way?

2. F# out of box for GPU and parallelism

3. Matlab prototyping and discovery via GPU Computing support

Which one would you use? LQ may build onto OpenGL with Qt in C++ for Linux. What would you choose?

Join my FREE newsletter to learn more about HFT

Is F# ready to take on the world of Python and R for parallel computing and algo development, with .NET framework access?

From my Facebook users:

Bryan what do you think about this?

http://news.dice.com/2014/03/10/f-dramatically-gains-popularity/

“…Parallel CPU programming, scripting, and algorithmic development.”

Bryan Downing: hello anyone?

The F# Foundation carries testimonials such as one from Credit Suisse, which says the language is well-suited to the rapid development of mathematical models.

Very powerful with access to .NET. Sign me up!!

Join my FREE newsletter to see if we use it in the future

YouTube video demo on the limitations of the MATLAB Compiler and Parallel Computing Toolbox demos with GPU and CUDA

http://quantlabs.net/membership.htm

Video on Matlab query of parallel computing with RSI and Moving Average using High frequency trading HFT tick data

Question from a visitor:

Hi there, I watched your YouTube video, where you referred to the MathWorks algorithmic trading webinar. You said optimization takes a couple of seconds using parallel computing; however, on a quad-core machine with HFT data like you were describing, it would take several days to optimize using RSI or even a moving average.

Best answer from:

http://quantlabs.net/quant-member-benefits/hot-hft-high-frequency-trading-tips-and-tricks/

My answer is:

Watch MATLAB do parallel computing on a GPU very easily and quickly. Why use R, Hadoop, MapReduce, HBase, etc.?

If you watched this webinar, you would be asking yourself several questions:

  1. Why use R, which seems kind of immature compared to MATLAB?
  2. Even if you used this Revolution Analytics project with Hadoop, you still have so much to set up and manage compared to MATLAB’s Parallel Computing Toolbox. This R project is nowhere near complete compared to the maturity of MATLAB.

Regardless of cost, I still feel MATLAB makes you much more productive compared to setting up and managing hardware and underlying technologies like Hadoop/HBase/MapReduce with R. It just seems to me it would not add up compared to the productivity you get out of the box with MATLAB. Sometimes the cost of time needs to be factored into free open source technologies. Just my worthless two cents.

Parallel Computing with MATLAB in Computational Finance

http://www.mathworks.com/company/events/webinars/wbnr51891.html?id=51891&p1=801727294&p2=801727312

Big data, HPC, parallel computing, GPU, FPGA? FREE Intro for HFT with Linux, C++, R, Python for the future of automated trading!

This online course is fantastic! It gives an amazing intro to all the concepts and technologies you may want for building a highly scalable HFT platform. This is also the future of trading. The author sounds like a prof at TASC, which holds one of the biggest supercomputers on the planet. He even covers the hardware end, so it is an end-to-end discussion of all these important topics. It is really good!!

https://quantlabs.net/blog/2012/01/my-hpc-book-updated-for-quant-development-and-hft-high-frequency-trading/

P.S. Personally, I don’t see myself implementing this sort of thing until I get up and running with my future development on my Windows platforms.

WoW! OpenCL looks cool for HPC and Parallel computing for quant development

This is very cool. I just stumbled upon it again, though I have known about it for many months. It is an open standard for parallel computing and high performance computing. I came across a variety of videos from Intel on this:

http://software.intel.com/en-us/articles/vcsource-learn-videos/

This tutorial is also very helpful as a brief intro:

http://indico.cern.ch/getFile.py/access?sessionId=0&resId=0&materialId=0&confId=138427

You might see some OpenCL projects come from QuantLabs.net next year as we start to get more serious on the multitasking front for model/strategy development.

Keep your eyes peeled by joining our email newsletter to find out.

Quant development: Massive parallel computing (20,000 cores) vs. a small 1,600-core SMP cluster

What information would you need in order to make a reasoned assessment and recommendation between these two options?

By “SMP cluster” do you really mean a “cluster of SMPs”? Or do you mean a single shared-memory address space for hundreds of processors? If the latter, what systems are you considering, and how would you program them?

Are you really trying to ask if multi-threading is better than message passing at this scale?

Quant development: What languages, CUDA, and C++ libraries do scientists use for parallel computing?

Which parallel programming language should be preferred for parallel computing in molecular dynamics?

I am working on a supercomputer for molecular dynamics tools. I want to choose the best parallel programming language for molecular dynamics and molecule simulation. It should improve scalability and performance and minimize communication latency.

MPI is now and ever shall be.
Seriously: most used, most tested, most widely understood.
OpenMP is workable IF you plan to stay really small.

How about GPGPU (CUDA) programming using NVIDIA Fermi cards? I am currently using it for a CFD (Computational Fluid Dynamics) problem and it does a great job.

If he’s going small, then GPGPU (CUDA) programming may be enough. OpenCL gives more GPGPU portability. Would you agree that if Sagar is to be using large distributed machines, then MPI surely ought to be at least a “gluing” component?

Sagar, what size machines do you work on?
Will the code be home-grown, open source, or proprietary? A combination?
What is the scale of the computations?

Neither “MPI” nor “GPGPU (CUDA)” is a programming language.

MPI is a programming library whose interface and standards are defined by the MPI Forum and therefore has multiple implementations (MPICH, Open MPI, etc.). MPI supports C, C++, and Fortran, and I believe most implementations provide bindings for other languages (Perl, Python, etc.) too.

CUDA is a combination of libraries and extensions to the C language and a few other languages (C++, for example). Using CUDA (or OpenCL) to enable GPU processing only makes sense if you already have access to GPU hardware, or know you will in the future. There is no Fortran support for CUDA right now.

MPI and CUDA do not compete with each other. They are separate, complementary technologies.

I would recommend avoiding C++ for MPI programming. It’s hard to transmit C++ objects over MPI messages. There is the Boost library that makes it easier, and you can create your own MPI types, but I think it’s much easier to stick with plain C.

Since CUDA doesn’t support Fortran, and MPI programming is difficult with C++, I would recommend C if you plan on using CUDA. I would go even further and recommend strict ANSI C for maximum portability.

If you really want to shoot for the moon and write code that will be usable and runnable on clusters *and* able to take advantage of GPUs, it is possible to write code in C that uses MPI and uses CUDA to perform vector operations on each node. This is essentially how LANL’s IBM Roadrunner is programmed (but using Cell processors instead of NVIDIA GPUs and CUDA), as well as Tianhe-1 in China.

For max portability, at the start of your code you can check for CUDA hardware, and then write your functions so that they use the CUDA versions if CUDA was detected, and regular implementations otherwise. This is more work, but allows max performance and flexibility.

I am going to work on a teraFLOPS machine and will try to develop code that scales up to thousands of processors.
The code will be either open source or proprietary.

Currently we are working with MPI. But there are some limitations of MPI with C++, as Prentice says, and dynamic load balancing is also difficult. Still, it is well known, and most MD codes are written with MPI. MPI is also good for parallel programming in MD simulation, and it is really good for large-scale computation.
I don’t have that much experience with GPGPU programming.

I also read about one more parallel programming language, CHARM++. It is an object-oriented, message-driven language. NAMD is written in CHARM++. CHARM++ is built on C++, and programming with C++ objects is easy. It provides features like dynamic load balancing, object migration, virtual processors, etc.

If anyone knows anything about it, please share.

MPI is a set of libraries that encapsulate the communication between machines. If you use a programming language like C, you can have multithreading and parallelism inside a node plus all the advantages of a distributed-memory mechanism using MPI. I don’t have experience with other programming languages, but MPI is very easy to use with both C and Fortran.

I said that they can all use an MPI library for parallel programming, but I recommend C since MPI programming with C++ can be difficult, and CUDA does not support Fortran.

CHARM++ is also not a language. It is a parallel programming library for C++.

What’s wrong with Fortran? CHARMm is written in and runs under Fortran; many use it in single- or parallel-CPU mode.

Unified Parallel C (http://upc.lbl.gov/) is perhaps another distributed shared-memory parallel programming extension to C, which attempts to abstract away the MPI programming complexity by making the compiler do the hard work. But as Richard says, Fortran may be the best option depending on what you are attempting to implement.

I never said there was anything wrong with Fortran, other than that it’s not supported by CUDA right now, so if you plan on programming for GPUs, it’s not an option.

I guess I misunderstood. It seems that he wants to write his own MD code that can be parallelized rather than use something that already exists? Let’s reinvent the wheel?

Don’t be so quick to judge. It could be that he’s working on a new algorithm for academic research (MS or PhD thesis). Or maybe he’s working on a new problem in MD that no one else has addressed, yet, and therefore existing tools won’t work.

If you want to develop new algorithms or work on unusual problems, you may need to write a code from scratch, but you should think carefully about whether you can accomplish your goals by modifying an existing package. Especially in parallel computing, you have to write a lot of code that isn’t about the solver algorithm to make the program work (and it can be very difficult to get it right) so if you can reuse somebody else’s code for this part, your life will be easier and you’ll be able to spend more time working on innovative code and problems.

Working on something that no one has done is really practical, now isn’t it?

I guess someone here hasn’t stuck a toe in the employment waters recently (if ever).

That was mostly meant to be tongue in cheek.

From a LinkedIn group discussion
