
Quant development: What languages, CUDA, and C++ libraries do scientists use for parallel computing?

(Last Updated On: April 8, 2011)


Which parallel programming language should be preferred for parallel computing in Molecular Dynamics?

I am working on a supercomputer for Molecular Dynamics tools. I want to choose the best parallel programming language for Molecular Dynamics and molecule simulation. It should help improve scalability and performance, and it should minimize communication latency.

MPI is now and ever shall be.
Seriously: it is the most used, most tested, and most widely understood.
OpenMP is workable IF you plan to stay really small.
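For a sense of what OpenMP looks like in practice: parallelism lives inside one shared-memory node and is expressed with compiler pragmas. A minimal sketch in C, assuming a compiler with OpenMP support (e.g. gcc -fopenmp); the loop itself is just an illustration:

#include <stdio.h>
#include <omp.h>

int main(void)
{
    double sum = 0.0;
    int i;

    /* Fork a team of threads; each thread takes a slice of the loop,
       and the reduction combines the per-thread partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < 1000000; i++)
        sum += 1.0 / (double)(i + 1);

    printf("harmonic sum = %f\n", sum);
    return 0;
}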

How about using GPGPU (CUDA) programming on NVIDIA Fermi cards? I am currently using it for a CFD (Computational Fluid Dynamics) problem and it does a great job.

If he’s going small, then GPGPU (CUDA) programming may be enough.
OpenCL gives more GPGPU portability. Would you agree that if Sagar is going to use
large distributed machines, then MPI surely ought to be at least a “gluing” component?

Sagar, what size machines do you work on?
Will the code be home-grown, open source, or proprietary? A combination?
What is the scale of the computations?

Neither “MPI” nor “GPGPU (CUDA)” is a programming language.

MPI is a programming library whose interface and standards are defined by the MPI Forum, and it therefore has multiple implementations (MPICH, Open MPI, etc.). MPI supports C, C++, and Fortran, and I believe most implementations provide bindings for other languages (Perl, Python, etc.) too.
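To make the C binding concrete, here is a minimal MPI sketch (assuming an installed implementation such as MPICH or Open MPI; compile with mpicc and launch with mpirun):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut the runtime down cleanly */
    return 0;
}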

CUDA is a combination of libraries and extensions to the C language and a few other languages (C++, for example). Using CUDA (or OpenCL) to enable GPU processing only makes sense if you already have access to GPU hardware, or know you will in the future. There is no Fortran support for CUDA right now.
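To show what those C extensions look like, here is a minimal CUDA sketch (compiled with nvcc; the kernel, array size, and block size are illustrative assumptions, not anything from the discussion):

#include <cuda_runtime.h>

/* __global__ is a CUDA extension marking a function that runs on the GPU;
   each GPU thread scales one array element. */
__global__ void scale(float *x, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= a;
}

void scale_on_gpu(float *host_x, float a, int n)
{
    float *dev_x;

    cudaMalloc((void **)&dev_x, n * sizeof(float));
    cudaMemcpy(dev_x, host_x, n * sizeof(float), cudaMemcpyHostToDevice);

    /* <<<blocks, threads-per-block>>> is the CUDA kernel-launch extension. */
    scale<<<(n + 255) / 256, 256>>>(dev_x, a, n);

    cudaMemcpy(host_x, dev_x, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev_x);
}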

MPI and CUDA do not compete with each other. They are separate, complementary technologies.

I would recommend avoiding C++ for MPI programming. It’s hard to transmit C++ objects in MPI messages. The Boost.MPI library makes it easier, and you can create your own MPI types, but I think it’s much easier to stick with plain C.
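For what it’s worth, “creating your own MPI types” in plain C looks roughly like this sketch (the particle struct is a hypothetical example, not from the discussion):

#include <stddef.h>
#include <mpi.h>

/* A hypothetical record we want to send in a single message. */
typedef struct {
    double pos[3];
    double vel[3];
    int    id;
} particle_t;

/* Build an MPI datatype that mirrors particle_t's memory layout. */
MPI_Datatype make_particle_type(void)
{
    MPI_Datatype ptype;
    int          blocklens[3] = { 3, 3, 1 };
    MPI_Aint     offsets[3]   = { offsetof(particle_t, pos),
                                  offsetof(particle_t, vel),
                                  offsetof(particle_t, id) };
    MPI_Datatype types[3]     = { MPI_DOUBLE, MPI_DOUBLE, MPI_INT };

    MPI_Type_create_struct(3, blocklens, offsets, types, &ptype);
    MPI_Type_commit(&ptype);
    return ptype;  /* usable in MPI_Send/MPI_Recv; release with MPI_Type_free */
}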

Since CUDA doesn’t support Fortran, and MPI programming is difficult with C++, I would recommend C if you plan on using CUDA. I would go even further and recommend strict ANSI C for maximum portability.

If you really want to shoot for the moon and write code that will be usable and runnable on clusters *and* able to take advantage of GPUs, it is possible to write code in C that uses MPI and uses CUDA to perform vector operations on each node. This is essentially how LANL’s IBM Roadrunner is programmed (but using Cell processors instead of NVIDIA GPUs and CUDA), as well as Tianhe-1 in China.

For maximum portability, at the start of your code you can check for CUDA hardware, and then write your functions so that they use the CUDA implementations if CUDA was detected, and regular implementations otherwise. This is more work, but it allows maximum performance and flexibility.
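A sketch of that detect-and-dispatch pattern in C (cudaGetDeviceCount is the real CUDA runtime call; the vector_add functions are hypothetical stand-ins for your CUDA and plain-C implementations):

#include <cuda_runtime.h>

/* Hypothetical implementations, provided elsewhere in your code. */
void vector_add_cuda(const double *a, const double *b, double *c, int n);
void vector_add_cpu (const double *a, const double *b, double *c, int n);

static int have_cuda = -1;  /* -1 = not probed yet */

static int cuda_available(void)
{
    if (have_cuda < 0) {
        int ndev = 0;
        /* Probe once; any error (no driver, no device) means fall back. */
        have_cuda = (cudaGetDeviceCount(&ndev) == cudaSuccess && ndev > 0);
    }
    return have_cuda;
}

void vector_add(const double *a, const double *b, double *c, int n)
{
    if (cuda_available())
        vector_add_cuda(a, b, c, n);
    else
        vector_add_cpu(a, b, c, n);
}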

I am going to work on a teraFLOPS machine and will try to develop code that scales up to thousands of processors.
The code will be either open source or proprietary.

Currently we are working with MPI. But there are some limitations of MPI with C++, as Prentice says, and dynamic load balancing is also difficult. Still, it is well known, and most existing code is written with MPI. MPI is also good for parallel programming in MD simulation,
and it is really good for large-scale computation.
I don’t have that much experience with GPGPU programming.

I also read about one more parallel programming language, CHARM++. It is an object-oriented, message-driven language. NAMD is written in CHARM++. CHARM++ is built on C++, and programming with C++ objects is easy. It provides features like dynamic load balancing, object migration, virtual processors, etc.

If anyone has some information regarding that, please share.

MPI is a set of libraries that encapsulate the communication between machines. If you use a programming language like C, you can have multithreading and parallelism inside a node, plus all the advantages of a distributed-memory mechanism using MPI. I don’t have experience with other programming languages, but MPI is very easy to use with both C and Fortran.
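A minimal sketch of that layout in C, combining threading inside a node with MPI between nodes (MPI_Init_thread is the standard call for requesting thread support; the OpenMP region is illustrative):

#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char *argv[])
{
    int rank, provided;

    /* MPI_THREAD_FUNNELED: only the main thread makes MPI calls,
       while OpenMP threads parallelize the work inside each node. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    printf("rank %d, thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}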

I said that they can all use an MPI library for parallel programming, but I recommend C since MPI programming with C++ can be difficult, and CUDA does not support Fortran.

CHARM++ is also not a language. It is a parallel programming library for C++.

What’s wrong with Fortran? CHARMm is written in and runs under Fortran; many use it in single-CPU or parallel mode.

Unified Parallel C (http://upc.lbl.gov/) is perhaps another distributed shared-memory parallel programming extension to C, which attempts to abstract away the MPI programming complexity by making the compiler do the hard work. But as Richard says, Fortran may be the best option depending on what you are attempting to implement.
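For a taste of what UPC abstracts away, a small sketch (assuming a UPC compiler such as Berkeley UPC; the array and its initialization are illustrative):

#include <upc.h>

#define N 1024

/* 'shared' distributes the array cyclically across all UPC threads. */
shared double a[N];

int main(void)
{
    int i;

    /* Each iteration runs on the thread that owns a[i]; there is
       no explicit message passing, unlike MPI. */
    upc_forall (i = 0; i < N; i++; &a[i])
        a[i] = 0.5 * (double)i;

    upc_barrier;
    return 0;
}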

I never said there was anything wrong with Fortran, other than that it’s not supported by CUDA right now, so if you plan on programming for GPUs, it’s not an option.

I guess I misunderstood. It seems that he wants to write his own MD code that can be parallelized rather than use something that already exists? Let’s reinvent the wheel?

Don’t be so quick to judge. It could be that he’s working on a new algorithm for academic research (MS or PhD thesis). Or maybe he’s working on a new problem in MD that no one else has addressed, yet, and therefore existing tools won’t work.

If you want to develop new algorithms or work on unusual problems, you may need to write code from scratch, but you should think carefully about whether you can accomplish your goals by modifying an existing package. Especially in parallel computing, you have to write a lot of code that isn’t about the solver algorithm just to make the program work (and it can be very difficult to get right), so if you can reuse somebody else’s code for that part, your life will be easier and you’ll have more time to spend on innovative code and problems.

Working on something that no one has done is real practical, now isn’t it?

I guess someone here hasn’t stuck a toe in the employment waters recently (if ever).

That was mostly meant to be tongue in cheek.

From a LinkedIn group discussion
