For true ultra-low-latency HFT, the language war is case closed: CUDA with C or C++. Sorry, Java and Python lovers.
UPDATE: This post is now irrelevant!! CUDA does support Java, Python, and even Matlab. Can you say DOH? Don’t bother reading this…
If you are planning to do any kind of GPU work, the only platform that really matters is CUDA. Don’t bother with OpenCL, as the control seems to be better under CUDA. The pain of OpenCL makes me cringe. Where’s the Tylenol?
Follow what I do with my free newsletter
As for the programming language wars, there is no further debate: the only languages natively supported by CUDA are C/C++ and Fortran. End of story. This could change, but…
So what does that mean for all the Java, Python, etc. lovers? Are you seriously going to add a potentially bug-ridden API wrapper, an extra layer of potential latency, to your HFT system? That would be dumb. Plain and simple, you should be out of a job if you go that route. If you worked in my operation, I would fire you.
If you care about an affordable and easy way to develop a true HFT platform, go CUDA with C++. Again, there is no further debate on it, so case closed.
Also, here are some more references:
For Java, use http://www.jcuda.org/
For Python and its ilk:
For true reactions, including my own:
Quote from amazingIndustry:
please do not tell anyone you are running live trading strategies in R, Matlab or Python, people who know this specific business will laugh at you
What exactly is “this” specific business? If you are looking at stuff tick-level, sure, go with C++. If you care about dynamics of implied volatility or simple technical indicators in a non-latency dependent way, why would I want to be f*cking around with memory, pointers and virtual functions? As I said, there are multi-billion dollar funds that are running fully in interpreted languages.
Read this comment:
|I believe that, with PyCUDA, your computational kernels will always have to be written as “CUDA C Code”. PyCUDA takes charge of a lot of otherwise-tedious book-keeping, but does not build computational CUDA kernels from Python code.
||Indeed, though there are interesting projects that do; see my answer. 😉 – dwf Jun 20 ’10 at 0:54
|We use PyCUDA at work and get good results with it. The advantage of writing plain CUDA C is that you can get help and snippets from other people without translating from your API specific python code.
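The comment above is the key point: even with PyCUDA, the computational kernel itself is written in CUDA C and handed to PyCUDA as a string; Python only does the host-side bookkeeping. Here is a minimal sketch of what that looks like. The kernel and names are illustrative; the actual PyCUDA compile-and-launch calls are shown in comments because they require an NVIDIA GPU and the pycuda package, and a plain-Python line computes the same result on the CPU for reference.

```python
# The computational kernel is CUDA C, not Python — PyCUDA receives it as a string.
kernel_source = r"""
__global__ void scale(float *a, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        a[i] *= factor;
}
"""

# On a CUDA machine you would compile and launch it roughly like this
# (commented out here because it needs an NVIDIA GPU plus numpy and pycuda):
#   import numpy as np
#   import pycuda.autoinit, pycuda.driver as drv
#   from pycuda.compiler import SourceModule
#   scale = SourceModule(kernel_source).get_function("scale")
#   a = np.arange(8, dtype=np.float32)
#   scale(drv.InOut(a), np.float32(2.0), np.int32(a.size),
#         block=(8, 1, 1), grid=(1, 1))

# CPU reference of what the kernel computes (each element times the factor):
a = [float(i) for i in range(8)]
result = [x * 2.0 for x in a]
print(result)  # [0.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0]
```

This is exactly why the commenter says plain CUDA C snippets transfer directly: the string above could be dropped into a `.cu` file unchanged.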
I now post my TRADING ALERTS into my personal FACEBOOK ACCOUNT. Don't worry, I don't post stupid cat videos or what I eat!