Hi all, I'm an experienced FPGA engineer and I want to build an open-source trade execution system to showcase my abilities.
I'm purely in the systems-planning stage right now, and I have a few questions, if some of you can please help.
1.) Do you use Verilog or VHDL? I'm comfortable with both and want to choose the HDL that the majority use.
2.) I plan on building the system to interface with the NYSE real-time equities feed; do some of you suggest something else?
3.) I don't plan to write any platform-specific (FPGA-specific) code, but I'd still like to know which FPGA vendor most of you work with. Can you tell me your preference?
My personal preference would be:
2) write against the NYSE Equities feed and use the 'OpenMAMA' specification (the open market data middleware API originally from NYSE Technologies).
I feel it would be useful to measure how long this took, then compare that time against taking C++ code, converting it into Verilog/VHDL via high-level synthesis tools, and optimizing.
By optimizing I mean seeing how much adjustment is needed to get the converted
code to run in roughly the same number of cycles / amount of time.
I hope this is helpful.
Writing against OpenMAMA (or any other normalised market data API) means that you’re losing the latency advantage of using an FPGA. Use the raw feeds.
I guess my question now is a bit newbie-like, but I work on radar systems. Our latency is the time from when a radar sees a target until we see it on a radar display. Can you clearly define trade latency for me? (I could probably Google it, but here are my guesses:)
1) the time it takes simply to execute a trade, not counting the time to identify a good trade, i.e. to give a buy/sell order
2) the time it takes to identify and execute a trade
-- here 1 is very easy to do; 2 in fact gets trickier because it depends on the user's algorithm plus whatever I come up with to execute the trade.
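To make the two definitions concrete, here's a toy sketch in Python with made-up nanosecond timestamps captured at three hypothetical points (packet in, strategy decision, order out). All names and numbers are illustrative, not real measurements; in practice these timestamps are captured in hardware.

```python
# Toy illustration of the two latency definitions above, using
# hypothetical nanosecond timestamps at three capture points.
# (All numbers are made up.)

t_tick_in = 1_000_000    # ns: market data packet arrives at our NIC/FPGA
t_decision = 1_000_850   # ns: strategy decides to trade
t_order_out = 1_001_050  # ns: order leaves on the wire

# Definition 2: identify AND execute (often called "tick-to-trade")
tick_to_trade_ns = t_order_out - t_tick_in

# Definition 1: execution only -- from decision to order on the wire
execution_only_ns = t_order_out - t_decision

print(f"tick-to-trade:  {tick_to_trade_ns} ns")
print(f"execution only: {execution_only_ns} ns")
```

The split matters for your project: definition 1 depends only on your execution path, while definition 2 also folds in the user's strategy logic.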
BATS makes available historical market data in the native PITCH protocol (raw market data). The protocol spec is available from their website. Contact BATS’ TradeDesk at email@example.com, 913-815-7001 for info on obtaining the historical data. That should help you build your FPGA to decode the incoming market data.
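To give a feel for what the decode step involves, here's a minimal Python sketch of parsing one message type from the original ASCII flavour of PITCH. The field offsets follow the "Add Order" example in the publicly available PITCH spec; verify them against the exact spec version you obtain from BATS before relying on this (your FPGA version would do the same slicing in hardware).

```python
# Minimal sketch: decode a BATS PITCH "Add Order" message (ASCII
# flavour). Field offsets follow the public PITCH spec's example;
# double-check against the spec version BATS supplies.

def parse_add_order(msg: str) -> dict:
    assert msg[8] == "A", "not an Add Order message"
    return {
        "timestamp_ms": int(msg[0:8]),    # ms since midnight (Eastern)
        "order_id":     msg[9:21],        # base-36 order identifier
        "side":         msg[21],          # 'B' = buy, 'S' = sell
        "shares":       int(msg[22:28]),
        "symbol":       msg[28:34].strip(),
        "price_e4":     int(msg[34:44]),  # price in 1/10000 dollar units
        "display":      msg[44],
    }

# The spec's own example message: sell 100 SPY at 61.92
example = "28800168A1K27GA00000YS000100SPY   0000619200Y"
order = parse_add_order(example)
print(order["symbol"], order["shares"], order["price_e4"] / 10_000)
```

Note the fixed-width fields: that regularity is exactly what makes PITCH friendly to an FPGA decoder, since every field sits at a constant byte offset.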
Fully agree you give up latency, but it would be fun to see just how much.
For sheer raw speed, the exchange's native spec is best; for specific strategies, also limit the data you present / work with to reduce the processing footprint.
Yet I still see merit in building against such an interface: the open, normalised format and data model make the work very portable across a larger set of exchanges.
So any latency traded away can, I feel, be amortised by the hardware itself, and the reduction in time needed to build against new exchanges makes it worth it for me.
I'm not interested in just one exchange; I want the same methodology to handle, code for, and manage tens of them, not just a one-off coding exercise.
I hope this better explains my reasoning.
You've probably heard of Corvil already, but if you haven't, go to www.corvil.com and request the low-latency whitepapers to get a better understanding of latency in the context you're interested in.
Not only Corvil, but also TS-Associates. Both claim they can measure with sub-nanosecond precision. Trading systems have yet to make the shift to picoseconds, and are firmly in the nanosecond range.