Java Concurrency Question for Quant Development

(Last Updated On: May 8, 2011)


In November 2009 I went to No Fluff Just Stuff. One of the presentations was by Brian Goetz, which was about Java concurrency. For some reason there were items on his agenda slide that were not covered in his presentation.

He went over some strategies, and at the end he pointed out a common tactic that he also said is a good rule of thumb for concurrency in Java: Make your variables private, and make any methods that access them synchronized.

That sounds pretty simple. And perhaps too good to be true. Are there situations/applications where this concurrency technique would not be sufficient? Would relying primarily on this technique work well in systems with lots of transactions, or large data sets? What are the potential drawbacks of doing this?
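To make the rule of thumb concrete, here is a minimal sketch (class and method names are my own): all state is private, and every method that touches it is synchronized on the same monitor, so reads and writes can never interleave.

```java
// Goetz's rule of thumb in its simplest form: private state,
// synchronized accessors. Illustrative class, not from the talk.
public class SyncCounter {
    private long count = 0; // private: no outside access to state

    // every method touching `count` locks on `this`
    public synchronized void increment() {
        count++; // read-modify-write is now atomic
    }

    public synchronized long get() {
        return count; // also synchronized, so reads see the latest write
    }
}
```

Note that the getter must be synchronized too; without it, a reading thread has no visibility guarantee even though the writes are locked.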


The key design philosophy is to minimise mutability and to document the locking strategy for what’s left.

• This strategy is thread safe, but in systems with lots of transactions it will not scale: synchronized methods carry overhead and slow the application down.

In a nutshell, keep your critical sections small and have a clear idea of the dependencies between your threads to avoid deadlock.

Now that multicore processors are pretty standard, when lock contention is expected to be low it is often better to use a lighter-weight primitive than a conventional synchronised critical section. The atomic classes in java.util.concurrent.atomic (which spin on compare-and-swap rather than block) or the higher-level ReentrantReadWriteLock will minimise serialisation when threads contend for access to the critical section.
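A read-write lock is the easy win when reads vastly outnumber writes. A hypothetical sketch of a single guarded value (names are mine): many readers can hold the read lock in parallel, while a writer gets exclusive access.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative holder of one price guarded by a read-write lock.
public class PriceHolder {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private double price = 0.0;

    public double read() {
        lock.readLock().lock();   // shared: does not block other readers
        try {
            return price;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void write(double p) {
        lock.writeLock().lock();  // exclusive: blocks readers and writers
        try {
            price = p;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

The unlock-in-finally pattern matters: an exception thrown while holding the lock would otherwise leave every other thread blocked forever.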

The best strategy though is keep objects immutable and version when objects do need to mutate.
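One way to read "immutable, and version when objects do need to mutate" is the copy-on-write style below (an illustrative sketch, names my own): a "mutation" returns a fresh immutable instance with a bumped version number, so no reader ever sees a half-updated object.

```java
// Sketch of immutable-plus-version: state never changes in place;
// an update produces a new object carrying the next version.
public final class Quote {
    private final double bid;
    private final long version;

    public Quote(double bid, long version) {
        this.bid = bid;
        this.version = version;
    }

    // "mutation" returns a fresh immutable instance
    public Quote withBid(double newBid) {
        return new Quote(newBid, version + 1);
    }

    public double bid()     { return bid; }
    public long   version() { return version; }
}
```

Because every field is final and the class cannot be subclassed, instances can be shared freely between threads with no locking at all.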

What about using locks? Or synchronizing on objects within methods? Is there less overhead in that? I know that limiting mutability is good, but I am just trying to explore the topic further.



Re exploring the topic: start with “volatile” (un-optimized variables), move to “atomic” (relying on the relevant CPU instructions), next “ReadWriteLock” (exclusive write ops, un-blocked read ops when there are no writes), and finish with “synchronized” (everything is exclusive). On top of that, have a look at the Actor concept: http://drdobbs.com/high-performance-computing/229402193
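The first rung of that ladder is easy to get wrong, so a small sketch (my own illustrative class): volatile gives visibility, meaning a write by one thread is seen by others, but it does not make compound check-then-act operations atomic.

```java
// A volatile flag: the classic safe use of volatile, since each
// access is a single read or single write, never a read-modify-write.
public class Worker implements Runnable {
    private volatile boolean running = true;

    public void stop() {
        running = false; // immediately visible to the run() thread
    }

    public boolean isRunning() {
        return running;
    }

    @Override
    public void run() {
        while (running) {
            // do work; the loop observes stop() without any locking
        }
    }
}
```

If the flag were a plain boolean, the JIT would be entitled to hoist the read out of the loop and spin forever; volatile forbids that optimisation.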

Threading models are domain specific; in what I do, a small number of threads with minimal contention is most desirable.

I would be very careful of things like volatile and atomic. Go for correctness first and optimise later. I would also avoid explicit use of spin locks: any OS worth its salt will use adaptive mutexes anyhow (which are spin locks unless it makes sense for them not to be).

If you can bear it, read through the Java Memory Model, which is available on the web. The guarantees are actually quite loose: you need to consider not just atomicity but also visibility and ordering.

If you’re seeing performance problems with a multi-threaded program, I reckon a look at the broader design will yield more benefits than swapping synchronized out for volatile, atomics, etc.

Over time, I have had similar architectural problems with C, C++, Java and C#, and, beyond the differences in what each language allows and does not allow you to do, in all cases the conceptual solution has turned out to be the same:

You have to find ways to enqueue the concurrent changes to the state of an object; no matter the language or platform, there is no way around it.

You have to make sure that only your object has access and control of its own state, but that is not enough in a multi-threading environment.

You could use any sort of synchronization strategy for your concurrent (multi-threaded) changes, as far as your platform and tools allow.

You could have a Singleton in charge of invoking the actual changes to the state of objects, while the threads call some method or event handler of the Singleton.

Adding synchronized to methods is an acceptable general solution, as it ensures sequential consistency (what a thread sees is the latest state rather than a stale version, because writes are flushed to main memory rather than sitting in local caches). However, prefer lock striping and lock-free (compare-and-swap) algorithms over synchronized methods. It also depends on how synchronized is used: how many invariants need to be guarded, how many threads read and write the data structure, and what level of sequential consistency is needed. Use synchronization with judicious caution if you care about performance where milliseconds count, especially if you are guarding multiple independent invariants with synchronized methods, as performance will degrade quickly under significant throughput.
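A compare-and-swap loop, the building block of the lock-free approach mentioned above, looks like this (illustrative class of my own; AtomicInteger actually provides incrementAndGet directly, the hand-written loop is just to show the mechanism):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Lock-free increment via compare-and-swap: retry until our update
// wins, instead of blocking behind a synchronized method.
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int incrementAndGet() {
        for (;;) {
            int current = value.get();
            int next = current + 1;
            // succeeds only if no other thread changed `value` meanwhile
            if (value.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    public int get() {
        return value.get();
    }
}
```

Under contention a losing thread simply retries rather than parking, which is why this tends to beat a synchronized method when critical sections are tiny.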

Let me give you a concrete example by comparing Hashtable and ConcurrentHashMap. The former synchronizes every method invocation, while the latter uses lock striping (each bucket has its own lock). Which one performs better? ConcurrentHashMap – by far!

Why? ConcurrentHashMap uses a mixture of volatile variables (ensuring reads and writes go to memory rather than staying in CPU caches), lock striping (each hash bucket has its own lock) and lock-free reads. It lets large numbers of threads read and write the data structure in parallel (assuming, of course, that the hashing distributes keys uniformly across the buckets!). Hashtable only lets one thread access the data structure at a time, and there is overhead in acquiring and releasing the lock; if many threads want to read and write lots of entries, each one has to wait for access. Just imagine being at an airport with hundreds of passengers and only one check-in desk! There is a downside to ConcurrentHashMap, though: because reads are lock-free, it does not provide sequential consistency (a read operation is not guaranteed to retrieve the latest written value); if a read and a write happen at the same time, the write may not be visible to the read. In performance-critical systems, most developers will prefer performance over sequential consistency.
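In code, the difference is mostly invisible at the call site; what changes is what happens under the hood. A small illustrative wrapper (names my own) showing the ConcurrentHashMap operations discussed above:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative symbol-to-price cache. put() locks only one bin,
// getOrDefault() takes no lock at all, and putIfAbsent() gives an
// atomic check-then-act that a plain Hashtable would need external
// synchronization to get right.
public class SymbolCache {
    private final ConcurrentMap<String, Double> prices =
            new ConcurrentHashMap<>();

    public void update(String symbol, double price) {
        prices.put(symbol, price);                       // per-bin locking
    }

    public double priceOrDefault(String symbol) {
        return prices.getOrDefault(symbol, Double.NaN);  // lock-free read
    }

    public boolean addIfMissing(String symbol, double price) {
        return prices.putIfAbsent(symbol, price) == null; // atomic
    }
}
```

The equivalent check-then-put on a Hashtable would need a synchronized block around both calls, serialising every caller through one lock.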

True. It is also worth mentioning “optimistic locking” (at which point “atomic” operations are very useful). And I agree with James Taylor that one thread per CPU/core is the best configuration (to reduce context-switch overhead).

A few more terms to explore: “executors” (from the concurrency package, e.g. the Executors.newFixedThreadPool(…) static method) and “Selector” (from NIO, which allows one thread to control multiple I/O operations).
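A quick sketch of the executor idea (method and class names are mine): a fixed pool sized to the core count, as suggested above, with tasks submitted as Callables and joined via Futures.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative fan-out/fan-in: square 1..n in a fixed thread pool
// (roughly one thread per core), then sum the results.
public class PoolDemo {
    public static int sumSquares(int n) {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        try {
            List<Future<Integer>> results = new ArrayList<>();
            for (int i = 1; i <= n; i++) {
                final int k = i;                       // effectively final for the lambda
                results.add(pool.submit(() -> k * k)); // Callable<Integer>
            }
            int sum = 0;
            for (Future<Integer> f : results) {
                sum += f.get();                        // blocks until that task is done
            }
            return sum;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();                           // always release the threads
        }
    }
}
```

The executor decouples "what to run" from "how many threads run it", which is exactly the knob you tune when sizing for a core count.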

Java concurrency is a pretty big and complicated theme. Check the following link.

