Thread Partitioning to Handle Actor-Based Concurrency in Java

Imagine an auction site with millions of items, on which millions of customers might want to place bids. Each bid on a specific item needs to be processed serially and atomically before the next bid can be processed, to ensure that each bid leaves the item in a consistent state and to ensure fairness, so that later bids cannot overtake earlier ones.

This type of scenario is a prime candidate for actor-based concurrency. Each actor (the auction item in this example) processes the messages it receives (the bids) one at a time, and the actors run in parallel to provide concurrency.

Erlang's concurrency model is actor based, so it might be ideal for implementing such a system. Java's is not; it is task based. Since this post is about Java, let's look at the issues and at how one could implement an actor-based concurrency model on top of it.

In Java one typically uses the Executor framework to create a pool of threads that execute tasks taken from a shared task queue; when a thread becomes available it takes the next task from the queue, so no thread sits idle while there are tasks waiting. This provides very good throughput as long as the threads do not have to contend with each other (though there is still contention on the queue itself).
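
As a rough sketch of that baseline, bids might be submitted to a shared pool like this (the `Bid` type and the `BidProcessor` class are hypothetical names used for illustration, not part of any particular library):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical domain type for illustration only.
record Bid(long itemId, long amount) {}

class BidProcessor {
    // A fixed pool: every worker pulls the next task from the shared queue.
    private final ExecutorService pool =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    public void submit(Bid bid) {
        pool.execute(() -> process(bid));
    }

    private void process(Bid bid) {
        // Apply the bid to its item. Two threads handling bids for the same
        // item must synchronize here, which is where contention arises.
    }
}
```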

In this scenario, though, a particularly popular item can have many bids in the queue at the same time, and two or more threads may then contend on that item in order to process them. This can leave threads blocked waiting for locks even though there are other bids in the queue that could be processed, damaging both throughput and latency. Additionally, there is no guarantee of the order in which bids are processed: if bid A precedes bid B in the queue, they may be picked up by two different threads, and the thread handling bid B may be scheduled before the thread handling bid A.

In an actor-based system each actor is processed on a single "thread", avoiding contention on its data, and has its own queue for incoming messages. A simplistic approach in Java would be to allocate each auction item its own thread and queue; however, no Java system can handle millions of threads, so this is impractical in any system with more than a small number of actors.
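
For illustration, a minimal sketch of that naive per-item approach might look like the following, reusing the hypothetical `Bid` type from the sketch above:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Naive sketch: one dedicated thread and queue per auction item.
// Impractical beyond a small number of items, but it shows the shape
// that the partitioning scheme below approximates.
class PerItemWorker {
    private final BlockingQueue<Bid> inbox = new LinkedBlockingQueue<>();

    PerItemWorker() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    process(inbox.take()); // bids handled strictly in arrival order
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    void post(Bid bid) { inbox.add(bid); }

    private void process(Bid bid) { /* apply the bid; no locking needed */ }
}
```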

Instead one can use a partitioning model.

The set of all actors is partitioned into as many partitions as there are threads available. Each thread has its own message queue and its own partition of the actors, and when a message arrives for an actor it is placed on the appropriate queue. Typically a hashing function is used to assign an actor to a partition.

This means that all messages for an actor are handled on a single thread and thus there is no contention for that actor's data.
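
A minimal sketch of such a partitioned dispatcher, again using the hypothetical `Bid` type from above, could look like this; each partition is a single-threaded executor, so its internal queue plays the role of the per-partition message queue:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Partitioned dispatcher: each partition is a single-threaded executor with
// its own queue, so every bid for a given item runs on the same thread.
class PartitionedDispatcher {
    private final ExecutorService[] partitions;

    PartitionedDispatcher(int partitionCount) {
        partitions = new ExecutorService[partitionCount];
        for (int i = 0; i < partitionCount; i++) {
            partitions[i] = Executors.newSingleThreadExecutor();
        }
    }

    void submit(Bid bid) {
        // Hash the actor's identity to pick a partition; floorMod keeps the
        // index non-negative even for negative hash codes.
        int index = Math.floorMod(Long.hashCode(bid.itemId()), partitions.length);
        partitions[index].execute(() -> process(bid));
    }

    private void process(Bid bid) { /* apply the bid to its item */ }
}
```

Because a given item id always hashes to the same partition, its bids are processed one at a time and in the order they were submitted, with no locking on the item itself.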

One remaining issue is load balancing. If the partitioning algorithm happens to place all the popular items in the same partition, a single thread ends up processing many more messages than the others, so the hashing function has to do a reasonable job of spreading the load. In many scenarios a simple hash function will be sufficient.
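
If in doubt, the spread can be checked cheaply. The following sketch (with an assumed partition count and synthetic item ids) counts how many items would land in each partition:

```java
import java.util.Arrays;

// Quick check of how evenly a hash spreads item ids across partitions.
class HashSpreadCheck {
    public static void main(String[] args) {
        int partitionCount = 16;                // assumed number of partitions
        long[] counts = new long[partitionCount];
        for (long itemId = 0; itemId < 1_000_000; itemId++) {
            counts[Math.floorMod(Long.hashCode(itemId), partitionCount)]++;
        }
        // Roughly equal counts suggest an acceptable spread; run this against
        // a representative sample of real ids, since synthetic ones may hide skew.
        System.out.println(Arrays.toString(counts));
    }
}
```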

If it is not, one may need to implement a dynamic load balancing process - that will be the topic of a later post.
