It’s been three years since we published ‘The Activiti Performance Showdown’, in which the performance of the Activiti engine (versions 5.9 and 5.10) was benchmarked.
After three years it is time (or perhaps long overdue) to update the benchmark results, using modern hardware and the latest Activiti engine. We’ve obviously been enhancing and improving the engine with every release, but in the latest release something called ‘bulk insert’ was added to the core engine.
We worked together with our friends at Google to implement this feature (they actually did the bulk of the work, no pun intended), so a big shout-out and thanks to Abhishek Mittal, Imran Naqvi, Logan Chadderdon, Bo-Kyung Choi and colleagues for our past discussions and code! As you can imagine, the scale at which Google uses Activiti makes for interesting topics and challenges :-). In fact, some major changes in the past (for example the new Async job executor) and very interesting items on the future roadmap were driven by discussions with them.
Edit (July 30th 2015): ran the benchmark on Amazon Aurora. Results here: http://www.jorambarrez.be/blog/2015/07/30/activiti-performance-showdown-aurora/
The code used to gather the numbers is the same as three years ago and is on Github: https://github.com/jbarrez/activiti-benchmark
To run the benchmark, simply follow the instructions on the Github page. Basically, what the test does:
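Stripped of the Activiti specifics, each measurement run boils down to something like this (a simplified, self-contained sketch, not the actual benchmark code; `runInstance` is a stand-in for starting and completing a real process instance through the engine):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThroughputBench {

    // Stand-in for starting and completing one process instance; the real
    // benchmark calls runtimeService.startProcessInstanceByKey(...) here.
    static void runInstance() {
        long x = 0;
        for (int i = 0; i < 1000; i++) x += i; // simulate a tiny bit of work
    }

    public static void main(String[] args) throws Exception {
        int threads = 4;
        int instances = 10_000;
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        long start = System.nanoTime();
        CountDownLatch done = new CountDownLatch(instances);
        for (int i = 0; i < instances; i++) {
            pool.submit(() -> { runInstance(); done.countDown(); });
        }
        done.await();
        long elapsedNanos = System.nanoTime() - start;
        pool.shutdown();

        double seconds = elapsedNanos / 1_000_000_000.0;
        System.out.printf("%d instances in %.2fs -> %.1f instances/sec%n",
                instances, seconds, instances / seconds);
    }
}
```

The throughput/second figures quoted throughout this post are computed exactly like that last line: completed instances divided by wall-clock time, for a given thread pool size.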
We let the benchmark project mentioned above loose on the following setups:
Setups 1 and 2 are interesting because they show the raw overhead of the engine, without any network delay (as everything runs on the same machine). Of course the numbers will vary with different databases, but the main point of this benchmark is to demonstrate how lightweight Activiti is.
Setup 3 is in my eyes probably even more interesting (if that’s even possible), as it runs on ‘real’ servers with real network delay. In our experience, AWS servers aren’t as performant as real physical servers, so that’s only good news when you look at the numbers and are already impressed ;-).
Furthermore, all tests were run in Spring mode (the engine used in a Spring setup) and with a BoneCP connection pool (I used BoneCP, although I know it has been superseded by HikariCP, because I wanted the same setup as three years ago).
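For reference, wiring the engine in Spring mode with BoneCP looks roughly like this (a minimal sketch with placeholder connection settings, not the exact benchmark configuration):

```java
import org.activiti.engine.ProcessEngine;
import org.activiti.spring.SpringProcessEngineConfiguration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import com.jolbox.bonecp.BoneCPDataSource;

public class EngineSetup {

    public static ProcessEngine buildEngine() {
        // BoneCP connection pool (placeholder URL and credentials)
        BoneCPDataSource dataSource = new BoneCPDataSource();
        dataSource.setDriverClass("com.mysql.jdbc.Driver");
        dataSource.setJdbcUrl("jdbc:mysql://localhost:3306/activiti");
        dataSource.setUsername("activiti");
        dataSource.setPassword("activiti");

        // Engine in 'Spring mode'
        SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
        config.setDataSource(dataSource);
        config.setTransactionManager(new DataSourceTransactionManager(dataSource));
        config.setDatabaseSchemaUpdate("true");
        config.setHistory("audit"); // or "none" for the history-disabled runs
        return config.buildProcessEngine();
    }
}
```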
Looking at the results, they are clearly faster than the 2012 numbers for the same processes. The engine has improved, and hardware vendors didn’t sit still either. Generally, I noticed a 2x – 4x improvement in the throughput results.
The tests were also run on both Activiti 5.17 and 5.18. So we learn two things:
I bundled all benchmark results in the following Google spreadsheet:
If you prefer Excel:
It has various sheets, which you can select below: a sheet for each of the test environments and then a sheet for each of the tested processes in detail (with pretty graphs!). Each sheet contains analysis text at the top. Anyway, let’s look at the different sections in a bit of detail.
Note that the numbers here only show the throughput/second (i.e. process instances started and completed per second). Many more numbers (including the certainly interesting average timings of process instance execution) can be found in the following links:
I’ve split the results into two parts: the first sections cover the difference between 5.17 and 5.18 and highlight the difference the bulk insert makes. In the later sections, we’ll look at the 5.18 numbers for the process definitions in detail.
This maps to the first sheet in the spreadsheet linked above.
When looking at the numbers, we learn that Activiti adds very little overhead when executing processes. There are crazy numbers in there, like 6000+ process instances / second (for a very simple process definition, of course). Keep in mind that this is with a local MySQL. But still, it proves that the overhead of the Activiti engine is very minimal.
The sheet does show a bit of red (meaning 5.18 has a lower throughput) here and there. This is logical: a minimum of data is being produced (no history) and the average timings are sometimes really low (a few milliseconds in some cases!). This means that doing a bit of extra housekeeping in a hashmap (which we do for the bulk insert) can already impact the numbers. The ‘red’ numbers are generally within a 10% range, which is acceptable.
In my view, the ‘random’ execution (2500 randomly chosen process definitions using an equal distribution, throughput/second) is the most interesting one, as in a typical system, multiple process instances of different process definitions will be executed at the same time. The graph below shows that 5.17 and 5.18 are quite close to each other, which was to be expected.
There are however two outliers in the data here: the servicetask-vars01 and 02 process definitions. These process definitions are pretty simple: they have seven steps (version 02 has a user task), one of which is a service task that generates 4 and 50 process variables respectively. For these two cases, with more data (each process variable is an insert), we see an improvement of 20% up to 50%! So it seems the bulk insert starts to pay off here. That makes us hopeful for the next tests with historic data!
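As an aside, the ‘equal distribution’ in the random run simply means that every deployed definition is equally likely to be picked on each of the 2500 executions, along these lines (a simplified sketch using the definition keys from this benchmark):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class RandomMix {
    public static void main(String[] args) {
        List<String> keys = Arrays.asList(
                "process01", "process02", "process03", "process04", "process05",
                "multi-instance", "usertask01", "usertask02", "usertask03",
                "servicetask-vars01", "servicetask-vars02");

        Random random = new Random(42); // fixed seed, for reproducibility
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < 2500; i++) {
            // uniform pick: every definition is equally likely
            String key = keys.get(random.nextInt(keys.size()));
            counts.merge(key, 1, Integer::sum);
        }
        System.out.println(counts.size() + " definitions, "
                + counts.values().stream().mapToInt(Integer::intValue).sum()
                + " executions");
    }
}
```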
This maps to the second sheet in the spreadsheet linked above.
In the benchmark three years ago, we already learned that history has a huge impact on performance. Of course, most people run with history enabled, as it is one of the main benefits of using a workflow engine like Activiti.
As we suspected from the previous setup, enabling history makes the bulk insert shine. In general we see nice improvements, in the 5%–40% range. Tests with less data (e.g. the user task tests, which do a lot of task service polling) show a bit of red there (meaning lower performance), but this is probably due to the same reasons mentioned above and well within the acceptable (<10%) range. So this proves that the bulk insert, when a ‘normal’ amount of data is being generated, has a very good impact on performance. Also, in general, we again see that the overhead of the Activiti engine is very minimal, with very nice throughput/second numbers (think about it: doing hundreds of process instance executions per second is a lot when you translate this to process instances / day!).
The random execution graph (same as above, 2500 random executions, throughput/second) now shows 5.18 pulling away from 5.17. For all threadpool settings, 5.18 completes more process instance executions in the same time period.
This maps to the third sheet in the spreadsheet linked above.
Now here it gets different from the tests we did three years ago. We have two servers, both in the cloud (Amazon), both with real network delays. One thing to notice here is that the throughput/second is lower than in the previous setup (about 1/4 of the throughput). Again, the real network delays of the Amazon infrastructure are in play here.
The reasoning behind the bulk insert idea was that, when switching to a data center with a higher network delay, it would be beneficial to squeeze more work into each roundtrip. And oh boy, that surely is the case. If you look at the numbers in the sheet, they are in general almost all green or within a 5% difference. But more importantly, we see improvements of up to 80% for some cases (not surprisingly, those cases with a lot of data being produced)!
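Conceptually, the bulk insert replaces one INSERT statement (and thus one roundtrip) per row with a single multi-row INSERT. The sketch below only illustrates the idea; the actual engine implementation goes through MyBatis and is more involved (table name and values here are purely illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class BulkInsertSketch {

    // One multi-row INSERT instead of one statement (and roundtrip) per row
    static String bulkInsertSql(String table, List<String> rows) {
        return "INSERT INTO " + table + " VALUES "
                + rows.stream()
                      .map(row -> "(" + row + ")")
                      .collect(Collectors.joining(", "));
    }

    public static void main(String[] args) {
        // e.g. four process variables created in one transaction
        List<String> variableRows = Arrays.asList(
                "'var1', 'a'", "'var2', 'b'", "'var3', 'c'", "'var4', 'd'");
        // 4 variable inserts -> 1 database roundtrip instead of 4
        System.out.println(bulkInsertSql("ACT_RU_VARIABLE", variableRows));
    }
}
```

On a local database the saved roundtrips are a few milliseconds at best; over a real network link between data centers, each saved roundtrip is worth a lot more, which is exactly what the Oracle RDS numbers show.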
The random execution graph (same setup as above, 2500 random executions, throughput/second) shows the following:
On average, there is a 15% gain for this particular test (which is awesome). Also interesting here is the shape of the graph: a linear line, diverging from the 5.17 line the more threads we add. But if you scroll up, you can see the shape being very different when running on my laptop. In fact, you could say we reached a tipping point around 8 threads in the previous two graphs, while here there is no sign of tipping yet. This means we could probably load the Oracle RDS database way more than we were doing. And that my laptop is not a server machine, which, yeah, I already knew ;-).
For each of the process definitions, there is a specific sheet in the spreadsheet linked above.
process01 is the simplest process you can imagine:
process02 is a bit longer, still all passthrough activities, but this obviously generates more history (for example, one entry for each passthrough activity):
process03 tests the parallel gateway (fork and join), as it’s a more expensive activity (there is some locking going on):
process04 goes a bit further with the parallel gateways and tests the impact of using multiple forks/joins at once:
process05 is about using a service task (with a Java class) together with an exclusive gateway:
multi-instance is a process definition that tests the embedded subprocess construct, with a parallel gateway in the middle. Knowing a bit about the internals of the Activiti engine, this process most likely will be the slowest:
usertask01 to 03 are quite similar: 01 has just one user task, 02 has seven sequential ones, and 03 has two user tasks with a parallel gateway as fork/join.
And lastly, the servicetask-vars01/02 process definitions have the same structure as usertask02, but with a service task that generates process variables (four process variables for the 01 version, fifty for the 02 version). The 02 version also has a user task.
The first batch of process definitions we’ll look at is process01/02/03/04/05. These are all process definitions that complete right after starting a process instance, with no wait states, which makes them prime candidates for testing the raw overhead the Activiti engine adds.
Here are the throughput and average timing numbers (click to enlarge):
And the similar numbers for the Oracle RDS setup (click to enlarge):
These numbers show two things clearly:
The second batch of process definitions are those with a user task (I’ve made a crude distinction here, as a more detailed look would add way too much information). One big difference with the previous batch is the way time is measured. For the previous batch it was easy, as the process instance completed right after starting. These process definitions, however, reach a wait state. The numbers below take into account both the start of the process instance and the task completions. So for usertask02, for example, with seven user tasks, remember that we are talking about eight database transactions (one for the process instance start, seven for the user tasks). This explains the difference in magnitude of the numbers versus the previous batch.
The first graphs show the results for 5.18 on a local Mysql (click to enlarge):
Similar, but now for Oracle (click to enlarge):
A similar analysis as for the previous batch can be made: when adding more threads, the average time goes up, but in the same time period a lot more work gets done. Still, taking into account that we’re talking about multiple database transactions for each process instance run, the numbers are really good.
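To see why both can be true at once, note that with N threads each needing t seconds on average per instance, throughput approaches N/t. A tiny illustration with purely hypothetical numbers, not taken from the benchmark:

```java
public class ThroughputVsLatency {
    public static void main(String[] args) {
        // Hypothetical numbers: average time per instance rises with thread count...
        int[] threads = { 1, 4, 8 };
        double[] avgSeconds = { 0.010, 0.015, 0.025 };

        for (int i = 0; i < threads.length; i++) {
            // ...but throughput ~ threads / avgTime still goes up
            double throughput = threads[i] / avgSeconds[i];
            System.out.printf("%d threads, avg %.0f ms -> ~%.0f instances/sec%n",
                    threads[i], avgSeconds[i] * 1000, throughput);
        }
    }
}
```

So as long as the average time grows more slowly than the thread count, total throughput keeps climbing; the ‘tipping point’ mentioned earlier is where that stops being the case.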
Now, this was only a summary of the number analysis we did. The spreadsheet contains way more data and graphs (but we have to limit this very long blog a bit…).
For example, the spreadsheet shows a detailed analysis for each process definition. Take the throughput graph of process definition process02 (seven manual tasks). Above is the local Mysql, below the Oracle RDS:
First of all, the difference in throughput between 5.17 and 5.18 is very noticeable here (since there is quite a bit of historic data). The numbers are also really impressive in both setups, going well into the thousands for the local db and the hundreds for Oracle RDS.
Another example is the servicetask-vars02 process definition (seven steps, one of which is a service task that generates fifty variables, and one a user task to make sure all variables are written to the database – without a wait state they could otherwise be optimised away). Again, above is the local Mysql, below Oracle RDS:
The difference in the lines is very interesting here. It shows that Oracle has no issue with the additional concurrent load. It also nicely shows the difference between 5.17 and 5.18 again. Do take into account that in this process fifty process variables get written. While the numbers are an order of magnitude lower than in the previous example, considering what is actually happening here, they look very good in my opinion.
The conclusion we made three years ago is still valid today. Obviously the results are better due to better hardware, but I hope the graphs above prove it’s also about continuously improving the engine based on real-life deployments. Again, if you missed it in the introduction above: the bulk insert was conceived as a solution for a problem Google found when running Activiti at scale. The community is really what drives Activiti forward, by challenging it in every way possible!
In general, we can say:
Like I said three years ago: the numbers are what they are, just numbers. Results will vary with use case, hardware and setup. The main point I want to make here is that the Activiti engine is extremely lightweight. The overhead of using Activiti for automating your business processes is small. In general, if you need to automate your business processes or workflows, want top-notch integration with any Java system, and like all of that fast and scalable … look no further!