It’s been three years since we published ‘The Activiti Performance Showdown‘, in which the performance of the Activiti engine (versions 5.9 and 5.10) was benchmarked.
After three years it is time (or perhaps already long overdue) to update the benchmark results, using modern hardware and the latest Activiti engine. We’ve obviously been enhancing and improving the engine with every release, but in the latest release something called ‘bulk insert‘ was added to the core engine.
We worked together with our friends at Google to implement this feature (they actually did the bulk of the work, no pun intended), so a big shout-out and thanks to Abhishek Mittal, Imran Naqvi, Logan Chadderdon, Bo-Kyung Choi and colleagues for our past discussions and code! As you can imagine, the scale at which Google uses Activiti makes for interesting topics and challenges :-). In fact, some major changes in the past (for example the new async job executor) and very interesting items on the future roadmap were driven by discussions with them.
Edit (July 30th 2015): ran the benchmark on Amazon Aurora. Results here: http://www.jorambarrez.be/blog/2015/07/30/activiti-performance-showdown-aurora/
The code used to gather the numbers is the same as three years ago and is on Github: https://github.com/jbarrez/activiti-benchmark
To run the benchmark, simply follow instructions on the Github page. Basically what the test does:
We let the benchmark project mentioned above loose on the following setups:
Setups 1 and 2 are interesting because they show the raw overhead of the engine, without any network delay (as everything runs on the same machine). Of course numbers will vary with different databases, but the main point of this benchmark is to demonstrate how lightweight Activiti is.
Setup 3 is in my eyes probably even more interesting (if that’s even possible), as it runs on ‘real’ servers with real network delay. In our experience, AWS servers aren’t as performant as real physical servers, so that’s only good news when you look at the numbers and are already impressed ;-).
Furthermore, all tests were run in Spring mode (the engine used in a Spring setup) and with a BoneCP connection pool (I used BoneCP, although I know it has been superseded by HikariCP, because I wanted the same setup as three years ago).
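The core of such a throughput measurement can be approximated with plain java.util.concurrent. This is a simplified sketch; the simulated unit of work stands in for starting and completing a process instance, and the actual benchmark project on Github does far more bookkeeping:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThroughputSketch {

    // Submits 'totalRuns' units of work to a fixed-size thread pool and
    // returns the measured throughput in runs/second.
    static double measure(int threads, int totalRuns, Runnable unitOfWork) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int i = 0; i < totalRuns; i++) {
            pool.submit(unitOfWork);
        }
        pool.shutdown();
        try {
            pool.awaitTermination(1, TimeUnit.HOURS);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        double seconds = (System.nanoTime() - start) / 1_000_000_000.0;
        return totalRuns / seconds;
    }

    public static void main(String[] args) {
        // Stand-in for 'start and complete one process instance'.
        AtomicInteger completed = new AtomicInteger();
        double throughput = measure(4, 1_000, completed::incrementAndGet);
        System.out.println(completed.get() + " runs, " + throughput + " runs/sec");
    }
}
```

Varying the `threads` parameter is what produces the different thread pool data points in the graphs below.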
Looking at the results, they are clearly faster than the 2012 numbers for the same processes: the engine has improved, and hardware vendors didn’t sit still either. Generally, I noticed a 2x-4x improvement in the throughput results.
The tests were also run on both Activiti 5.17 and 5.18. So we learn two things:
I bundled all benchmark results in the following Google spreadsheet:
If you prefer Excel:
It has various sheets which you can select below: a sheet for the different test environments and then sheets for each of the tested processes in detail (with pretty graphs!). Each of the sheets contain analysis text at the top. Anyway, let’s look at the different sections in a bit of detail.
Note that the numbers here only show the throughput/second (aka process instances started and completed per second). Way more numbers (including the certainly interesting average timings of process instance execution) can be found in the following links:
I’ve split the results up in two types: the first sections will cover the difference between 5.17 and 5.18 and will highlight the difference the bulk insert makes. In the next sections, we’ll look at the numbers for the process definitions in detail for 5.18.
This maps to the first sheet in the spreadsheet linked above.
When looking at the numbers, we learn that Activiti adds very little overhead when executing processes. There are crazy numbers in there, like 6000+ process instances / second (for a very simple process definition, of course). Keep in mind that this is with a local MySQL. But still, this proves that the overhead of the Activiti engine is very minimal.
The sheet does show a bit of red (meaning 5.18 has a lower throughput) here and there. This is logical: there is a minimal amount of data being produced (no history) and the average timings are sometimes really low (a few milliseconds in some cases!). This means that doing a bit of extra housekeeping in a hashmap (which we do for the bulk insert) can already impact the numbers. The ‘red’ numbers are generally within a 10% range, which is OK.
In my view, the ‘random’ execution (2500 randomly chosen process definitions using an equal distribution, throughput/second) is the most interesting one, as in a typical system, multiple process instances of different process definitions will be executed at the same time. The graph below shows that 5.17 and 5.18 are quite close to each other, which was to be expected.
There are however two outliers in the data here: the service task vars 1 and 2 process definitions. These process definitions are pretty simple: they have seven steps (version 2 has a user task), one of which is a service task that generates 4 and 50 process variables respectively. For these two cases, with more data (each process variable is an insert), we see an improvement from 20% up to 50%! So it seems the bulk insert starts to pay off here. That makes us hopeful for the next tests with historic data!
This maps to the second sheet in the spreadsheet linked above.
In the benchmark we did three years ago, we already learned history has a huge impact on performance. Of course, most people want to run with history enabled, as this is one of the main benefits of using a workflow engine like Activiti.
As we suspected from the previous setup, enabling history makes the bulk insert shine. In general we see nice improvements, in the 5%-40% range. Tests that produce less data (e.g. the user task tests, which do a lot of task service polling) see a bit of red (meaning lower performance), but this is probably due to the same reasons as mentioned above and well within the acceptable <10% range. So this proves that the bulk insert does have a very good impact on performance when a ‘normal’ amount of data is being generated. Also, in general we again see that the overhead of the Activiti engine is very minimal, with very nice throughput/second numbers (think about it: doing hundreds of process instance executions per second is a lot when you translate that to process instances / day!).
The random execution graph (same as above, 2500 random executions, throughput/second), now shows that 5.18 keeps its distance from 5.17. For all threadpool settings, 5.18 can do more process instance executions in the same time period.
This maps to the third sheet in the spreadsheet linked above.
Now here it gets different from the tests we did three years ago. We have two servers, both in the Amazon cloud, both with real network delays. One thing to notice here is that the throughput/second is lower than in the previous setup (about a quarter of the throughput). Again, the real network delays of the Amazon infrastructure are in play here.
The reasoning behind the bulk insert idea was that, when switching to a data center with a higher network delay, it would be beneficial if we could squeeze more work into each roundtrip we do. And oh boy, that surely is the case. If you look at the numbers in the sheet, they are in general green, or within a 5% difference. But more importantly, we see improvements of up to 80% for some cases (not surprisingly, those cases with a lot of data being produced)!
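The roundtrip reasoning can be illustrated with a minimal, hypothetical sketch in plain Java: instead of one INSERT statement per row, all rows are folded into a single multi-row INSERT. This is not the engine’s actual implementation (which batches its JDBC statements); it only shows the idea:

```java
import java.util.List;
import java.util.StringJoiner;

public class BulkInsertSketch {

    // Naive approach: one INSERT statement (and potentially one network
    // roundtrip) per row.
    static int statementsForSingleInserts(List<List<String>> rows) {
        return rows.size();
    }

    // Bulk variant: fold all rows into a single multi-row INSERT, so that
    // one roundtrip carries all the data.
    static String buildBulkInsert(String table, List<List<String>> rows) {
        StringJoiner values = new StringJoiner(", ");
        for (List<String> row : rows) {
            values.add("('" + String.join("', '", row) + "')");
        }
        return "INSERT INTO " + table + " VALUES " + values;
    }

    public static void main(String[] args) {
        // Two illustrative rows for a variable table (column values are made up).
        List<List<String>> rows = List.of(
                List.of("1", "amount", "100"),
                List.of("2", "customer", "acme"));
        System.out.println(statementsForSingleInserts(rows) + " single inserts vs 1 bulk insert:");
        System.out.println(buildBulkInsert("ACT_RU_VARIABLE", rows));
    }
}
```

The higher the network delay per roundtrip, the bigger the relative win of the second approach, which matches what the Oracle RDS numbers show.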
The random execution graph (same setup as above, 2500 random executions, throughput/second) shows the following:
On average, there is a 15% gain for this particular test (which is awesome). Also interesting is the shape of the graph: a straight line, diverging further from the 5.17 line the more threads we add. But if you scroll up, you can see the shape is very different when running on my laptop. In fact, you can say we reached a tipping point around 8 threads for the previous two graphs, while here there is no sign of tipping yet. This means we could probably load the Oracle RDS database way more than we were doing. And that my laptop is not a server machine, which, yeah, I already knew ;-).
For each of the process definitions, there is a specific sheet in the spreadsheet linked above.
process01 is the simplest process you can imagine:
process02 is a bit longer, still all passthrough activities, but it obviously generates more history (for example, one entry for each passthrough activity):
process03 tests the parallel gateway (fork and join), as it’s a more expensive activity (there is some locking going on):
process04 goes a bit further with the parallel gateways, and tests the impact of using multiple join/forks at once:
process05 is about using a service task (with a Java class) together with an exclusive gateway:
multi-instance is a process definition that tests the embedded subprocess construct, with a parallel gateway in the middle. Knowing a bit about the internals of the Activiti engine, this process most likely will be the slowest:
usertask01 to 03 are quite similar: 01 has just one user task, 02 has seven sequential ones, and 03 has two user tasks with a parallel gateway as fork/join.
And lastly, the servicetask-vars01/02 process definitions have the same structure as usertask02, but with a service task that generates process variables (four process vars for the 01 version, fifty process variables for the 02 version). The 02 version also has a user task.
The first batch of process definitions we’ll look at is process01/02/03/04/05. These are all process definitions that complete right after starting a process instance; there is no wait state. That makes them prime candidates for testing the raw overhead the Activiti engine adds.
Here are the throughput and average timing numbers (click to enlarge):
And the similar numbers for the Oracle RDS setup (click to enlarge):
These numbers show two things clearly:
The second batch of process definitions are those with a user task (I’ve made a crude distinction here, as looking in more detail would add way too much information). One big difference with the previous batch is the way time is measured. For the previous batch it was easy, as the process instance completed right after start. These process definitions, however, reach a wait state. The numbers below take into account both the starting of the process instance and the task completions. So for usertask02, for example, with its seven user tasks, remember that we are talking about eight database transactions (one for the process instance start, seven for the user task completions). Which explains the difference in magnitude of the numbers versus the previous batch.
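The arithmetic behind this is simple but worth making explicit; a small sketch (the 50 runs/second figure is purely hypothetical):

```java
public class UserTaskRunMath {

    // A run of a user-task process spans one transaction for the process
    // instance start plus one transaction per user task completion.
    static int transactionsPerRun(int userTasks) {
        return 1 + userTasks;
    }

    // Converts a measured 'runs per second' into database transactions per
    // second, which is the fairer number to compare against the first batch.
    static double transactionsPerSecond(double runsPerSecond, int userTasks) {
        return runsPerSecond * transactionsPerRun(userTasks);
    }

    public static void main(String[] args) {
        // usertask02 has seven user tasks, so 8 transactions per run:
        // a hypothetical 50 runs/second thus means 400 transactions/second.
        System.out.println(transactionsPerSecond(50, 7) + " transactions/sec");
    }
}
```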
The first graphs show the results for 5.18 on a local Mysql (click to enlarge):
Similar, but now for Oracle (click to enlarge):
A similar analysis as for the previous batch can be made: when adding more threads, the average time goes up, but in the same time period a lot more work gets done. Still, taking into account that we’re talking about multiple database transactions for each process instance run, the numbers are really good.
Now, this was only a summary of the analysis we did. The spreadsheet contains way more data and graphs (but we have to limit this very long blog post a bit…).
For example, the spreadsheet shows a detailed analysis for each process definition. Take the throughput graph of process definition process02 (seven manual tasks). Above is the local Mysql, below the Oracle RDS:
First of all, the difference in throughput between 5.17 and 5.18 is very noticeable here (since there is quite a bit of historic data). But the numbers themselves are also really impressive in both setups, going well into the thousands for the local db and the hundreds for Oracle RDS.
Another example is the servicetask-vars02 process definition (seven steps, one of which is a service task that generates fifty variables, and one a user task to make sure all variables are written to the database; without the wait state they could otherwise be optimised away). Again, above is the local Mysql, below Oracle RDS:
The difference in the shape of the lines is very interesting here. It shows that Oracle has no issue with the additional concurrent load. It also nicely shows the difference between 5.17 and 5.18 again. Do take into account that in this process fifty process variables get written. While the numbers are an order of magnitude lower than in the previous example, considering what is actually happening they look very good in my opinion.
The conclusion we made three years ago is still valid today. Obviously results are better due to better hardware, but I hope the graphs above prove it’s also about continuously improving the engine based on real-life deployments. Again, if you missed it in the introduction above: the bulk insert was conceived as a solution for a problem Google found when running Activiti at scale. The community is really what drives Activiti forward by challenging it in every way possible!
In general we can say
Like I said three years ago: the numbers are what they are, just numbers. Results will vary per use case, hardware and setup. The main point I want to make here is that the Activiti engine is extremely lightweight. The overhead of using Activiti for automating your business processes is small. In general, if you need to automate your business processes or workflows, you want top-notch integration with any Java system and you like all of that fast and scalable … look no further!
A while ago, somebody proclaimed their application was sometimes going out of memory due to Activiti. I don’t need to tell you that this hurt my developer heart. We know our architecture is sound and very resource-friendly. But without hard numbers, anybody can just blurt out that Activiti is a memory hog without us being able to counter it.
So I decided to do the sane thing. I measured … and boy, was I surprised with the results!
I cobbled together something which would mimic typical Activiti usage. The code is open and available on
This program does the following:
The processes that are started are randomly chosen from five deployed processes (in the order of the picture below):
I decided to use the following parameters for running the test. Note that I did not try to play around with these settings. It could very well be that a much higher throughput is possible, but I believe the numbers I chose are an adequate representation of a company doing a fair amount of business process management.
Again, I didn’t check what the limit of these numbers is. It could very well be that you can start 500 processes per second. But the point here is memory usage.
Also, I’m using a standard MySQL installation (just installed, nothing tweaked) as the database.
To make it a bit interesting, I decided to start low and build up from there. So I ran the benchmark using 32 MB of heap space:
java -jar -Xms32M -Xmx32M -XX:+UseG1GC activiti-memory-usage.jar
Note that I’m using the new G1 garbage collector, which is supposed to perform well in cases where memory usage is more than 50% of the max heap. I also attached the Yourkit profiler to get insight into the memory usage. I let the benchmark run for 30 minutes. When I came back, the following statistics were shown:
So to my surprise, 32 MB was enough to finish the benchmark! And during those 30 minutes, 3157 process instances were finished and 12802 tasks were completed! Even more: the profiler showed me that it wasn’t even using all of the available memory (see the first chart)!
You can also see that when the garbage collector passes by, only 13MB is being used:
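You can observe similar numbers without a profiler attached, using plain Runtime calls; a minimal sketch:

```java
public class HeapStats {

    // Used heap = currently allocated heap minus the free part of it.
    static long usedMb() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("max heap:  " + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
        System.out.println("used heap: " + usedMb() + " MB");
    }
}
```

Printing these periodically from a background thread gives a rough equivalent of the profiler’s memory chart.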
And the CPU was really bored during the benchmark: it never really goes above 10% usage.
Also, there was quite a bit of garbage collecting going on (28 seconds over 30 minutes), which is to be expected:
The first test taught us that 32 MB is more than enough to run this ‘BPM platform’. And like I said, I believe the load isn’t that different from that of a typical company using a BPM solution. But how low can we go?
So I reran my tests using less memory. I quickly learned that when throwing less than 32 MB of RAM at it, I couldn’t complete the benchmark with the Yourkit profiler attached; probably the profiler agent also steals some memory. So I ran the next benchmarks without the profiler, using less memory:
java -jar -XmsXXXM -XmxXXXM -XX:+UseG1GC activiti-memory-usage.jar
I tried 24 MB. Success!
I went down to 16 MB. Success!
I went down to 14 MB. Dang! Out of heap space. But no worries: the exception occurred when the BPMN diagram was generated during process deployment. This takes quite a bit of memory, as Java2D is involved and the PNG is built up in memory. So I configured the engine to not generate this diagram (setting ‘createDiagramOnDeploy’ to false). And yes. Success!
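For reference, in an XML-configured engine this is a one-line property. The bean class and surrounding configuration shown here are an assumption; the createDiagramOnDeploy property is the relevant knob:

```xml
<bean id="processEngineConfiguration"
      class="org.activiti.engine.impl.cfg.StandaloneProcessEngineConfiguration">
  <!-- datasource and other properties omitted -->
  <!-- skip building the PNG diagram (Java2D, in-memory) on deployment -->
  <property name="createDiagramOnDeploy" value="false" />
</bean>
```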
I went down to 12 MB. Success!
And 12 MB of RAM was the lowest I could go. With less memory you quickly get an ‘out of heap space’ exception. The statistics for the 12 MB run are actually quite similar to those of the 32 MB run.
Let me rephrase that: a measly twelve megabytes of RAM!! Twelve!!
Activiti (or at least my approximation of a typical Activiti load) needs 12 MB of memory to run. Probably even less, because the fifty user threads also take up some memory here. To put this in perspective:
Edit 5 Feb 2013: see the comments below. Andreas has succeeded in running it with 9 MB!
Of course, in a ‘real’ application you’d also need a web container, servlets, a REST layer, etc. Also, I didn’t touch the permgen settings. But that goes for all Java programs. The point remains the same: Activiti is REALLY memory friendly! And we learned earlier that Activiti is also really fast.
So why even bother looking at the competition?
Tijs Rademakers (already a full-time Activiti team member for two months now … time sure flies!) posted on his blog an overview of some new features in the latest Activiti Eclipse Designer release. All features are demonstrated with a short movie, so it’s all very digestible :-).
I’m a huuuuuge fan of the quick edit feature! This is a real productivity booster.
(for those with a limited attention span: there is a screencast at the bottom!)
2010 was awesome. We had the launch and explosive growth of Activiti in ways that none of us were able to forecast when we started the Activiti adventure. 2011 will continue to amaze, no single doubt about that. To kick off this 2011-amazement-rollercoaster-ride, I’m very proud to introduce the latest addition to the Activiti platform: Activiti KickStart.
KickStart grew out of the idea that each and every company has processes that are done in an ad-hoc way. These are processes that are ‘discovered’ on the fly: some people want to collaborate, or a certain document needs to be handled in a specific order by different departments. A BPM platform such as Activiti is a well-suited solution here, but the threshold and cost to actually model, deploy and execute these kinds of processes in the traditional sense is way too high.
Activiti KickStart gives you a simple and intuitive UI that allows you to create such processes in a matter of minutes. No need to model anything, no need to actually know or understand BPMN, no need to do any coding, … KickStart really and seriously lowers the threshold to automate your workflow processes.
The processes created with KickStart are directly deployable to the Activiti repository. They are also immediately usable in Activiti Explorer, and they are fully BPMN 2.0 compliant, which means they can be edited in any modeling tool that understands the BPMN 2.0 file format. And best of all, the workflows can be edited at any time, truly honoring the adhoc nature of these processes.
Obviously, you are not limited to these use cases. As history proves, people always tend to use and enhance these things in ways we can’t imagine today :-).
Activiti KickStart is available today! It is part of the freshly released 5.1 release, and installed by default if you run the Activiti demo setup. Just visit activiti.org and download the latest release, we’re open source after all :-).
Note that KickStart is by no means ‘finished’ (which software product ever is?). But in the Activiti and open-source way of doing things, we want to show you as early as possible what we’re cooking. With the feedback, ideas and contributions of you and the rest of the Activiti community, KickStart will grow and mature in a way no commercial vendor can keep up with.
A picture is worth a thousand words, so a movie will definitely be able to show you the power and ease of Activiti KickStart.
In the past years, I’ve seen my fair share of meetings where we needed to draw business processes, ad-hoc or working on existing models. Poster-sized paper, whiteboards, beer coasters … I’ve used it all. I could tell quite some hilarious stories about those meetings, but that’s probably content for another post. Main thing is: in 2010, this is just not the way the game is played anymore.
The Activiti BPM suite ships out-of-the-box with a cool web-based modeling tool, which allows anybody to collaborate from anywhere on their business processes. This modeling tool was donated to Activiti by our friends at Signavio.
And today, they’ve pushed out iPad support for their process modeling tool. This is just way too cool if you think about it. Discuss, model and collaborate on processes from basically everywhere you are – with decent graphics and all stored on the server. Gone are the ‘quirks’ attached to using ancient ‘technologies’ such as losing the paper, wet beer coaster, unreadable scribbling, etc.
Check it out in this video from the Signavio team itself.
And now I *really* want an iPad….
Activiti 5.0.alpha2 has just been released!
The key feature of this release is the task form functionality; many thanks to Erik Winlof for implementing it! Nothing could give it more credit than a screencast.
In this screencast, you’ll be able to enjoy:
Don’t forget to put on your speakers and to check the movie out in fullscreen!
Activiti is a super-fast and rock-solid BPM and workflow engine that natively runs BPMN 2.0. It’s completely open source (Apache license) and embeddable in any Java environment.
Official Alfresco press release: click here.