Posts Tagged: Activiti

The Activiti Performance Showdown 2015


It’s been three years since we published ‘The Activiti Performance Showdown’, in which the performance of the Activiti engine (versions 5.9 and 5.10) was benchmarked.

After three years it is time (or perhaps long overdue) to update the benchmark results, using modern hardware and the latest Activiti engine. We’ve obviously been enhancing and improving the engine with every release, but in the latest release something called bulk insert was added to the core engine.

We worked together with our friends at Google for implementing this feature (they actually did the bulk of the work, no pun intended) so a big shout-out and thanks to Abhishek Mittal, Imran Naqvi, Logan Chadderdon, Bo-Kyung Choi and colleagues for our past discussions and code! As you can imagine, the scale at which Google uses Activiti makes for interesting topics and challenges :-). In fact, some major changes in the past (for example the new Async job executor) and very interesting items on the future roadmap were driven by discussions with them.

Edit (July 30th 2015): ran the benchmark on Amazon Aurora. Results here:

The Activiti Benchmark project

The code used to gather the numbers is the same as three years ago and is available on Github:

To run the benchmark, simply follow instructions on the Github page. Basically what the test does:

  • There is a set of test process definitions
  • The benchmark can be run with no history or with ‘audit’ history
  • Each process definition is started x times using a thread pool from 1 to 10 threads
  • All process definitions are put into a list, of which x random selections are chosen to start a process instance
  • The results are nicely gathered in a basic HTML page
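The loop above can be sketched as follows (a minimal sketch: `startAndCompleteInstance` is a hypothetical stand-in for the real engine call, and the definition list is shortened; the actual code is in the Github project):

```java
import java.util.*;
import java.util.concurrent.*;

public class BenchmarkSketch {

    // Stand-in for one full run: in the real benchmark this calls
    // runtimeService.startProcessInstanceByKey(...) and runs the instance
    // to completion.
    static void startAndCompleteInstance(String processDefinitionKey) {
        // no-op in this sketch
    }

    // The throughput metric used throughout the post:
    // completed process instances per second.
    static double throughput(int executions, double seconds) {
        return executions / seconds;
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> definitions = Arrays.asList("process01", "process02", "usertask01");
        int executions = 2500;
        Random random = new Random();

        // Repeat the run with thread pools of size 1 to 10
        for (int threads = 1; threads <= 10; threads++) {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            long start = System.currentTimeMillis();
            for (int i = 0; i < executions; i++) {
                // Random selection from the list of deployed definitions
                String key = definitions.get(random.nextInt(definitions.size()));
                pool.submit(() -> startAndCompleteInstance(key));
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
            double seconds = Math.max(0.001, (System.currentTimeMillis() - start) / 1000.0);
            System.out.printf("%2d threads: %.1f instances/second%n",
                threads, throughput(executions, seconds));
        }
    }
}
```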

Test environment

We let the benchmark project mentioned above loose on the following setups:

  1. Benchmark logic + local MySQL 5.6.20 DB on my dev laptop (Macbook Pro, i7 CPU, 16GB ram, SSD disk) with history disabled
  2. Benchmark logic + local MySQL 5.6.20 DB on the same dev laptop with history on audit
  3. The benchmark logic on a separate Amazon EC2 server (type c4.4xlarge, 16 vCPUs, 30GB RAM, provisioned IOPS SSD) and an Oracle database on an Amazon RDS server (type m3.2xlarge: 8 vCPUs, 30GB RAM, provisioned IOPS SSD), both in the same zone (Ireland). The reason I selected Oracle instead of MySQL on RDS is that I couldn’t get MySQL to perform decently on RDS for odd reasons, so in the end I gave up. We probably have more users running on Oracle than on MySQL anyway.

Setups 1 and 2 are interesting because they show the raw overhead of the engine, without any network delays (as everything runs on the same machine). Of course the numbers will vary with different databases, but the main point of this benchmark is to demonstrate how lightweight Activiti is.

Setup 3 is in my eyes probably even more interesting (if that’s even possible), as it runs on ‘real’ servers with real network delay. In our experience, AWS servers aren’t as performant as real physical servers, so that’s only good news when you look at the numbers and are already impressed ;-).

Furthermore, all tests were run in Spring mode (the engine used in a Spring setup) and with a BoneCP connection pool (I used BoneCP, although I know it has been superseded by HikariCP, because I wanted the same setup as three years ago).
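For reference, a setup along these lines (a sketch with assumed, placeholder connection values, not the exact benchmark configuration) wires a BoneCP pool into a Spring-managed engine:

```java
import org.activiti.engine.ProcessEngine;
import org.activiti.spring.SpringProcessEngineConfiguration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;

import com.jolbox.bonecp.BoneCPDataSource;

public class EngineSetup {

    public static ProcessEngine buildEngine() {
        // Connection pool (all values here are placeholders)
        BoneCPDataSource dataSource = new BoneCPDataSource();
        dataSource.setDriverClass("com.mysql.jdbc.Driver");
        dataSource.setJdbcUrl("jdbc:mysql://localhost:3306/activiti");
        dataSource.setUsername("activiti");
        dataSource.setPassword("activiti");

        // Engine in Spring mode: transactions are managed by Spring
        SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
        config.setDataSource(dataSource);
        config.setTransactionManager(new DataSourceTransactionManager(dataSource));
        config.setDatabaseSchemaUpdate("true");
        config.setHistory("none"); // "audit" for the history-enabled runs
        return config.buildProcessEngine();
    }
}
```

(This is a configuration fragment; it needs a running MySQL and the Activiti, Spring and BoneCP jars on the classpath.)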

The numbers

When looking at the results, they are obviously faster than the 2012 numbers for the same processes: the engine has improved, and hardware vendors didn’t sit still either. In general, I noticed a 2x – 4x improvement in throughput.

The tests were also run on both Activiti 5.17 and 5.18, so we learn two things:

  • How performant the Activiti engine is (i.e. how much overhead it adds)
  • How much influence the bulk insert change mentioned above has

I bundled all benchmark results in the following Google spreadsheet:

If you prefer Excel:

It has various sheets, which you can select at the bottom: a sheet for each of the test environments, and then sheets for each of the tested processes in detail (with pretty graphs!). Each of the sheets contains analysis text at the top. Anyway, let’s look at the different sections in a bit of detail.

Note that the numbers here only show the throughput/second (aka process instances started and completed per second). Way more numbers (including the certainly interesting average timings of process instance execution) can be found in the following links:

Results: introduction

I’ve split the results up into two parts: the first sections cover the difference between 5.17 and 5.18 and highlight the difference the bulk insert makes. In the sections after that, we’ll look at the 5.18 numbers for the process definitions in detail.

Activiti 5.17 vs 5.18 – local MySQL, no history

This maps to the first sheet in the spreadsheet linked above.

When looking at the numbers, we learn that Activiti adds very little overhead when executing processes. There are crazy numbers in there, like +6000 process instances / second (for a very simple process definition, of course). Keep in mind that this is with a local MySQL. But still, this proves that the overhead of the Activiti engine is minimal.

The sheet does show a bit of red here and there (meaning 5.18 has a lower throughput). This is logical: a minimal amount of data is being produced (no history) and the average timings are sometimes really low (a few milliseconds in some cases!). This means that doing a bit of extra housekeeping in a hashmap (which we do for bulk insert) can already impact the numbers. The ‘red’ numbers are generally within a 10% range, which is acceptable.
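As a rough illustration of the bulk insert idea (this is a simplified sketch, not the actual engine code): during a transaction the pending inserts are collected per entity type, and at flush time each type is written with one multi-row statement instead of one statement per row:

```java
import java.util.*;

// Simplified illustration of bulk insert: collect rows per entity type,
// then emit one multi-row INSERT per type at flush time, saving a database
// round trip for every row after the first.
public class BulkInsertSketch {

    private final Map<String, List<String>> pendingInserts = new HashMap<>();

    // Called for every entity the engine would otherwise insert immediately.
    public void scheduleInsert(String entityType, String valuesClause) {
        pendingInserts.computeIfAbsent(entityType, k -> new ArrayList<>())
            .add(valuesClause);
    }

    // One statement per entity type covers all rows collected for that type.
    public List<String> flush() {
        List<String> statements = new ArrayList<>();
        for (Map.Entry<String, List<String>> entry : pendingInserts.entrySet()) {
            statements.add("INSERT INTO " + entry.getKey() + " VALUES "
                + String.join(", ", entry.getValue()));
        }
        pendingInserts.clear();
        return statements;
    }
}
```

This also explains why the low-data runs can show a little red: maintaining that map is pure overhead when there are only one or two rows to insert anyway.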

In my view, the ‘random’ execution (2500 randomly chosen process definitions using an equal distribution, throughput/second) is the most interesting one, as in a typical system, multiple process instances of different process definitions will be executed at the same time. The graph below shows that 5.17 and 5.18 are quite close to each other, which was to be expected.

Screenshot 2015-07-17 17.32.04


There are however two outliers in the data here: the service task vars 1 and 2 process definitions. These process definitions are pretty simple: they have seven steps (version 2 has a user task), one of which is a service task that generates 4 and 50 process variables respectively. For these two cases, with more data (each process variable is an insert), we see an increase of 20% up to 50%! So it seems the bulk insert starts to pay off here. That makes us hopeful for the next tests with historic data!

Activiti 5.17 vs 5.18 – local MySQL, history enabled (audit)

This maps to the second sheet in the spreadsheet linked above.

In the benchmark we did three years ago, we already learned history has a huge impact on performance. Of course, most people want to run with history enabled, as this is one of the main benefits of using a workflow engine like Activiti.

As we suspected from the previous setup, enabling history makes the bulk insert shine. In general we see nice improvements, in the 5%–40% range. Tests that produce less data (e.g. the user task tests, which do a lot of task service polling) show a bit of red (meaning lower performance), but this is probably due to the same reasons as mentioned above and well within the acceptable (<10%) range. So this proves that the bulk insert has a very good impact on performance when a ‘normal’ amount of data is being generated. Also, we again see that the overhead of the Activiti engine is minimal, with very nice throughput/second numbers (think about it: doing hundreds of process instance executions per second is a lot when you translate it to process instances / day!).

The random execution graph (same as above, 2500 random executions, throughput/second), now shows that 5.18 keeps its distance from 5.17. For all threadpool settings, 5.18 can do more process instance executions in the same time period.

Screenshot 2015-07-17 17.45.51


Activiti 5.17 vs 5.18 – Oracle RDS, history enabled (audit)


This maps to the third sheet in the spreadsheet linked above.

Now here it gets different from the tests we did three years ago. We have two servers, both in the cloud (Amazon), both with real network delays. One thing to notice here is that the throughput/second is lower than in the previous setup (about 1/4 of the throughput). Again, the real network delays of the Amazon infrastructure are in play here.

The reasoning behind the bulk insert idea was that, when switching to a data center with a higher network delay, it would be beneficial if we could squeeze more work into each roundtrip. And oh boy, that surely is the case. If you look at the numbers in the sheet, they are in general green or within a < 5% difference. More importantly, we see improvements of up to 80% for some cases (not surprisingly, those cases where a lot of data is produced)!

The random execution graph (same setup as above, 2500 random executions, throughput/second) shows the following:

Screenshot 2015-07-17 17.54.22


On average, there is a 15% gain for this particular test (which is awesome). Also interesting here is the shape of the graph: a linear line, diverging from the 5.17 line the more threads we add. If you scroll up, you can see the shape is very different when running on my laptop. In fact, you could say that we reached a tipping point around 8 threads in the previous two graphs, while here there is no sign of tipping yet. This means we could probably load the Oracle RDS database way more than we were doing. And that my laptop is not a server machine, which, yeah, I already knew ;-).

Process definitions

For each of the process definitions, there is a specific sheet in the spreadsheet linked above.

process01 is the simplest process you can imagine:

Screenshot 2015-07-20 09.28.20

process02 is a bit longer, still all passthrough activities, but obviously this generates more history (for example, one entry for each passthrough activity):

Screenshot 2015-07-20 09.31.01

process03 tests the parallel gateway (fork and join), as it’s a more expensive activity (there is some locking going on):

Screenshot 2015-07-20 09.31.25

process04 goes a bit further with the parallel gateways, and tests the impact of using multiple joins/forks at once:

Screenshot 2015-07-20 09.32.10

process05 is about using a service task (with a Java class) together with an exclusive gateway:

Screenshot 2015-07-20 09.32.47


multi-instance is a process definition that tests the embedded subprocess construct, with a parallel gateway in the middle. Knowing a bit about the internals of the Activiti engine, this process most likely will be the slowest:

Screenshot 2015-07-20 10.42.57

usertask01 to 03 are quite similar: 01 has just one user task, 02 has seven sequential ones, and 03 has two user tasks with a parallel gateway as fork/join.

Screenshot 2015-07-20 10.45.20


And lastly, the servicetask-vars01/02 process definitions have the same structure as usertask02, but with a service task that generates process variables (four process vars for the 01 version, fifty process variables for the 02 version). The 02 version also has a user task.

Screenshot 2015-07-20 10.47.22

Detailed numbers for process definitions

The first batch of process definitions we’ll look at is process01/02/03/04/05. These are all process definitions that complete right after starting a process instance, with no wait states. That makes them prime candidates for testing the raw overhead the Activiti engine adds.

Here are the throughput and average timing numbers (click to enlarge):



And the similar numbers for the Oracle RDS setup (click to enlarge):


These numbers show two things clearly:

  • The average timing goes up due to context switches when adding more worker threads, but in the same period of time more can be done in general. This shows that Activiti scales nicely horizontally when adding more threads (and thus, more nodes in a cluster).
  • The raw overhead (i.e. what the Activiti engine adds as logic on top of the database queries) is really low. If you look at the throughput charts, take into account that this is process instances/second! On the local MySQL we’re talking thousands/second, and on Oracle hundreds/second!

The second batch of process definitions are those with a user task (I’ve made a crude distinction here, as looking at it in more detail would add way too much information). One big difference with the previous batch is the way time is measured. For the previous batch it was easy, as the process instance completes right after start. For these process definitions, however, the process instance reaches a wait state. The numbers below take into account both the start of the process instance and the task completions. For usertask02, for example, with seven user tasks, this means you have to remember that we are talking about 8 database transactions (one for the process instance start, seven for the user task completions). This explains the difference in magnitude of the numbers versus the previous batch.
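In sketch form, the measured unit for these process definitions looks like this (the helper names are hypothetical stand-ins for the real `RuntimeService`/`TaskService` calls; the real code is in the benchmark project):

```java
import java.util.*;

public class TimedRun {

    // Hypothetical stand-ins for the real engine calls
    // (RuntimeService.startProcessInstanceByKey and TaskService.complete).
    static String startProcessInstance(String processDefinitionKey) { return "instance-1"; }
    static List<String> openTasks(String processInstanceId) { return Collections.emptyList(); }
    static void completeTask(String taskId) { }

    // The measured unit for the user task processes: the instance start plus
    // every task completion until the instance finishes. For usertask02 that
    // is 8 database transactions (1 start + 7 task completions).
    static long runOneInstanceNanos(String processDefinitionKey) {
        long start = System.nanoTime();
        String instanceId = startProcessInstance(processDefinitionKey);
        List<String> tasks;
        while (!(tasks = openTasks(instanceId)).isEmpty()) {
            for (String taskId : tasks) {
                completeTask(taskId);
            }
        }
        return System.nanoTime() - start;
    }
}
```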

The first graphs show the results for 5.18 on a local Mysql (click to enlarge):


Similar, but now for Oracle (click to enlarge):


A similar analysis as with the previous batch can be made: when adding more threads, the average time goes up, but in the same time period a lot more work can be done. Still, taking into account we’re talking here about multiple database transactions for each process instance run, the numbers are really good.

Now, this was only a summary of the number analysis we did. The spreadsheet contains way more data and graphs (but we have to limit this very long blog a bit…). 

For example, the spreadsheet shows the detailed analysis for each process definition in detail. Take for example the throughput graph of process definition process02 (seven manual tasks). Above is the local Mysql, below is the Oracle RDS:

Screenshot 2015-07-20 11.03.33

First of all, the difference in throughput between 5.17 and 5.18 is very noticeable here (since quite a bit of historic data is produced). The numbers are also really impressive in both setups: well into the thousands for the local DB and hundreds for Oracle RDS.

Another example is the servicetask-vars02 process definition (seven steps, one of which is a service task that generates fifty variables, and one a user task to make sure all variables are written to the database – otherwise they could be optimised away without a wait state). Again, above is the local MySQL, below Oracle RDS:

Screenshot 2015-07-20 11.04.50

The difference in the shape of the lines is very interesting here. It shows that Oracle has no issue with the additional concurrent load. It also nicely shows the difference between 5.17 and 5.18 again. Do take into account that in this process fifty process variables get written. While the numbers are a magnitude lower than in the previous example, taking into account what is happening makes them look very good, in my opinion.



The conclusion we made three years ago is still valid today. Obviously the results are better due to better hardware, but I hope the graphs above prove it’s also about continuously improving the engine based on real-life deployments. Again, if you missed it in the introduction above, the bulk insert was conceived as a solution for a problem Google found when running Activiti at scale. The community is really what drives Activiti forward, by challenging it in every way possible!

In general we can say:

  • The Activiti engine is fast and overhead is minimal
  • Activiti is written with concurrent setups in mind, and scales nicely when adding more worker threads. Activiti is well designed for high-throughput and high-availability (clustered) architectures.

Like I said three years ago: the numbers are what they are: just numbers. Results will vary by use case, hardware and setup. The main point I want to make here is that the Activiti engine is extremely lightweight. The overhead of using Activiti to automate your business processes is small. In general, if you need to automate your business processes or workflows, want top-notch integration with any Java system, and like all of that fast and scalable … look no further!

What?!? Activiti needs how much memory?

A while ago, somebody proclaimed that their application was sometimes going out of memory due to Activiti. I don’t need to tell you that this hurt my developer heart. We know our architecture is sound and very resource-friendly. But without hard numbers, anybody can just blurt out that Activiti is a memory hog without us being able to counter it.

So I decided to do the sane thing. I measured … and boy, was I surprised with the results!


I cobbled together something which would mimic typical Activiti usage. The code is open and available on

This program does the following:

  • Has a thread that starts a new process instance every few milliseconds
  • Has a thread that fetches counts from the history table and prints them on the screen
  • Has a group of threads mimicking users that fetch and complete tasks
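Structurally, the program looks roughly like this (the stub methods are placeholders for the real Activiti service calls; the actual code is in the Github project):

```java
import java.util.concurrent.*;

public class MemoryUsageSketch {

    // Placeholders for the real Activiti calls in the test program:
    static void startRandomProcessInstance() { /* runtimeService.startProcessInstanceByKey(...) */ }
    static long historicInstanceCount() { return 0L; /* historyService query in the real code */ }
    static void fetchAndCompleteTasks(int userId) { /* taskService query + complete(...) */ }

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

        // Starter thread: a new process instance every 500 ms (120 per minute)
        scheduler.scheduleAtFixedRate(MemoryUsageSketch::startRandomProcessInstance,
            0, 500, TimeUnit.MILLISECONDS);

        // Monitor thread: print history counts periodically
        scheduler.scheduleAtFixedRate(
            () -> System.out.println("finished instances: " + historicInstanceCount()),
            0, 10, TimeUnit.SECONDS);

        // 50 'user' threads that fetch and complete tasks, then sleep 0-20 seconds
        ExecutorService users = Executors.newFixedThreadPool(50);
        for (int i = 0; i < 50; i++) {
            final int userId = i;
            users.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    fetchAndCompleteTasks(userId);
                    try {
                        Thread.sleep(ThreadLocalRandom.current().nextLong(20_000));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
        // The real test shuts all of this down after 30 minutes.
    }
}
```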

The processes that are started are randomly chosen from five deployed processes (in the order of the picture below):

  • A simple process with four user tasks
  • A process with five user tasks and a parallel gateway, where all user tasks are asynchronous
  • A process with a simple script
  • A process with an exclusive choice with four branches
  • A process with a subprocess with a timer



I decided to use the following parameters for running the test. Note that I did not try to play around with these settings. It could very well be that a much higher throughput is possible, but I believe the numbers I chose are an adequate representation of a company doing a fair amount of business process management.

  • 50 users. This means there will be 50 threads asking every x seconds for tasks and completing them
  • Run for 30 minutes
  • Start 120 processes per minute (ie. 2 per second)
  • Have the user threads sleep for a random amount of seconds between 0 and 20.

Again, I didn’t check what the limit of these numbers was. It could very well be you can start 500 processes per second. But the point here is memory usage.

Also, I’m using a standard MySQL installation (just installed, nothing tweaked) as database.

A trip down memory lane

To make it a bit interesting, I decided to start low and build up from there. So I ran the benchmark using 32 MB of heap space:

java -jar -Xms32M -Xmx32M -XX:+UseG1GC  activiti-memory-usage.jar

Note that I’m using the new G1 garbage collector, which is supposed to perform well in cases where memory usage is more than 50% of the max heap. I also attached the YourKit profiler to get insight into the memory usage. I let the benchmark run for 30 minutes. When I came back, the following statistics were shown:

Screen Shot 2013-02-04 at 12.31.55

So, to my surprise, 32 MB was enough to finish the benchmark! In those 30 minutes, 3157 process instances were finished and 12802 tasks completed! And even more: the profiler showed me that it wasn’t even using all of the available memory (see the first chart)!

Screen Shot 2013-02-04 at 12.33.41

You can also see that when the garbage collector passes by, only 13MB is being used:

Screen Shot 2013-02-04 at 12.35.25

And the CPU was really bored during the benchmark: it never really goes above 10% usage.

Screen Shot 2013-02-04 at 12.35.49

Also, there was quite a bit of garbage collecting going on (28 seconds out of 30 minutes), which is to be expected:

Screen Shot 2013-02-04 at 12.37.57

How low can you go?

The first test taught us that 32 MB is more than enough to run this ‘BPM platform’. And like I said, I believe the load isn’t that different from a typical company using a BPM solution. But how low can we go?

So I reran my tests using less memory. I quickly learned that with less than 32 MB of RAM, I couldn’t complete the benchmark with the YourKit profiler attached. Probably the profiler agent also steals some memory. So I ran the benchmarks without the profiler, using less memory:

java -jar -XmsXXXM -XmxXXXM -XX:+UseG1GC activiti-memory-usage.jar

I tried 24 MB. Success!

I went down to 16 MB. Success!

I went down to 14 MB. Dang! Out of heap space. But no worries: the exception occurred when the BPMN diagram was generated during process deployment. This takes quite a bit of memory, as Java2D is involved and the PNG is built up in memory. So I configured the engine not to generate the diagram (setting ‘createDiagramOnDeploy’ to false). And yes: Success!
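For reference, turning off diagram generation is a single property on the engine configuration (shown here as an activiti.cfg.xml fragment; the property name is as in Activiti 5, and the standalone configuration class is just one possible choice):

```xml
<!-- activiti.cfg.xml: skip in-memory PNG diagram generation at deploy time -->
<bean id="processEngineConfiguration"
      class="org.activiti.engine.impl.cfg.StandaloneProcessEngineConfiguration">
  <property name="createDiagramOnDeploy" value="false" />
</bean>
```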

I went down to 12 MB. Success!

And 12 MB of RAM was the lowest I could go. With less memory you quickly get an ‘out of heap space’ exception. The statistics for the 12 MB run are actually quite similar to those of the 32 MB run.

Screen Shot 2013-02-04 at 12.44.34

Let me rephrase that: a measly twelve megabytes of RAM!! Twelve!!


Activiti (or at least my approximation of a typical Activiti load) needs 12 MB of memory to run. Probably even less, because the fifty user threads also take up some memory here. To put this in perspective:

  • An iPhone 5 has 85 times more RAM (1GB).
  • A Raspberry Pi ($25 version) has 21 times more RAM (256 MB). The $35 version has 42 times more RAM (512 MB).
  • An Amazon Micro instance has 51 times more RAM (613 MB).
  • The ‘biggest’ Amazon machine you can get at the moment has 2560 times more memory (30GB).

Edit 5 feb 2013: See comments below. Andreas has succeeded in running it on 9MB! 

Of course, in a ‘real’ application you’d also need a web container, servlets, a REST layer, etc. Also, I didn’t touch the permgen settings. But that is equal for all Java programs. The point remains the same: Activiti is REALLY memory friendly! And we learned earlier that Activiti is also really fast.

So why even bother looking at the competition?

Cool new features in Activiti Designer

Tijs Rademakers (already a full-time Activiti team member for two months now … time sure flies!) posted on his blog an overview of some new features in the latest Activiti Eclipse Designer release. All features are demonstrated with a short movie, so it’s all very digestible :-).

I’m a huuuuuge fan of the quick edit feature! This is really a productivity booster.

Adhoc workflow with Activiti: introducing Activiti KickStart

(for those with a limited attention span: there is a screencast at the bottom!)

2010 was awesome. We had the launch and explosive growth of Activiti in ways that none of us were able to forecast when we started the Activiti adventure. 2011 will continue to amaze, no doubt about that. To kick off this 2011-amazement-rollercoaster-ride, I’m very proud to introduce the latest addition to the Activiti platform: Activiti KickStart.

What’s this all about?

KickStart grew out of the idea that each and every company has processes that are done in an ad-hoc way. These are processes that are ‘discovered’ on the fly: some people want to collaborate, or a certain document needs to be handled in a specific order by different departments. A BPM platform such as Activiti is a well-suited solution here, but the threshold and cost of actually modeling, deploying and executing these kinds of processes in the traditional sense is way too high.

Activiti KickStart gives you a simple and intuitive UI that allows you to create such processes in a matter of minutes. No need to model anything, no need to actually know or understand BPMN, no need to do any coding, … KickStart really and seriously lowers the threshold to automate your workflow processes.

The processes created with KickStart are directly deployable to the Activiti repository. They are also immediately usable in Activiti Explorer, and they are fully BPMN 2.0 compliant, which means they can be edited in any modeling tool that understands the BPMN 2.0 file format. And best of all, the workflows can be edited at any time, truly honoring the adhoc nature of these processes.

Activiti KickStart in action: defining an expense process in a matter of a few clicks

What can I do with it?

  • Adhoc workflow: often, coordination is required between different people or groups in a company. You know how it normally goes: sending an email here, making a telephone call there … which often ends up in a mess where nobody knows what needs to be done, or when. However, a business process management platform such as Activiti is an excellent way of distributing and following up on everything, as it is intended to track exactly such things. KickStart allows you to create processes for adhoc work in a matter of minutes, and to distribute and coordinate tasks between people easily.
  • Prototyping/Proof-of-concept: before diving into complex BPMN 2.0 modeling and thinking about all the complex aspects, it is often wise to get all the people involved aligned, and to work out a prototype that shows the vision of what needs to be done. KickStart allows you to do exactly that: create a business process prototype on the fly, to make your ideas visible to everyone.
  • Simple processes: some processes are just simple by nature, and every company has them. Think of an expense process, a holiday leave process, a hiring process, etc. These kinds of processes are probably already being done using paper or e-mail. KickStart allows you to model these processes quickly and change them whenever needed. As such, KickStart really lowers the threshold to automating these business processes.

Obviously, you are not limited to these use cases. As history proves, people always tend to use and enhance these things in ways we can’t imagine today :-).

When can I use it?

Activiti KickStart is available today! It is part of the freshly released 5.1 release, and installed by default if you run the Activiti demo setup. Just visit and download the latest release, we’re open source after all :-).

Note that KickStart is by no means ‘finished’ (what software product ever is?). But in the Activiti and open source way of doing things, we want to show you as early as possible what we’re cooking. Using the feedback, ideas and contributions of you and the rest of the Activiti community, KickStart will grow and mature at a pace no commercial vendor can keep up with.


A picture is worth a thousand words, so a movie will definitely be able to show you the power and ease of Activiti KickStart.

BPMN 2.0 process modeling on the iPad

In the past years, I’ve seen my fair share of meetings where we needed to draw business processes … ad-hoc or working on existing models. Poster-sized paper, whiteboards, beer coasters, … I’ve used it all. I could tell quite some hilarious stories about those meetings … but that’s probably content for another post. Main thing is: in 2010, this is just not how the game is played anymore.

The Activiti BPM suite ships out-of-the-box with a cool web-based modeling tool, which allows anybody to collaborate on their business processes from anywhere. This modeling tool was donated to Activiti by our friends at Signavio.

And today, they’ve pushed out iPad support for their process modeling tool. This is just way too cool if you think about it. Discuss, model and collaborate on processes from basically anywhere you are – with decent graphics and everything stored on the server. Gone are the ‘quirks’ of ancient ‘technologies’: lost paper, wet beer coasters, unreadable scribbling, etc.

Check it out in this video from the Signavio team itself.

And now I *really* want an iPad….

Screencast: Getting Started with Activiti 5.0.alpha2

Activiti 5.0.alpha2 has just been released!

The key feature of this release is the taskform functionality – many thanks to Erik Winlof for implementing it! Nothing could give it more credit than a screencast.

In this screencast, you’ll be able to enjoy:

  • Setting up the Activiti environment using the demo setup script. As you’ll see, only 27 seconds are needed. That’s about the time it takes to pour a cup of coffee (if you’re fast).
  • The Activiti Probe, REST, Modeler and Explorer webapps.
  • The new taskform functionality. In the screencast, the vacation request BPMN 2.0 example is used. Check out our userguide to learn everything about taskforms and the example process.

Don’t forget to put on your speakers and to check the movie out in fullscreen!

The curtains are pulled: Alfresco launches Activiti


After some weeks of silence, we can now finally pull the curtains … and reveal Activiti to the world!

Activiti is a super-fast and rock-solid BPM and workflow engine that natively runs BPMN 2.0. It’s completely open-source (Apache licence) and embeddable in any Java environment.

Tom and I joined Alfresco about two months ago, and we’ve kept ourselves quite busy. Bundled with this announcement is the first alpha release of Activiti. Go and play with it while it’s hot!

Activiti is all about what made jBPM great, and taking giant leaps from there. Tom nicely summarizes it in his blogpost. A lot more information can be found on the Activiti website.

Official Alfresco press release: click here.