Activiti on the Spring Initializr!

In case you missed the following tweet last week


That’s right! Activiti is now on the Spring Initializr!

This is a huge deal – the Spring Initializr is the place where the journey for many Spring Boot projects starts, so being on that page is a huge honour for us. Of course, we couldn’t have done it without the help of my Spring friends Josh Long (who also contributed the Spring Boot + Activiti code) and Stéphane Nicoll.

And just to prove how awesome it is, here’s a short movie that starts on the Spring Initializr and finishes with a REST endpoint that starts a process instance, all in about 5 minutes!

Achievement Unlocked: 5000 Forum Posts!


I just passed the 5000 forum posts mark on the Activiti Forum. That’s out of a total of 41792 posts, so you can only blame me for 12% of the content on there ;-).

Just thinking about it humbles me tremendously. The forum is such an important tool for improving the Activiti Engine, guarding the quality and getting fresh ideas for features.

Also, doing the math, that averages to 2.5 posts/day (not taking into account any weekends or holidays). Crazy! Thank you for all the conversations over the past years!

Multi-Tenancy with separate database schemas in Activiti

One feature request we’ve heard in the past is that of running the Activiti engine in a multi-tenant way, where the data of a tenant is isolated from the others. Especially in certain cloud/SaaS environments this is a must.

A couple of months ago I was approached by Raphael Gielen, who is a student at the university of Bonn, working on a master thesis about multi-tenancy in Activiti. We got together in a co-working coffee bar a couple of weeks ago, bounced ideas and hacked together a first prototype with database schema isolation for tenants. Very fun :-).

Anyway, we’ve been refining and polishing that code and committed it to the Activiti codebase. Let’s have a look at the existing ways of doing multi-tenancy with Activiti in the first two sections below. In the third section, we’ll dive into the new multi-tenant multi-schema feature sprinkled with some real-working code examples!

Shared Database Multi-tenancy

Activiti has been multi-tenant capable for a while now (since version 5.15). The approach taken was that of a shared database: there are one or more Activiti engines and they all go to the same database. Each entry in the database tables has a tenant identifier, which is best understood as a sort of tag for that data. The Activiti engine and APIs then read and use that tenant identifier to perform their various operations in the context of a tenant.

For example, as shown in the picture below, two different tenants can have a process definition with the same key. The engine and APIs make sure there is no mix-up of data.

[Diagram: shared database multi-tenancy – two tenants, one database, tenant identifier on each entry]

The benefit of this approach is the simplicity of deployment, as there is no difference from setting up a ‘regular’ Activiti engine. The downside is that you have to remember to use the right API calls (i.e. those that take the tenant identifier into account). Also, it has the same problem as any system with shared resources: there will always be competition for the resources between tenants. In most use cases this is fine, but there are use cases that can’t be done in this way, like giving certain tenants more or less system resources.
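To give an idea of what that looks like in practice, here’s a minimal sketch using the tenant-aware variants of the standard API (the process resource name and tenant identifiers below are made up):

// Deploy the same process definition key for two different tenants
repositoryService.createDeployment()
    .addClasspathResource("myProcess.bpmn20.xml")
    .tenantId("acme")
    .deploy();

repositoryService.createDeployment()
    .addClasspathResource("myProcess.bpmn20.xml")
    .tenantId("starkindustries")
    .deploy();

// Start a process instance in the context of one specific tenant
ProcessInstance processInstance =
    runtimeService.startProcessInstanceByKeyAndTenantId("myProcess", "acme");

// Queries can be scoped to a tenant too
List<Task> tasks = taskService.createTaskQuery().taskTenantId("acme").list();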

Multi-Engine Multi-Tenancy

Another approach, which has been possible since the very first version of Activiti, is simply having one engine instance for each tenant:

[Diagram: one engine instance per tenant]

In this setup, each tenant can have different resource configurations or even run on different physical servers. Each engine in this picture can of course be multiple engines for better performance/failover/etc. The benefit now is that the resources are tailored to the tenant. The downside is the more complex setup (multiple database schemas, a different configuration file for each tenant, etc.). Each engine instance will take up memory (but that’s very low with Activiti). Also, you’d need to write some routing component that somehow knows the current tenant context and routes to the correct engine (see the sketch below).

[Diagram: routing component in front of the per-tenant engines]
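A minimal sketch of what such a routing component could look like (this class is hypothetical, not something that ships with Activiti; how the current tenant is resolved depends entirely on your environment):

public class TenantEngineRouter {

  // One fully configured ProcessEngine per tenant, each built from its own configuration
  private Map<String, ProcessEngine> engines = new HashMap<String, ProcessEngine>();

  public void registerTenant(String tenantId, ProcessEngine processEngine) {
    engines.put(tenantId, processEngine);
  }

  // Typically called with a tenant identifier resolved from the logged in user
  public ProcessEngine getProcessEngine(String tenantId) {
    ProcessEngine processEngine = engines.get(tenantId);
    if (processEngine == null) {
      throw new IllegalStateException("No engine registered for tenant " + tenantId);
    }
    return processEngine;
  }
}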

Multi-Schema Multi-Tenancy

The latest addition to the Activiti multi-tenancy story landed two weeks ago (here’s the commit), simultaneously on version 5 and 6. Here, there is a database (schema) for each tenant, but only one engine instance. Again, in practice there might be multiple instances for performance/failover/etc., but the concept is the same:

[Diagram: one engine instance, a separate database schema per tenant]

The benefit is obvious: there is but one engine instance to manage and configure, and the APIs are exactly the same as with a non-multi-tenant engine. But most importantly, the data of a tenant is completely separated from the data of other tenants. The downside (similar to the multi-engine multi-tenant approach) is that someone needs to manage and configure different databases. But the complex engine management is gone.

The commit I linked to above also contains a unit test showing how the Multi-Schema Multi-Tenant engine works.

Building the process engine is easy, as there is a MultiSchemaMultiTenantProcessEngineConfiguration that abstracts away most of the details:

config = new MultiSchemaMultiTenantProcessEngineConfiguration(tenantInfoHolder);

config.registerTenant("alfresco", createDataSource("jdbc:h2:mem:activiti-mt-alfresco;DB_CLOSE_DELAY=1000", "sa", ""));
config.registerTenant("acme", createDataSource("jdbc:h2:mem:activiti-mt-acme;DB_CLOSE_DELAY=1000", "sa", ""));
config.registerTenant("starkindustries", createDataSource("jdbc:h2:mem:activiti-mt-stark;DB_CLOSE_DELAY=1000", "sa", ""));
processEngine = config.buildProcessEngine();

This looks quite similar to booting up a regular Activiti process engine instance. The main difference is that we’re registering tenants with the engine. Each tenant needs to be added with its unique tenant identifier and a DataSource implementation. The DataSource implementation of course needs to have its own connection pooling. This means you can effectively give certain tenants a different connection pool configuration depending on their use case. The Activiti engine will make sure each database schema has been either created or validated to be correct.
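The createDataSource method in the snippet above is a helper from the unit test, not part of the engine. A minimal sketch of what it could look like, here using the simple connection pool that ships with H2 (any pooling javax.sql.DataSource will do):

private DataSource createDataSource(String jdbcUrl, String username, String password) {
  // One pooled DataSource per tenant, so pool settings can be tuned per tenant
  JdbcConnectionPool dataSource = JdbcConnectionPool.create(jdbcUrl, username, password);
  dataSource.setMaxConnections(16);
  return dataSource;
}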

The magic to make this all work is the TenantAwareDataSource. This is a javax.sql.DataSource implementation that delegates to the correct data source depending on the current tenant identifier. The idea of this class was heavily influenced by Spring’s AbstractRoutingDataSource (standing on the shoulders of other open-source projects!).

The routing to the correct data source is done by getting the current tenant identifier from the TenantInfoHolder instance. As you can see in the code snippet above, this is also a mandatory argument when constructing a MultiSchemaMultiTenantProcessEngineConfiguration. The TenantInfoHolder is an interface you need to implement, depending on how users and tenants are managed in your environment. Typically you’d use a ThreadLocal to store the current user/tenant information (much like Spring Security does) that gets filled by some security filter. This class effectively acts as the ‘routing component’ in the picture below:

[Diagram: the TenantAwareDataSource routing to the tenant databases]
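To make that a bit more concrete, here’s a rough sketch of a ThreadLocal based implementation (the method names below are what I’d expect the TenantInfoHolder interface to look like; do check the actual interface in your Activiti version):

public class ThreadLocalTenantInfoHolder implements TenantInfoHolder {

  // tenantId -> users, filled from whatever user/tenant management you have
  private Map<String, List<String>> tenantToUsers = new HashMap<String, List<String>>();

  // The tenant for the current thread, typically set by a security filter
  private ThreadLocal<String> currentTenantId = new ThreadLocal<String>();

  public Collection<String> getAllTenants() {
    return tenantToUsers.keySet();
  }

  public void setCurrentTenantId(String tenantId) {
    currentTenantId.set(tenantId);
  }

  public String getCurrentTenantId() {
    return currentTenantId.get();
  }

  public void clearCurrentTenantId() {
    currentTenantId.remove();
  }
}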

In the unit test example, we indeed use a ThreadLocal to store the current tenant identifier, and fill it up with some demo data:

 private void setupTenantInfoHolder() {
    DummyTenantInfoHolder tenantInfoHolder = new DummyTenantInfoHolder();
    tenantInfoHolder.addUser("alfresco", "joram");
    tenantInfoHolder.addUser("alfresco", "tijs");
    tenantInfoHolder.addUser("alfresco", "paul");
    tenantInfoHolder.addUser("alfresco", "yvo");
    tenantInfoHolder.addUser("acme", "raphael");
    tenantInfoHolder.addUser("acme", "john");
    tenantInfoHolder.addUser("starkindustries", "tony");
    this.tenantInfoHolder = tenantInfoHolder;
  }

We now start some process instances, while also switching the current tenant identifier. In practice, you have to imagine that multiple threads come in with requests, and that they set the current tenant identifier based on the logged in user:


The startProcessInstances method above will set the current user and tenant identifier and start a few process instances, using the standard Activiti API as if there was no multi-tenancy at all (the completeTasks method similarly completes a few tasks).
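That snippet isn’t embedded here, but conceptually such a method boils down to something like this (a sketch; the process definition key and the holder methods come from the sketches above, the real unit test differs in the details):

private void startProcessInstances(String tenantId, int nrOfInstances) {
  // Switch the current thread to the given tenant (in a real application
  // a security filter would do this based on the logged in user)
  tenantInfoHolder.setCurrentTenantId(tenantId);
  try {
    for (int i = 0; i < nrOfInstances; i++) {
      // Plain Activiti API: the TenantAwareDataSource routes to the tenant's schema
      processEngine.getRuntimeService().startProcessInstanceByKey("oneTaskProcess");
    }
  } finally {
    tenantInfoHolder.clearCurrentTenantId();
  }
}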

Also pretty cool is that you can dynamically register (and delete) new tenants, by using the same method that was used when building the process engine. The Activiti engine will make sure the database schema is either created or validated.

config.registerTenant("dailyplanet", createDataSource("jdbc:h2:mem:activiti-mt-daily;DB_CLOSE_DELAY=1000", "sa", ""));

Here’s a movie showing the unit test being run and the data effectively being isolated:

Multi-Tenant Job Executor

The last piece of the puzzle is the job executor. Regular Activiti API calls ‘borrow’ the current thread to execute their operations and thus can use any user/tenant context that was set on the thread beforehand.

The job executor, however, runs using a background threadpool and has no such context. Since the AsyncExecutor in Activiti is an interface, it isn’t hard to implement a multi-schema multi-tenant job executor. Currently, we’ve added two implementations. The first implementation is called the SharedExecutorServiceAsyncExecutor:

config.setAsyncExecutor(new SharedExecutorServiceAsyncExecutor(tenantInfoHolder));

This implementation (as the name implies) uses one threadpool for all tenants. Each tenant does have its own job acquisition threads, but once a job is acquired, it is put on the shared threadpool. The benefit of this system is that the number of threads being used by Activiti is constrained.

The second implementation is called the ExecutorPerTenantAsyncExecutor:

config.setAsyncExecutor(new ExecutorPerTenantAsyncExecutor(tenantInfoHolder));

As the name implies, this class acts as a ‘proxy’ AsyncExecutor. For each tenant registered, a complete default AsyncExecutor is booted. Each with its own acquisition threads and execution threadpool. The ‘proxy’ simply delegates to the right AsyncExecutor instance. The benefit of this approach is that each tenant can have a fine-grained job executor configuration, tailored towards the needs of the tenant.


As always, all feedback is more than welcome. Do give the multi-schema multi-tenancy a go and let us know what you think and what could be improved for the future!

Pluggable persistence in Activiti 6

In the past years, we’ve often heard the request (both from the community and our customers) to swap the persistence logic of Activiti from a relational database to something else. When we announced Activiti 6, one of the promises we made was that we would make exactly this possible.

People that have dived into the code of the Activiti engine will know that this is a serious refactoring, as the persistence code is tightly coupled with the regular logic. Basically, in Activiti v5, there were:

  • Entity classes: these contain the data from the database. Typically one database row is one Entity instance
  • EntityManager: these classes group operations related to entities (find, delete,… methods)
  • DbSqlSession: low-level operations (CRUD) using MyBatis. Also contains command-duration caches and manages the flush of the data to the database.

The problems in version 5 were the following:

  • No interfaces. Everything is a class, so replacing logic gets really hard.
  • The low-level DbSqlSession was used everywhere across the code base.
  • A lot of the logic for entities was contained within the entity classes. For example, look at the TaskEntity complete method. You don’t have to be an Activiti expert to understand that this is not a nice design:
    • It fires an event
    • It involves users
    • It calls a method to delete the task
    • It continues the process instance by calling signal

Now don’t get me wrong. The v5 code has brought us very far and powers a lot of awesome stuff all over the world. But when it comes to swapping out the persistence layer … it isn’t something to be proud of.

And surely, we could hack our way into the version 5 code (for example by swapping out the DbSqlSession with something custom that responds to the methods/query names being used there), but it still wouldn’t be nice design-wise and would remain quite relational-database-like. And that doesn’t necessarily match the data store technology you might be using.

No, for version 6 we wanted to do it properly. And oh boy … we knew it was going to be a lot of work … but it was even more work than we could imagine (just look at the commits on the v6 branch for the last couple of weeks). But we made it … and the end result is just beautiful (I’m biased, true). So let’s look at the new architecture in v6 (forgive me my PowerPoint pictures; I’m a coder, not a designer!):

[Diagram: the v6 persistence architecture – Entity, EntityManager and DataManager interfaces with their implementations]

So, where in v5 there were no interfaces, there are interfaces everywhere in v6. The structure above is applied to all of the Entity types in the engine (currently around 25). So for example for the TaskEntity, there is a TaskEntityImpl, a TaskEntityManager, a TaskEntityManagerImpl, a TaskDataManager and a TaskDataManagerImpl class (and yes I know, they still need javadoc). The same applies to all entities.

Let me explain the diagram above:

  • EntityManager: this is the interface that all the other code talks to whenever it comes to anything around data. It is the only entrypoint when it comes to data for a specific entity type.
  • EntityManagerImpl: implementation of the EntityManager interface. The operations are often high level and do multiple things at the same time. For example, an Execution delete might also delete tasks, jobs, identityLinks, etc. and fire relevant events. Every EntityManager implementation has a DataManager. Whenever it needs data from the persistence store, it uses this DataManager instance to get or write the relevant data.
  • DataManager: this interface contains the ‘low level’ operations. It typically contains CRUD methods for the Entity type it manages, plus specific find methods for when data for a particular use case is needed.
  • DataManagerImpl: implementation of the DataManager interface. Contains the actual persistence code. In v6, this is the only class that now uses the DbSqlSession classes to communicate with the database using MyBatis. This is typically the class you will want to swap out.
  • Entity: interface for the data. Contains only getters and setters.
  • EntityImpl: implementation of the above interface. In Activiti v6, this is a regular POJO, but the interface allows you to switch to different technologies such as Neo4j with Spring Data, JPA, … (which use annotations). Without it, you would need to wrap/unwrap the entities if the default implementation didn’t work on your persistence technology.
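To make the layering a bit more tangible, here’s an illustrative, trimmed-down sketch of what the DataManager contract for tasks boils down to (the real interface has many more find methods and slightly different signatures):

public interface TaskDataManager {

  TaskEntity create();

  TaskEntity findById(String taskId);

  void insert(TaskEntity taskEntity);

  void delete(TaskEntity taskEntity);

  // Use-case specific find methods live here too
  List<TaskEntity> findTasksByExecutionId(String executionId);
}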


Moving all of the operations into interfaces gave us a clear overview of what methods were spread across the codebase. Did you know, for example, that there were at least five different methods to delete an Execution (named ‘delete’, ‘remove’, ‘destroy’, etc…)? They did almost the same thing, but with subtle differences. Or sometimes not subtle at all.

A lot of the work over the past weeks included consolidating all of this logic into one method. Now, in the current codebase, there is but one way to do something. Which is quite important for people that want to use different persistence technologies. Making them implement all varieties and subtleties would be madness.

In-memory implementation

To prove the pluggability of the persistence layer, I’ve made a small ‘in-memory’ prototype. Meaning that, instead of a relational database, we use plain old HashMaps to store our entities as {entityId, entity} pairs. The queries then become if-clauses.
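In other words, the heart of each in-memory data manager is not much more than this (a simplified sketch; the actual prototype classes differ in the details):

// Entities keyed by id; 'queries' are just iterations with if-clauses
protected Map<String, TaskEntity> entities = new HashMap<String, TaskEntity>();

public void insert(TaskEntity entity) {
  entities.put(entity.getId(), entity);
}

public TaskEntity findById(String entityId) {
  return entities.get(entityId);
}

public void delete(String entityId) {
  entities.remove(entityId);
}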

The code can be found on Github:

(People on the forum have sometimes asked how hard it would be to run Activiti purely in memory, for simple use cases that don’t mandate the use of a database. Well, now it’s not hard at all anymore! Who knows … this little prototype might become something if people like it!)

As expected, we swap out the DataManager implementations with our in-memory versions; see InMemoryProcessEngineConfiguration:

 protected void initDataManagers() {
   this.deploymentDataManager = new InMemoryDeploymentDataManager(this);
   this.resourceDataManager = new InMemoryResourceDataManager(this);
   this.processDefinitionDataManager = new InMemoryProcessDefinitionDataManager(this);
   this.jobDataManager = new InMemoryJobDataManager(this);
   this.executionDataManager = new InMemoryExecutionDataManager(this);
   this.historicProcessInstanceDataManager = new InMemoryHistoricProcessInstanceDataManager(this);
   this.historicActivityInstanceDataManager = new InMemoryHistoricActivityInstanceDataManager(this);
   this.taskDataManager = new InMemoryTaskDataManager(this);
   this.historicTaskInstanceDataManager = new InMemoryHistoricTaskInstanceDataManager(this);
   this.identityLinkDataManager = new InMemoryIdentityLinkDataManager(this);
   this.variableInstanceDataManager = new InMemoryVariableInstanceDataManager(this);
   this.eventSubscriptionDataManager = new InMemoryEventSubscriptionDataManager(this);
 }

Such DataManager implementations are quite simple. See for example the InMemoryTaskDataManager, which needs to implement the data retrieval/write methods for a TaskEntity:

public List<TaskEntity> findTasksByExecutionId(String executionId) {
  List<TaskEntity> results = new ArrayList<TaskEntity>();
  for (TaskEntity taskEntity : entities.values()) {
    if (taskEntity.getExecutionId() != null && taskEntity.getExecutionId().equals(executionId)) {
      results.add(taskEntity);
    }
  }
  return results;
}

To prove it works, let’s deploy, start a simple process instance, do a little task query and check some history. This code is exactly the same as the ‘regular’ Activiti usage.

public class Main {
 public static void main(String[] args) {
   InMemoryProcessEngineConfiguration config = new InMemoryProcessEngineConfiguration();
   ProcessEngine processEngine = config.buildProcessEngine();
   RepositoryService repositoryService = processEngine.getRepositoryService();
   RuntimeService runtimeService = processEngine.getRuntimeService();
   TaskService taskService = processEngine.getTaskService();
   HistoryService historyService = processEngine.getHistoryService();
   Deployment deployment = repositoryService.createDeployment().addClasspathResource("oneTaskProcess.bpmn20.xml").deploy();
   System.out.println("Process deployed! Deployment id is " + deployment.getId());
   ProcessInstance processInstance = runtimeService.startProcessInstanceByKey("oneTaskProcess");
   List<Task> tasks = taskService.createTaskQuery().processInstanceId(processInstance.getId()).list();
   System.out.println("Got " + tasks.size() + " tasks!");
   System.out.println("Number of process instances = " + historyService.createHistoricProcessInstanceQuery().count());
   System.out.println("Number of active process instances = " + historyService.createHistoricProcessInstanceQuery().finished().count());
   System.out.println("Number of finished process instances = " + historyService.createHistoricProcessInstanceQuery().unfinished().count());


Which, if you run it, gives you this (blazingly fast, as it’s all in memory!):

Process deployed! Deployment id is 27073df8-5d54-11e5-973b-a8206642f7c5

Got 1 tasks!

Number of process instances = 1

Number of active process instances = 1

Number of finished process instances = 0

In this prototype, I didn’t add transactional semantics. Meaning that if two users would complete the same user task at the very same time, the outcome would be indeterminate. It’s of course possible to have the in-memory transaction-like logic you’d expect from the Activiti API, but I didn’t implement that yet. Basically, you would need to keep all objects you touch in a little cache until flush/commit time and do some locking/synchronisation at that point (see the rough sketch below). And of course, I do accept pull requests 😉
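To sketch the direction such transaction-like logic could take (none of this is in the prototype, it’s purely to illustrate the idea):

// Collect touched entities per command and apply them atomically at 'commit' time
public class InMemoryTransactionContext {

  private final Object commitLock; // shared lock guarding the in-memory 'database'
  private Map<String, TaskEntity> touchedTasks = new HashMap<String, TaskEntity>();

  public InMemoryTransactionContext(Object commitLock) {
    this.commitLock = commitLock;
  }

  public void registerTouched(TaskEntity taskEntity) {
    touchedTasks.put(taskEntity.getId(), taskEntity);
  }

  // Called at the end of the Activiti command, comparable to a database commit.
  // A real implementation would also check optimistic locking revisions here.
  public void flush(Map<String, TaskEntity> taskStore) {
    synchronized (commitLock) {
      taskStore.putAll(touchedTasks);
    }
  }
}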

What’s next?

Well, that’s pretty much up to you. Try it out and let us know what you think!

We’re in close contact with one of our community members/customers who plans to try it out very soon. But we want to play with it ourselves too, of course, and we’re looking at what would be a cool first choice (I myself still have a special place in my heart for Neo4j … which would be a great fit as it’s transactional).

But the most important bit is: in Activiti v6 it is now possible to cleanly swap out the persistence layer. We’re very proud of how it looks now. And we hope you like it too!

The performance impact of scripting in processes

We often see people using scripting (for example in a service task, execution listener, etc.) for various purposes. Using scripts instead of Java logic often makes sense:

  • It does not need to be packaged into a jar and put on the classpath
  • It makes the process definition more understandable: no need to look into different files
  • The logic is part of the process definition, which means no hassle making sure the correct version of the logic is being used

However, it’s important to also keep in mind the performance aspect of using scripting within the process definition, and balance those requirements with the benefits above.

The two scripting languages we typically see being used with Activiti are Javascript and Groovy. Javascript comes bundled with the JDK (Rhino for JDK 6 and 7, Nashorn for JDK 8), which makes it easy to pick up. For Groovy, the Groovy scripting engine needs to be added to the classpath.

But let me tell you, I’m no fan of using Javascript as the scripting language choice, as there are subtle changes when moving between JDK versions (read more in previous posts of mine here and here, and those are just the ones that were documented …). So that means you could write your logic one day and it all happily works, and the day after a JDK upgrade it all fails. I’d rather spend my time on actually coding.

To verify the performance impact, I made a very small microbenchmark:

[Process diagram of the microbenchmark]

The script did something silly like the following (the point was to have a getVariable() and a setVariable() call in there, plus something extra like getting the current day):

var input = execution.getVariable('input');
var today = new Date().getDay();
execution.setVariable('result', input * today);

The same code in a Java service task:

public class MyDelegate implements JavaDelegate {

    public void execute(DelegateExecution execution) throws Exception {
        Integer input = (Integer) execution.getVariable("input");
        int today = Calendar.getInstance().get(Calendar.DAY_OF_MONTH);
        execution.setVariable("result", input * today);

and the Groovy counterpart:

def input = execution.getVariable('input');
int today = Calendar.getInstance().get(Calendar.DAY_OF_MONTH);
execution.setVariable('result', input * today);

I started that process 10,000 times and simply noted down the total execution time. I think the numbers speak for themselves:

  • JavaDelegate: 6255 ms
  • Groovy: 7248 ms
  • Javascript: 27314 ms
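For completeness, the timing loop behind these numbers is conceptually nothing more than this (a sketch; the process definition key and variable value are made up):

Map<String, Object> variables = new HashMap<String, Object>();
variables.put("input", 42);

long start = System.currentTimeMillis();
for (int i = 0; i < 10000; i++) {
  // The process definition contains the script task (or JavaDelegate) under test
  runtimeService.startProcessInstanceByKey("scriptBenchmarkProcess", variables);
}
System.out.println("Total execution time: " + (System.currentTimeMillis() - start) + " ms");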

The JDK version used was the latest version (1.8.0_60). The first time I ran the tests I was on 1.8.0_20, and the Javascript results were 25% higher (I read that performance improvements went into JDK 1.8.0_40). For Groovy I used version 2.4.4 (which you should be using, given that older versions have a security problem!).

Just to give a visual idea of the difference between the options:



Using Groovy as the scripting language seems to be a far better choice performance-wise than using Javascript. Do take into account that this is a microbenchmark for one very simple use case. But given our past troubles with JDK upgrades breaking Javascript scripts, and this result, it’s very hard to make a case for selecting Javascript by default.


UPDATE 11 SEPT ’15: Quite a few people have asked me why the difference is of that magnitude. My assumption is that it’s because the Javascript engine in the JDK is not thread-safe and thus cannot be reused nor cached, leading to a costly bootup of the ScriptEngine every time. If you take a look at the ScriptEngineFactory javadoc, you can read that there is a special parameter, THREADING, which we use in Activiti to determine if the scripting engine can be cached. Nashorn (and Rhino) return null here, meaning the engine can’t be used to execute scripts on multiple threads, i.e. each thread needs its own instance. I can only assume that the ScriptEngineManager in the JDK does something similar.
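You can easily check this yourself with the standard javax.script API:

ScriptEngine engine = new ScriptEngineManager().getEngineByName("JavaScript");
// Nashorn (and Rhino) return null here, meaning the engine is not safe to share between threads
System.out.println(engine.getFactory().getParameter("THREADING"));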


Activiti 6: An Evolution of the Core Engine

This article describes the changes with regard to the Activiti core engine and how version 6 differs from version 5. All the material here, including the code examples, was presented back in June in Paris, but I believe it’s good to also have it written down in blog form for people that don’t like watching Youtube movies (although I think it’s pretty interesting 😉 )

If you don’t like the theoretical background on the Activiti 6 design, do scroll down to see some nice Activiti 5 vs Activiti 6 examples!

The Activiti engine is now five years old. In those five years, we’ve seen Activiti grow enormously, and today it’s used all across the world in all kinds of companies and use cases. Five years is a very long time in IT, but the architectural choices we’ve made with Activiti proved to be the right ones. In the past five years, we’ve also learned a ton about how Activiti is used (and maybe (ab)(mis)used).

We’ve identified four key parts where we believe the engine could be improved. Rather than hack all that stuff in piece by piece, we decided to do it all at once and get it all nice and clean in the code. That is the Activiti 6 engine. As such, version 6 is not a rewrite or new engine; as a matter of fact it’s fully compatible with version 5. Activiti version 6 is an evolution of the core engine, taking into account all we have learned in the past five years and what we (the devs, the community and customers) believe is needed to cope with the use cases of the next five years, in the cleanest and simplest way possible.


Anyway, enough fluff, let’s dive into those four problem areas.

Problem 1: Model Transformation

In Activiti version 5, a process definition was parsed internally to another model (called the Process Virtual Machine model). People that have gone deep into the engine will have seen classes such as ActivityImpl, ExecutionImpl, ProcessDefinitionImpl. That model is then interpreted and executed by ‘atomic operations’ in the core engine.

The problem with that approach is that the model did not match one-to-one with the BPMN model, or even with some concepts described in the BPMN spec. And, as with any translation of models between software layers, what happens is that model details leak between layers. For example, look at this class: the low-level model leaks into the behavior class. You could argue that’s ok, but it does mean the developer of that behavior class needs to keep a mental model in their head that does not match what they learned from the BPMN spec.

The same code in version 6 looks like this (it could be written a lot shorter, but this is for clarity): here, we reason about sequence flow on the current BPMN element. It turns out that writing such behaviors (as we do for the core constructs of BPMN we support) is a lot easier when it maps one-to-one with the BPMN spec. The code becomes a lot more readable and understandable, and you can reason about very complex structures with much more ease.

Problem 2: Execution Tree

One of the core goals when starting with version 5 was to get the number of inserts to the database as low as possible. Which we did: we (over) optimized and reused execution entities (the representation of a process instance that is executed) as much as possible.

The consequence of that mantra is that in many places you can now find if-else-constructions that are very specifically optimising a certain use case or pattern. The problem is that in version 5, it’s very hard to look at the execution tree in the database and know exactly how that maps to the BPMN structure. Which is crucial if you want to do dynamic process definitions and/or instances without hacking.

In version 6, we opted for clear and simple guidelines for the structure of the execution tree, making it simple to look at an execution tree and know exactly where and how the process instance is being executed.

Problem 3: Competing Stacks of Operations

Every API call in Activiti boils down to the manipulation of the runtime execution structure by applying changes to the runtime model and feeding that into the core engine where it gets interpreted by the ‘atomic operations’.

That’s a good approach, except in the very early days the stack of these operations was not designed to be central. Rather, there could be multiple stacks of such operations existing at the same time. To make a long story short, this means that certain patterns can’t be executed on the v5 engine.

In version 6, we implemented a central agenda that is used to plan all the core operations. This fixes those patterns and at the same time makes process execution also way easier to follow and debug.

Problem 4: Persistence Code

Code grows organically. Five years ago, we tried to keep the persistence logic nicely contained … but over the last five years much of the persistence code has gone everywhere. This makes it impossible to swap out the persistence layer with a custom implementation.

Of course, you could hack your way around all that (people on the forum have done it :-) ), but it’s really dirty and not maintainable in the long run.

In version 6, we cleanly separated the persistence code (and the entity logic) into different, pluggable interfaces. Actually, that work has just recently been completed. Expect a blog post about it soon.

A Few Examples

These are just a few examples of what was not possible on the Activiti 5 engine, but does work on v6. Again, we probably could get it all working with a lot of hacking on the v5 engine, but it would not have been the nice and clean solution that we now have in the v6 engine.

Example 1: ‘Tricky Inclusive Merge’


This was an example I found on the blog of our partner BP3: assume that tasks A and B have been completed. At that point C completes and the exclusive gateway conditions make it go to E. Then E is completed. At that point D should become active and the process instance can finish.

In version 5, what happens is that the process instance never completes, as D is never reached. The reason for that is that the executions that are dormant after reaching the inclusive merging gateway will never be activated again.

In version 6, we fixed this (and similar patterns) by introducing a kind of ‘background ActivityBehavior’, making it possible for certain constructs (such as the inclusive gateway) to have background behavior, in this case verifying if the conditions to execute the merge are fulfilled.

Example 2: ‘Loops and Tail Recursion’

In this example, the service task is visited 1000 times before the end event is reached.

In Activiti 5, this leads to a StackOverflowException (and many people on the forum have reported it!).

In Activiti 6, this runs as expected. The reason for this is the centralized agenda for operations, implemented using tail recursion and a run loop.


Example 3: ‘Concurrency after a Boundary Event’


Don’t let the size of the example scare you; the embedded subprocesses here actually don’t matter (they’re just there to impress 😉 ).

The real point here is the timer boundary event, which has two parallel outgoing sequence flows. In Activiti 5, such a pattern was not possible (you had to use a parallel gateway). In Activiti 6, this worked without any extra effort, as the direct BPMN model interpretation does not leave any other way of executing it.

I want to try it out for myself!

Good news for you! We released a first beta of Activiti 6 last week!

If you’re interested in running the new UI for Activiti 6, Tijs has written an excellent post about it here.

In the next weeks, we’ll look into many other facets of Activiti 6. So keep an eye on our blogs and Twitter!

The Activiti Performance Showdown Running on Amazon Aurora


Earlier this week, Amazon announced that Amazon Aurora is generally available on Amazon RDS. The Aurora website promises a lot:

Amazon Aurora is a MySQL-compatible, relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Amazon Aurora provides up to five times better performance than MySQL at a price point one tenth that of a commercial database while delivering similar performance and availability.

Here at Alfresco, my colleagues have been playing around with Aurora too – actually Alfresco is mentioned as a partner on the announcement article above, so ‘playing’ is probably not the right word ;-). But anyway, the noise I’ve heard coming from that team has been quite positive. Since we’ve just published the results of running our performance benchmark on (amongst others) Oracle on Amazon RDS, trying out the benchmark on Aurora was a no-brainer.

I’ve added the benchmark results to the sheet of the previous performance blog post. The sheet compares the numbers for Oracle on RDS vs Aurora on RDS.

We selected the same hardware as for the benchmark on Oracle RDS. Remember from my previous post that I couldn’t get MySQL to perform as expected on RDS (hence the switch to Oracle on RDS, which surely is pricier!). Luckily, Aurora doesn’t seem to suffer the same fate as its MySQL cousin :-)

So here’s what we saw: when using just a single thread, the numbers weren’t that clear. Half of the use cases performed better on Oracle, the other half better on Aurora. On average, Aurora was 16.96% faster than Oracle for the same benchmark. However, our test with the most data (service task vars 02), which is hard to see on the chart due to the scale, performed 50% better.



When we add more threads to the system to execute processes, we see a clear pattern arise. The more threads we added, the better the results for Aurora became. With ten threads (the most we tested), Aurora is faster in all but two cases (which does make me want to check up on those tests, because they are measured in a different way), and in general 25% faster.


Lastly, we tested the random execution (2500 random process instance executions), which comes close to a real system with real processes running on it. The graph clearly shows that Aurora beats Oracle for this test. On average, we saw a 20% better performance of Aurora:



Although we didn’t get the five times performance improvement from the marketing announcement (EDIT 31st July 2015: as pointed out in the comments below, the ‘up to five times’ is compared to MySQL and NOT to commercial databases, which matches what we saw), I was genuinely surprised that, with the same hardware and setup, Aurora is 20% faster on average, with outliers up to 81%. Also note that our benchmark is really write-intensive, while Aurora makes setting up read replicas a breeze through the AWS console. So for other benchmarks, with a bit more reads, Aurora can probably shine even more.

Also cost-wise, Aurora is interesting. I checked the bill: Oracle was about $1.96/hour while the Aurora machine I was using is advertised at $1.28/hour (I couldn’t verify, as it shows as $0 on the bill currently … maybe a promotion?). So more performance for less money … who doesn’t like that!

When it comes to Activiti, the conclusion is still the same: Activiti is highly performant and scalable. A faster database just makes that even better :-)

The Activiti Performance Showdown 2015


It’s been three years since we published the ‘The Activiti Performance Showdown‘, in which the performance of the Activiti engine (version 5.9 and 5.10) was benchmarked. Looking at my site analytics, it’s high in the top 10 of most read articles: up until today, 10K unique visits with an average time of 7 minutes (which is extraordinary in this day and age!).

After three years it is time (or perhaps already long overdue) to update the benchmark results using modern hardware and the latest Activiti engine. We’ve obviously been enhancing and improving the engine with every release, but in the latest release something called bulk insert was added to the core engine.

We worked together with our friends at Google on implementing this feature (they actually did the bulk of the work, no pun intended), so a big shout-out and thanks to Abhishek Mittal, Imran Naqvi, Logan Chadderdon, Bo-Kyung Choi and colleagues for our past discussions and code! As you can imagine, the scale at which Google uses Activiti makes for interesting topics and challenges :-). In fact, some major changes in the past (for example the new Async job executor) and very interesting items on the future roadmap were driven by discussions with them.

Edit (July 30th 2015): ran the benchmark on Amazon Aurora. Results here:

The Activiti Benchmark project

The code used to gather the numbers is the same as three years ago and is on Github:

To run the benchmark, simply follow the instructions on the Github page. Basically, what the test does:

  • There is a set of test process definitions
  • The benchmark can be run with no history or with ‘audit’ history
  • Each process definition is started x times using a thread pool from 1 to 10 threads
  • All process definitions are put into a list, of which x random selections are chosen to start a process instance
  • The results are nicely gathered in a basic HTML page

Test environment

We let the benchmark project mentioned above loose on the following setups:

  1. Benchmark logic + local MySQL 5.6.20 DB on my dev laptop (Macbook Pro, i7 CPU, 16GB ram, SSD disk) with history disabled
  2. Benchmark logic + local MySQL 5.6.20 DB on the same dev laptop with history on audit
  3. The benchmark logic on a separate Amazon EC2 server (type c4.4xlarge, 16 vCPU, 30GB ram, provisioned IOPS SSD) and an Oracle database on an Amazon RDS server (type m3.2xlarge: 8 vCPU, 30GB ram, provisioned IOPS SSD), both in the same zone (Ireland). The reason I selected Oracle instead of MySQL on RDS is that I couldn’t get MySQL to perform decently on RDS for odd reasons, so in the end I gave up. We probably have more users running on Oracle than MySQL anyway.

Setups 1 and 2 are interesting because they show the raw overhead of the engine, without any network delay (as it’s on the same machine). Of course the numbers will vary with different databases, but the main point of this benchmark is to demonstrate how light Activiti is.

Setup 3 is in my eyes probably even more interesting (if that’s even possible), as it runs on ‘real’ servers with real network delay. In our experience, AWS servers aren’t as performant as real physical servers, so that’s only good news when you look at the numbers and are already impressed ;-).

Furthermore, all tests were run in Spring mode (the engine used in a Spring setup) and with a BoneCP connection pool (I used BoneCP, although I know it has been superseded by HikariCP, but I wanted to have the same setup as three years ago).

The numbers

When looking at the results, they are clearly faster than the 2012 numbers for the same processes. Obviously the engine has improved, and hardware vendors didn’t sit still either. Generally, I noticed a 2x – 4x improvement in the throughput results.

The tests were run on both Activiti 5.17 and 5.18. So we learn two things:

  • How performant the Activiti engine is (i.e. how much overhead it adds)
  • How much influence the bulk insert change mentioned above has

I bundled all benchmark results in the following Google spreadsheet:

If you prefer Excel:

It has various sheets which you can select below: a sheet for the different test environments and then sheets for each of the tested processes in detail (with pretty graphs!). Each of the sheets contains analysis text at the top. Anyway, let’s look at the different sections in a bit of detail.

Note that the numbers here only show the throughput/second (aka process instances started and completed per second). Way more numbers (including the certainly interesting average timings of process instance execution) can be found in the following links:

Results: introduction

I’ve split the results up into two types: the first sections will cover the difference between 5.17 and 5.18 and will highlight the difference the bulk insert makes. In the next sections, we’ll look at the numbers for the process definitions in detail for 5.18.

Activiti 5.17 vs 5.18 – local Mysql, no history

This maps to the first sheet in the spreadsheet linked above.

When looking at the numbers, we learn that Activiti adds very little overhead when executing processes. There are crazy numbers in there like 6000+ process instances / second (for a very simple process definition, of course). Keep in mind that this is with a local MySQL. But still, this proves that the overhead of the Activiti engine is very minimal.

The sheet does show a bit of red (meaning 5.18 has a lower throughput) here and there. This is logical: there is a minimum of data being produced (no history) and the average timings are sometimes really low (a few milliseconds in some cases!). This means that doing a bit of extra housekeeping in a hashmap (which we do for bulk insert) can already impact the numbers. The ‘red’ numbers are in general within the 10% range, which is ok.

In my view, the ‘random’ execution (2500 randomly chosen process definitions using an equal distribution, throughput/second) is the most interesting one, as in a typical system multiple process instances of different process definitions will be executed at the same time. The graph below shows that 5.17 and 5.18 are quite close to each other, which was to be expected.

[Chart: random execution throughput, 5.17 vs 5.18 – local MySQL, no history]


There are, however, two outliers in the data here: the service task vars 1 and 2 process definitions. These process definitions are pretty simple: they have 7 steps (version 2 has a user task) and one of them is a service task that generates 4 and 50 process variables respectively. For these two cases, with more data (each process variable is an insert), we can see an increase from 20% up to 50%! So it seems the bulk insert starts to come into play here. That makes us hopeful for the next tests with historic data!

Activiti 5.17 vs 5.18 – local Mysql, history enabled (audit)

This maps to the second sheet in the spreadsheet linked above.

In the benchmark we did three years ago, we already learned history has a huge impact on performance. Of course, most people want to run with history enabled, as this is one of the main benefits of using a workflow engine like Activiti.

As we suspected from the previous setup, enabling history makes the bulk insert shine. In general we see nice improvements, in the 5%-40% range. Tests with less data (e.g. the user task tests, which do a lot of task service polling) show a bit of red (meaning lower performance), but this is probably due to the same reasons as mentioned above and is well within the acceptable (<10%) range. So this proves that the bulk insert, when a ‘normal’ amount of data is being generated, does have a very good impact on performance. Also, in general we again see that the overhead of the Activiti engine is very minimal, with very nice throughput/second numbers (think about it: doing hundreds of process instance executions per second is a lot when you translate this to process instances/day!).

The random execution graph (same as above, 2500 random executions, throughput/second), now shows that 5.18 keeps its distance from 5.17. For all threadpool settings, 5.18 can do more process instance executions in the same time period.

[Chart: random execution throughput, 5.17 vs 5.18 – local MySQL, history on audit]


Activiti 5.17 vs 5.18 – Oracle RDS, history enabled (audit)


This maps to the third sheet in the spreadsheet linked above.

Now here it gets different from the tests we did three years ago. We have two servers, both in the cloud (Amazon), both with real network delays. One thing to notice here is that the throughput/second is lower than in the previous setup (about 1/4 of the throughput). Again, the real network delays of the Amazon infrastructure are in play here.

The reasoning behind the bulk insert idea was that, when switching to a data center with a higher network delay, it would be beneficial if we could squeeze more work into each roundtrip. And oh boy, that surely is the case. If you look at the numbers in the sheet, they are in general almost all green or < 5% difference. But more importantly, we see improvements up to 80% for some cases (not surprisingly, those cases with a lot of data being produced)!

The random execution graph (same setup as above, 2500 random executions, throughput/second) shows the following:

[Chart: random execution throughput, 5.17 vs 5.18 – Oracle RDS, history on audit]


On average, there is a 15% gain for this particular test (which is awesome). Also interesting to see here is the shape of the graph: a straight line, diverging from the 5.17 line the more threads we add. But if you scroll up, you can see that the shape is very different when running on my laptop. In fact, you could say we reached a tipping point around 8 threads for the previous two graphs, while here there is no sign of tipping yet. This means we could probably load the Oracle RDS database way more than we were doing. And that my laptop is not a server machine, which, yeah, I already knew ;-).

Process definitions

For each of the process definitions, there is a specific sheet in the spreadsheet linked above.

process01 is the simplest process you can imagine:

[Process diagram: process01]

process02 is a bit longer, all still passthrough activities, but this obviously generates more history (for example one entry for each passthrough activity):

[Process diagram: process02]

process03 tests the parallel gateway (fork and join), as it’s a more expensive activity (there is some locking going on):

[Process diagram: process03]

process04 goes a bit further with the parallel gateways, and tests the impact of using multiple forks/joins at once:

[Process diagram: process04]

process05 is about using a service task (with a Java class) together with an exclusive gateway:

[Process diagram: process05]


multi-instance is a process definition that tests the embedded subprocess construct, with a parallel gateway in the middle. Knowing a bit about the internals of the Activiti engine, this process most likely will be the slowest:

[Process diagram: multi-instance]

usertask01 to 03 are quite similar: 01 has just one user task, 02 has seven sequential ones and 03 has two user tasks, with a parallel gateway as fork/join.

[Process diagrams: usertask01/02/03]


And lastly, the servicetask-vars01/02 process definitions have the same structure as usertask02, but with a service task that generates process variables (four process vars for the 01 version, fifty process variables for the 02 version). The 02 version also has a user task.

[Process diagram: servicetask-vars01/02]

Detailed numbers for process definitions

The first batch of process definitions we’ll look at is process01/02/03/04/05. These are all process definitions that complete after starting a process instance, with no wait state. Which makes them prime candidates for testing the raw overhead the Activiti engine adds.

Here are the throughput and average timing numbers (click to enlarge):



And the similar numbers for the Oracle RDS setup (click to enlarge):


These numbers show two things clearly:

  • The average timing goes up due to context switches when adding more worker threads, but in the same period of time more can be done in general. This shows that Activiti scales nicely horizontally when adding more threads (and thus, more nodes in a cluster).
  • The raw overhead (i.e. what the Activiti engine adds as logic on top of the database queries) is really low. If you look at the throughput charts, take into account that this is process instances/second! On the local MySQL we’re talking about thousands per second, and on Oracle hundreds per second!

The second batch of process definitions are those with a user task (I’ve made a crude distinction here, as looking in more detail would add way too much information). Now, one big difference with the previous batch is the way the time is measured. For the previous batch it was easy, as the process instance would complete after start. For these process definitions, however, the process instance reaches a wait state. The numbers below take into account both the starting of the process instance and the task completions. So for usertask02, for example, with seven user tasks, you have to remember that the results are about 8 database transactions (one for the process instance start, one for each of the seven user tasks). Which explains the difference in magnitude of the numbers versus the previous batch.

The first graphs show the results for 5.18 on a local Mysql (click to enlarge):


Similar, but now for Oracle (click to enlarge):


A similar analysis as with the previous batch can be made: when adding more threads, the average time goes up, but in the same time period a lot more work can be done. Still, taking into account we’re talking here about multiple database transactions for each process instance run, the numbers are really good.

Now, this was only a summary of the number analysis we did. The spreadsheet contains way more data and graphs (but we have to limit this very long blog a bit…). 

For example, the spreadsheet shows the analysis for each process definition in detail. Take the throughput graph of process definition process02 (seven manual tasks). Above is the local MySQL, below the Oracle RDS:

[Charts: process02 throughput – local MySQL (top) and Oracle RDS (bottom)]

First of all, the difference in throughput between 5.17 and 5.18 is very noticeable here (since there is quite a bit of historic data). But the numbers are also really impressive, in both setups, going well into the thousands for the local db and the hundreds for Oracle RDS.

Another example is the servicetask-vars02 process definition (seven steps, one of which is a service task that generates fifty variables, and one a user task to make sure all variables are written to the database – without the wait state they could be optimised away otherwise). Again, above is the local MySQL, below Oracle RDS:

[Charts: servicetask-vars02 throughput – local MySQL (top) and Oracle RDS (bottom)]

The difference in the lines is very interesting here. It shows that Oracle has no issue with the additional concurrent load. It also nicely shows the difference between 5.17 and 5.18 again. Do take into account that in this process fifty process variables get written. While the numbers are a magnitude lower than in the previous example, taking into account what is happening makes them look very good in my opinion.



The conclusion we made three years ago is still valid today. Obviously the results are better due to better hardware, but I hope the graphs above prove it’s also about continuously improving the engine based on real-life deployments. Again, if you missed it in the introduction above, the bulk insert was conceived as a solution for a problem Google found when running Activiti at scale. The community is really what drives Activiti forward by challenging it in every way possible!

In general we can say:

  • The Activiti engine is fast and overhead is minimal
  • Activiti is written to take concurrent setups into account, and scales nicely when adding more worker threads. Activiti is well designed to be used in high-throughput and high-availability (clustered) architectures.

Like I said three years ago: the numbers are what they are: just numbers. Results will vary with use case, hardware and setup. The main point I want to conclude with here is that the Activiti engine is extremely lightweight. The overhead of using Activiti for automating your business processes is small. In general, if you need to automate your business processes or workflows, you want top-notch integration with any Java system, and you want all of that fast and scalable … look no further!

All Activiti Community Day 2015 Online

As promised in my previous post, here are the recordings of the talks done by our awesome Community people.

The order below is the order in which they were planned in the agenda (no favouritism!). Sadly, the camera battery died during the recording of the talks before lunch. As such, the talks of Yannick Spillemaeckers (University of Ghent) and Elmar Weber (Cupenya) were not usable :-(. Their slides can be found online though:

Here are the recordings of the talks. The slides can be found here.

Business Process Automation at CERN with Activiti, by João Silva

Process testing, debugging and prediction, by Martin Grofčík

Index Activiti Data on Elasticsearch, by Silvio Neta and Mike Dias

Activiti 6 Launch Recordings online

The Activiti Community Day in Paris last week was a blast. The venue, the talks, the attendees … all were top notch.

For the people that weren’t able to attend this time (I’m sure you had good excuses … right?), don’t worry. We’ve recorded most of the sessions (and the slides have already been online for a couple of days). The first batch of recordings, those about Activiti 6, are ready and on Youtube.

The rest of the sessions will follow in the next couple of days.

Launching Activiti 6 slides and recording:

This presentation goes into what Activiti v6 is, what the changes are and why we slapped a 6 sticker on it.

Activiti 6 UI slides and recording:

V6 will be shipped with a new UI. This presentation shows it live in action, running against the v6 engine.

Activiti 6 Q&A

The questions after our presentations. You can clearly see how much more relaxed we are after the two preceding presentations went so well :-).