See Google Scholar for a more up-to-date version.
I am currently doing a PhD at Utrecht University on the interplay between processes and data. To explain what this means, consider the following example. Suppose that you want to buy two products, a pair of pants and a t-shirt, online at your favourite store. Imagine that these products are identified by p1 for the pants and p2 for the t-shirt.
To buy these products, you as customer have to perform certain actions:
1. Visit the site of your store.
2. Search for p1 (or use the site's navigation to go directly to the product).
3. On the product page of p1, click a button to add it to the cart.
4. Repeat steps 2 and 3 for product p2.
5. Go to your cart and check out.
This is a process. As step 2 illustrates, there may be multiple paths to an end goal. Simple, right? Now imagine you are the store, and you have a million customers. Each customer has their own way (their own path) of reaching the checkout step. As the store, you need to make sure that the right products (= data) go into the carts of the right customers (= data), producing the right orders (= data), so that your business process executes correctly.
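To make the link between the process steps and the data objects concrete, here is a minimal sketch in Python. All names (`Store`, `Customer`, `checkout`, etc.) are hypothetical, chosen only to mirror the steps above; a real web shop would of course be far more involved.

```python
# A toy model of the shopping process: a customer searches products,
# adds them to a cart, and checks out, producing an order that links
# the customer and the products -- the "data objects" of the process.

class Store:
    def __init__(self, catalogue):
        self.catalogue = catalogue  # maps product id -> description

    def search(self, product_id):
        # Step 2: locate the product (one of several possible paths).
        return self.catalogue[product_id]

class Customer:
    def __init__(self, name):
        self.name = name
        self.cart = []

    def add_to_cart(self, product_id):
        # Step 3: put the product in the cart.
        self.cart.append(product_id)

    def checkout(self):
        # Step 5: turn the cart into an order, a new data object
        # relating this customer to the bought products.
        order = {"customer": self.name, "products": list(self.cart)}
        self.cart.clear()
        return order

store = Store({"p1": "pants", "p2": "t-shirt"})
alice = Customer("alice")
for pid in ["p1", "p2"]:      # Step 4: repeat steps 2 and 3 per product
    store.search(pid)
    alice.add_to_cart(pid)
order = alice.checkout()
print(order)  # {'customer': 'alice', 'products': ['p1', 'p2']}
```

With a million customers, a million such paths run concurrently, and each must produce a correct order; that is exactly where keeping processes and data consistent becomes hard.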
Usually you are not only interested in a correct execution; you also want to gain insight into your processes. Questions like the following are not uncommon:
- How many different paths are there? How long do they take? Why?
- Which products are bought? Are there patterns we can discover?
- Can we improve the process? How?
As hinted at by the parentheses, the products, customers, and orders in this scenario are "data objects". What I study during my PhD is how we can work with these data objects from a process-aware perspective. In particular, in the coming years I hope to find answers to, among others, the following questions:
- How do we model multi-process systems?
- How do we design multi-process systems?
- How do we analyze multi-process systems?
- How do we build multi-process systems?