Composable vs Monolithic Architectures

I have a question about this article.
https://support.tulip.co/docs/composable-monolithic-architectures

Let’s say I work in a sheet metal company, which produces four products.

  1. Large Rectangle shape (40cm x 50cm)
  2. Small Rectangle shape (10cm x 20cm)
  3. Large Disk shape (diameter 30cm)
  4. Small Disk shape (diameter 15cm)

The Quality Assurance department checks every product before shipping. They measure the dimensions of the product and record them.

Before introducing Tulip to our company, we recorded the results on paper like this.

Product ID   Date         Type              Result
0001         2024-01-01   Large Disk        D = 30.2 cm
0002         2024-01-02   Large Rectangle   40.2 cm x 49.9 cm
0003         2024-01-03   Small Disk        D = 14.8 cm

Now that we have subscribed to Tulip, I want to build an App for the Quality Assurance department.

I can think of three options.

  1. Create four Apps, one for each product, each having a single step to enter the results.
    Each check item is written inside the App.
  2. Create one App, having an entrance step and four other steps to enter the results.
    Each check item is written inside its own step.
  3. Create one App with one entrance step and a single result input step.
    The check items live in a Tulip Table. The result input step reads each check item from the table using a Table Query + Aggregation and prompts the user to input the data according to the product type (see the sketch further below).

It is easy to create the App the 1st and 2nd way, but we would have to build more and more Apps/steps if we introduce new types of products to the market. We would also have to revise all the Apps/steps if we want to add a new check item (e.g. the thickness of the sheet metal).

It is harder to create the App the 3rd way, but we would not have to revise the App/steps when we introduce new products or add a new check item (although we would have to revise the check item table).
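To make option #3 concrete, here is a minimal sketch in plain Python (not Tulip app logic; the table fields, sample rows, and function names are hypothetical stand-ins for a Tulip Table and Table Query):

```python
# Hypothetical check-item master table (in Tulip this would be a Table;
# the field names and rows are invented for illustration).
CHECK_ITEMS = [
    {"product_type": "Large Rectangle", "check": "Width",    "unit": "cm"},
    {"product_type": "Large Rectangle", "check": "Height",   "unit": "cm"},
    {"product_type": "Small Rectangle", "check": "Width",    "unit": "cm"},
    {"product_type": "Small Rectangle", "check": "Height",   "unit": "cm"},
    {"product_type": "Large Disk",      "check": "Diameter", "unit": "cm"},
    {"product_type": "Small Disk",      "check": "Diameter", "unit": "cm"},
    # Adding a new product or a new check item (e.g. "Thickness") only means
    # adding rows here; the generic step logic below never changes.
]

def run_result_input_step(product_type: str) -> dict:
    """Generic result-input step: filter the table by product type
    (stand-in for a Table Query) and prompt for each matching check item."""
    results = {}
    for item in (i for i in CHECK_ITEMS if i["product_type"] == product_type):
        value = float(input(f'{item["check"]} ({item["unit"]}): '))
        results[item["check"]] = value
    return results

if __name__ == "__main__":
    print(run_result_input_step("Large Disk"))  # prompts only for Diameter
```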

I guess you would call the 1st and 2nd options “Composable” and the 3rd option “Monolithic” in the article. Is my understanding correct?

Hello Ta-Aoki,

All three options are viable designs for the use case you are describing. What really matters is how you evolve the design and what the true operational and physical constraints of the process are. We recommend a “digital twin” approach that represents the actual process being executed in the physical environment as closely as possible. That means the app solution should never be more complex or abstract than the actual physical process.

  • Let’s start with “evolving the design”. This means you start simple: for one product, create an app that represents the process, see how it works, capture the data, and get feedback from operators. We call this bottom-up development. Then you do the next few products in the same way and start seeing where there are in fact commonalities and what makes sense to consolidate or re-use. We call that parametrization (see the sketch after this list). The next step may be your option #2, where you start consolidating and abstracting the data and results. Option #3 is extreme parametrization and should typically be arrived at only by a mature, digitally transformed organization, since the app will be harder for a citizen developer to enhance and extend, and the data structures will be more abstract. In other words, option #3 is less “Composable”.

  • Regarding the “operational and physical constraints”, there are some important parameters to consider when doing this type of parametrization: the complexity of the data structure (in other words, which attributes are measured, how many dimensions, how often the data changes, etc.), as well as how many different products or product families there are and how often new products are added or old ones removed. In your example, if there are only 4 products, then option #1 is preferred; if there are thousands of products with new products introduced very often, then option #3 can be considered.
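As a loose illustration of this bottom-up path (again plain Python rather than Tulip logic; the product names, nominal dimensions, and tolerances are invented), you might begin with one hardcoded check per product and only later consolidate the commonalities into a parameterized lookup:

```python
# Bottom-up start: one simple hardcoded check per product (option #1 style).
def check_large_disk(diameter_cm: float) -> bool:
    return 29.5 <= diameter_cm <= 30.5        # invented tolerance

def check_small_disk(diameter_cm: float) -> bool:
    return 14.5 <= diameter_cm <= 15.5        # invented tolerance

# After a few products, the commonality (nominal value +/- tolerance) becomes
# obvious, so it is consolidated into a parameterized spec (option #2/#3 style).
SPECS = {
    "Large Disk":      {"Diameter": (30.0, 0.5)},   # (nominal, tolerance), invented
    "Small Disk":      {"Diameter": (15.0, 0.5)},
    "Large Rectangle": {"Width": (40.0, 0.5), "Height": (50.0, 0.5)},
}

def check_product(product_type: str, measured: dict) -> bool:
    """One parameterized check replacing the per-product functions above."""
    spec = SPECS[product_type]
    return all(abs(measured[name] - nominal) <= tol
               for name, (nominal, tol) in spec.items())

print(check_product("Large Disk", {"Diameter": 30.2}))   # True
```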

Remember that none of these options are mutually exclusive, and you can use all three design options together, even within the same app for different attributes or data. Also remember that the more master-data driven your apps are (we call these monolithic apps), the harder they are to change and extend. Monolithic apps have a number of shortcomings, such as making it hard to use some of the native human-centric features of the platform. Lastly, remember that editing an app to change a tolerance is in fact just as simple as editing a tolerance in a data table. However, when you edit an app you get the added benefits of an audit trail, controlled access, and controlled release of the app into production.

I hope this helps.
Thx

  • Gilad

Dear @giadl,

Thank you very much for the detailed explanation!
I basically agree with you, and I think I now have a better understanding of the Composable vs Monolithic idea.

I still want to know clearly whether option #3 is the “Monolithic” approach that the article strongly advises against.
You used the phrase “less Composable” to describe option #3. Would you go as far as to say option #3 is “Monolithic”?

(I am not against the article or the Composable approach. I just want to know whether I am understanding the article correctly.)

It’s not that we recommend against Monolithic solutions; in certain circumstances and use cases they have a place and can bring value. However, we recommend arriving at these solutions through the bottom-up, iterative approach, where the right level and scope of parametrization is arrived at generatively. What we recommend against is starting to build a monolith at the outset with a top-down approach.
