Efficiently configuring Review by Exception

Hello, this may be a long one

I’m working on finalizing an architecture for Review by Exception at the Life Sciences manufacturer I work for. I want to make sure we have an efficient, maintainable scaffold for preparing new devices and supporting existing ones. Currently I’ve structured this with a few linked artifact tables, plus documented standard design patterns for other users in our Account.

To plan manufacturing activity, I’ve set up these tables:

  • Item Master
  • Routing Master
  • Data Master
  • Routing Stages

These are linked together, so each Item has multiple routings, and each routing has a set of expected data and expected stages.

Then to track manufacturing activity, I’ve set up these tables:

  • Orders
  • Assemblies
  • Metrics
  • Activity
  • Exceptions (for defects, record corrections, etc.)

These are linked together, so each order has assemblies, and each assembly has a set of metrics, activity, and exceptions.
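To make the relationships concrete, here is a minimal sketch of the structure in Python dataclasses. All field names here are illustrative stand-ins, not actual Tulip Table schemas:

```python
from dataclasses import dataclass, field

# Hypothetical model of the planning + tracking tables described above.
@dataclass
class Routing:
    routing_id: str
    expected_stages: list  # from the Routing Stages table
    expected_data: list    # from the Data Master table

@dataclass
class Assembly:
    assembly_id: str
    activity: list = field(default_factory=list)    # stages actually logged
    metrics: dict = field(default_factory=dict)     # data actually logged
    exceptions: list = field(default_factory=list)  # defects, corrections, etc.

@dataclass
class Order:
    order_id: str
    routing: Routing
    assemblies: list = field(default_factory=list)
```

The key point is that expectations live on the Routing side while actuals accumulate on each Assembly, which is what QA review later compares.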

When users create new Orders, they select a routing for that order. Then they populate that order with assembly records (before rollout we’ll connect this to our ERP so the records are transferred from there instead of created manually). These assemblies get pushed to the manufacturing environment and are routed through the work process based on their Routing Stages.

At the end, each device gets pushed to an eDHR review application. QA review checks:

  1. Each order quantity is correct
  2. All exceptions are closed
  3. The activity logged for each assembly record matches the expected routing stages in the routing
  4. The data logged for each assembly record matches the expected data in the routing

I’d like to make sure this process is as efficient as possible. Currently, for QA review the user clicks through each assembly and the app runs a query to check the items above. If it doesn’t find anything, I’m comfortable saying they can accept as is. But for larger orders that could mean clicking through hundreds of records.

In an ideal case, what I’d like to do is be able to query info on the order and return to QA a list of only assemblies that were flagged for one of these items being out of tolerance, but I’m not sure how to do that.
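The desired "only the flagged assemblies" query could look something like this sketch, using plain dicts to stand in for Tulip Table rows (all field names are assumptions for illustration):

```python
# Hypothetical sketch: given an order's routing expectations and its
# assembly records, return only the assemblies that fail one of the
# QA checks (open exceptions, activity mismatch, data mismatch).
def flagged_assemblies(routing, assemblies):
    flagged = []
    for a in assemblies:
        reasons = []
        if any(e["status"] != "Closed" for e in a["exceptions"]):
            reasons.append("open exceptions")
        if a["activity"] != routing["expected_stages"]:
            reasons.append("activity does not match routing stages")
        if set(a["data"]) != set(routing["expected_data"]):
            reasons.append("logged data does not match expected data")
        if reasons:
            flagged.append((a["id"], reasons))
    return flagged
```

A QA reviewer would then only click through the returned list instead of every record in the order.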


In classic software fashion, writing that long post up gave me an idea:

My new solution to this problem is to use the Looper custom widget to run through the materials linked to the order and for each one, automatically check whether any of the data is out-of-conformance, and then only send those specific materials to the QE for review.

I know that the Looper widget caps out at a certain number of loops, but I believe that cap is still higher than the typical order quantities we would see.
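If an order ever did exceed the loop cap, the fallback would be to process assemblies in cap-sized batches. A minimal sketch of that idea (the cap value here is an assumed placeholder, not the widget's actual limit):

```python
# Assumed placeholder for the Looper widget's iteration limit.
MAX_LOOPS = 500

def batches(assembly_ids, cap=MAX_LOOPS):
    """Split an order's assembly IDs into cap-sized batches so each
    batch stays within the loop limit."""
    return [assembly_ids[i:i + cap] for i in range(0, len(assembly_ids), cap)]
```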

David, for the use case of showing a QA reviewer whether there are any exceptions on an Order, and if so how many and how many are not released, you shouldn’t need any Looper widget. Assuming you are storing the Order number as context in your Exceptions table, consider running a Query on the Exceptions table that filters records to a given Order (plus any other sensible filters, e.g. Exception Status) and uses a ‘Count’ aggregation. You can then present the user with a simple view of a given Order, e.g. ‘Total # of Exceptions’, ‘Total # of Open Exceptions’, ‘Total # of Released Exceptions’, etc.
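The filter-plus-count aggregation idea, sketched in plain Python (field names are illustrative, not the actual Exceptions table schema):

```python
# Hypothetical sketch of the Query + Count aggregation: filter the
# Exceptions table to one order, then count by status.
def exception_counts(exceptions, order_id):
    for_order = [e for e in exceptions if e["order"] == order_id]
    return {
        "total": len(for_order),
        "open": sum(1 for e in for_order if e["status"] == "Open"),
        "released": sum(1 for e in for_order if e["status"] == "Released"),
    }
```

In Tulip itself each of these counts would be its own aggregation on a filtered query, displayed directly in the review app with no looping.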


You also mentioned that QA needs to check if expected routing was followed and if data logged matches expected data. I would suggest rather than having complex looping logic to check this, build app logic where an exception is created if data entered is outside the expected value. You can also construct the app logic so that the routing is enforced. Apps built in this way will allow you to do a simple review by exception using the logic Tim mentioned above.
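The "create the exception at entry time" idea can be sketched as follows; the record shape and tolerance check are illustrative assumptions, not an actual Tulip trigger:

```python
# Hypothetical sketch: validate a metric against tolerance limits at
# data-entry time, and log an exception record immediately if the
# value is out of range, instead of discovering it at final review.
def record_metric(exceptions_table, assembly_id, name, value, low, high):
    in_tolerance = low <= value <= high
    if not in_tolerance:
        exceptions_table.append({
            "assembly": assembly_id,
            "metric": name,
            "value": value,
            "status": "Open",
            "reason": f"{name}={value} outside [{low}, {high}]",
        })
    return in_tolerance
```

With this pattern, review by exception reduces to querying the Exceptions table, since anything out of tolerance already created a record there.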


Hello David,

Based on what I read, you have designed what we call a monolithic solution that copies design patterns from traditional MES. Managing Item Master, Routing Master, Data Master, and Routing Stages tables creates what we call a master-data-driven design that replicates what traditional MES does. Here is a bit of background and an explanation of why Composable is preferable to Monolithic.

A monolithic solution is characterized by the following:

Data Model Centric
Process and Activity Models are defined by data in Tables, and Monolithic Apps are used to execute the process or activity model. Data models in Tulip Tables provide an abstraction of the complexity of the operations in a one-size-fits-all approach.

Process Centric
Monolithic Apps are built to serve a function based on a Functional Decomposition of the complexity of the operations. The finite set of Monolithic Apps are intended to provide the same function to frontline operators anywhere in the operation.

Designed for Maintainability
Monolithic Apps are designed to ease the maintenance and management of the solution by a central team by reducing the number and variety of Apps used. The monolithic solution is designed top-down in a rigid hierarchy where frontline operators serve the Apps with information by choosing which function is applicable vs being supported and enabled to do their work.

We would strongly recommend against this and instead suggest following a Composable approach, since Tulip is not a traditional MES. Tulip is NOT designed for building monolithic apps - i.e. one app to serve all industries, in all modalities, in all scenarios, with any machine, and for all operators. A monolithic solution results in what we call a JAM (Just Another MES).

This approach will inevitably result in a solution that is at best “just as good” as the other MES and will inherently have all the associated shortcomings.

  • Monolithic solutions take months or years and high effort to deploy: a long time to value.
  • They make inherent platform capabilities such as Vision, IIoT, and AI harder to use, and sometimes unusable, i.e. they reduce your ability to leverage native digital technologies.
  • They are not human-centric: the operator serves the system rather than the more valuable arrangement where the system serves the operator, which inhibits productivity gains.
  • They are inherently complex and hard to maintain; they require a dedicated team with unique knowledge of the solution, exactly like a custom-built software solution.
  • They do not scale well, since they expect all operations to adhere to one standard data model.
  • They take a strict top-down approach that assumes changes are minimal and generally known.
  • They are built to automate a process where humans have to play by a strict set of rules. This assumes very little change and that all variations are known.

Building a Composable Solution is Easy but Requires a Change in Mindset.
Composable solutions use the capabilities of the Tulip Platform to provide unique and specific ways for frontline operators to interact digitally and to be more productive. They give the operator a digital interactive solution where the physical and virtual worlds are interconnected. This is a critical principle in achieving productivity gains and is inherent to composable solutions.

The Tulip Platform is software (SaaS); however, Tulip apps should not be thought of as software. They are purpose-built, highly configurable digital content that should be continuously changed and adapted to the needs of frontline operations. Modifying or enhancing an app is the same as changing master data; in fact, apps are master data! The Tulip platform provides a way to manage app changes through a governed, version-controlled life cycle process to help manage this configurability. Apps are composed using no-code, and the App Solution is composed of Apps. Building solutions in Tulip using a monolithic, function-based approach, as if it were a software solution, critically constrains your ability to rapidly build solutions and gain the benefits of a composable system.

Other important benefits are:

  • Provide Augmented frontline workspace for increased productivity
  • Use of seamless integrated digital technologies including Vision, AI/ML, Smart devices, etc.
  • Instrumentation/digitization of processes and frontline operations to enable data driven decision and CI.
  • Guide production execution with shared information from Tables and external systems.

Composable solutions provide added value in their ability to easily integrate and collaborate with other systems. This is at the core of IIoT, where different autonomous devices and systems easily communicate and interact. Tulip is an IIoT platform and natively provides the ability to build integrations with other systems using its no-code approach. Consuming and sending data from other IIoT end-points can be achieved in hours by people with little IT background. This all requires a composable approach, where Apps have specific flows and connections to the local physical world.

I know this is a bit of a mouthful, I would be happy to go over this with you in more detail. Our team is also well versed in these concepts and can help guide you if you are interested.


  • Gilad

This topic definitely got a lot of great responses. The way we see this based on our internal discussions is that a solution “system” (I use system not to mean one single app, as @giladl has mentioned, but rather a business process for building these kinds of apps) should provide a way to standardize how systems communicate. Not so one app can do 10 things, but so that we can provide other users a readily available “do XYZ” guide.

You made the very good point that exceptions should be linked to a single data object (order) that the user sees. This makes sense, and encapsulates one of the per-material items we’d want to resolve into a per-order item.

@kim.phillips1 also makes a good point that items like “was the routing followed?” and “Is the data accurate?” should be handled at the app level. This also makes sense. It’s easy for us to validate as well that when the user finished the app, the record was updated to the next location.

@giladl’s points about Monolithic vs. Composable are well-taken. We’ve definitely observed that monolithic approaches are not fast. I think there’s absolutely room for us to move some items into an app-level handling, as discussed by other users. For example, storing information about data content in a standardized format in a table is not really helpful or relevant for the purposes of verifying that a DHR is present and complete (and will be a nice change that increases simplicity of creating new OPs).

However, to meet our regulatory requirements, we still need some method of showing that based on a list of work operations we claimed we’d do, that we actually completed those work operations. The same thing applies for tests: Given a list of tests, we need to ensure that the results of that testing are stored somewhere, associated with the material. Our experience using the record history widget is that when this is used with serialized components, this can readily bloom into a massive amount of data to review, so we’d lean towards tables to do this.

Here are my thoughts now:

  1. It would make sense to continue storing some information about item masters/routing masters in Tulip, to tell Tulip where to make new material records visible + where to run eDHR review. But that doesn’t require us to have nearly as much data as our current “prototype” version of this setup has.
  2. Instead of pointing to a discrete “work plan” built in place from operations stored in tables, this can be managed statically and under change-control in an eDHR review app for specific devices which need review.
  3. Verifying that all metrics are present is still probably a requirement for us, but verifying that each metric is passing is probably not, since any failing results should immediately trigger an exception, which can get reviewed.

I’ll do some internal polling to see if maybe I’m overthinking requirement 3. here.

I’m notorious for whiteboard vomit (:smiling_face_with_tear:), so @giladl if you want a more detailed discussion of that, here or in DMs is fine.

Remember that components in Tulip, including the record history widget, are built to support a composable approach. That may be why you are seeing too much detail in your views.

To your points above: this can all be easily achieved with an approach where apps are built in a way that supports the process, and there is no need for any additional tables to define routings, operations, etc. We have plenty of customers who do this, are fully validated, and have even been through audits with no observations. As I mentioned, it’s easy but requires a change of mindset at all levels, including your QA.

Conveying this is going to be hard in this format and will require us to exemplify it through some of your apps. In addition, we are working on a lot of new content in University and the Knowledge Base on this topic.


  • Gilad

@David2 this post and thread is wonderful. First of all, I’ve just looked at some of your applications and, for whatever it’s worth, I think you are a brilliant app builder. One of the best I’ve seen. Keep up the awesome work. Users like you push the platform and inspire us - so THANK YOU.

I want to weigh in on your use case, but admittedly I come from a perspective of general manufacturing, where GxP is not as much of a priority relative to continuous process improvement. With that said, I think you’ve arrived at a good place with your 3 bullet points listed. I’m particularly interested in how you intend to put point 2 in place. I’d love to hear more about your thoughts on this:

Instead of pointing to a discrete “work plan” built in place from operations stored in tables, this can be managed statically and under change-control in an eDHR review app for specific devices which need review.

I’m happy to weigh in on specific questions, like the one with the looper. Based on the problem you’ve described there I can’t honestly think of a better method – you can explore queries that accept “is in” functionality to pass the linked record array in as well as other filters on pass fail… but if you are going to need to apply logic or “calculated” fields within the query (comparing a target to an actual, for example) that is going to have to happen in a loop. Or if you want to look something up per record. It’s a limitation, for sure. But it speaks to something important, and that is complexity - so let’s talk about that for a moment.

You are a unique and talented builder and when you adopt a tool like Tulip you can absolutely change a business and improve the heck out of it (I’m cheering you on over here, btw). The challenge is that, although these table structures make sense to you, it’s wise to be aware of the inadvertent dependency you place upon subsequent owners of the applications you create. Someone is going to have to learn the world you’ve created and work within its rules.

With that said, I see that you’ve already created a number of applications that are built in a “composable” way… keep doing this! New Tulip builders at your company will benefit from these simple applications, it will enable them to quickly and easily add to your work without needing to intimately understand the structures you’ve built with tables and their interdependencies. When we get into master data such as routings (and other things that traditionally live in a system of record such as an erp) we end up creating complexity in Tulip that, while sometimes necessary, may reduce the sustainability of your solution and limit the democratized citizen development that Tulip was built for. It’s something to be vigilant and thoughtful about. In what way will applications depend upon these structures and, in what way can we make them independent? Which features inherent to the Tulip platform can be leveraged in favor of structures that you build on top of them?

Finally as it relates to Tulip Tables, GxP has many compliance related requirements that, when put under a microscope, subject Tulip Tables to an added layer of scrutiny and controls. Your quality manual and quality system are the best indicator of what you can and can’t use tables for… just something to be aware of and I’m not really the right person to weigh in here. Tables, as you know, are not immutable right now and are not version controlled.

All of this is to say, I support you and you seem like you are thinking about this the right way. The use of tables isn’t inherently monolithic, but a balance needs to be struck between simplicity of adoption and complexity and the resulting dependency. To the best of your ability always consider those who will come after you and the challenges that they will have building within the framework you are creating. It’s monolithic if only the most advanced users can change it, it’s composable if even someone with basic Tulip knowledge can contribute and solve problems (and even better if they can lean on your structures and templates to get going even faster).

Good luck and keep pushing!


Hey @freedman @giladl and others here.

This was a big question that got a lot of large responses. I appreciate the compliments about the apps we’ve built, there’s definitely a lot of work that’s gone into them, and I’ve also been very conscious about the complexity budget we can afford as a company when releasing applications. I’ve grabbed Gilad’s response as the most core example of this, and we’ve had some internal discussions about how we approach the DHR.

There’s a lot that has come out of those discussions that has had a big impact on how we’ll approach Tulip, in particular, that we’re comfortable with the “Phase Gate” approach discussed here. But also we’ve gotten more comfortable with the idea that there’s actually a lot more in these applications than just a workflow, and that we can put more weight on those features in the future.

For those who might revisit this topic in the future, I wanted to summarize our change in approach between now and the future:

  • We’ve started incorporating more “document-style” components in our applications. In some places this was to bridge the gap between electronic and non-electronic parts of our operations, but in others it’s to improve communication and documentation for reviewers. I am piloting a dedicated “Documentation” section that just includes details like risk level, generated manufacturing data, etc. in a place that reviewers can easily see.
  • I’ve reviewed a lot of the tables in my original model and have abstracted a lot of that info away. The “Master” parts are gone. “Metrics” is also gone. The important data for reference later was only that certain processes and tests were successful. This table was merged with “Activity” to become an even more generically labeled “Events” table.
  • Instead of storing an item master, we planned out and documented for future reference how a process was divided into “Phases” and what requirements had to be met for a material to exit a phase (usually these are 1:1 with apps, but for some pieces there may be 2-3 apps that are checked before the phase moves on).
  • By validating each phase gate operates correctly, we establish that only material that was completed successfully reaches our review applications. The review process is reduced significantly to just reviewing occasions where there were exceptions, like failed tests. The review is now just to ensure these items were dispositioned correctly.
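The phase-gate idea above can be sketched as a simple completeness check; the phase names, required apps, and event shape here are all illustrative assumptions, not our actual configuration:

```python
# Hypothetical sketch of a phase gate: a material may exit a phase only
# when every required app completion for that phase has been logged in
# the (generically named) Events table.
PHASE_REQUIREMENTS = {
    "Assembly": {"assemble_app"},
    "Test": {"functional_test_app", "leak_test_app"},
}

def can_exit_phase(phase, events, material_id):
    completed = {
        e["app"] for e in events
        if e["material"] == material_id and e["result"] == "pass"
    }
    return PHASE_REQUIREMENTS[phase] <= completed
```

Because each gate refuses to advance a material with missing or failed requirements, anything that reaches the review app is already known to be complete, and review collapses to checking exception dispositions.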

Hi David,

I am glad that you found value in our input. It’s great to see how you are adopting our best practices and pushing the limits of adoption at your company. I would recommend at this point that you get with your CSM and consider a more in-depth solution review with one of our Solution Leads.

Let me know how it goes.

  • Gilad