[Tutorial] Tulip for Asset Maintenance (Planning Processes around cGMP)

Hi all,

This has been a very exciting project and I wanted to share a bit about it to see how others approach these kinds of projects.

BACKGROUND
Like many companies in the life sciences, we use controlled environments (CEs) to ensure products are assembled with a low risk of contamination. Merrimack Manufacturing is a CM (contract manufacturer), and therefore we tend to have a lot of different projects come and go. Since we’re growing, we add new controlled environments at least once a year.

Currently, our entire process for monitoring and documenting controlled environments is paper based, and there’s a big push to move it to an electronic format for continuous monitoring and to reduce paperwork (we currently have several fireproof cabinets full of documents).

IMPLEMENTATION
Following a general framework I’ll talk about a bit later, we implemented the set of four apps below:

A quick run-through of the process:

  1. The CE Engineer creates and configures CEs using the Manager app
  2. The CE Engineer issues some recurring jobs for the CE in the Manager app
  3. The technicians run the jobs using the Daily Maintenance App and the Monitoring Apps
  4. Any out of tolerance data is gathered together into an Excursion Report
  5. The Excursion Report gets reviewed and closed
  6. A subset of jobs that require review get processed through the CE Manager

NOTE: If this were built for LTS 13, I would probably have split the Review and CE Manager into two apps, as newer Tulip versions include a way to transition to the first step of an app, which allows for more seamless traversal between the workflows here.

Now let’s look at the data. We haven’t zoomed in on the apps yet, but you can tell just by looking at this data that there’s quite a bit happening.

Without rehashing what’s in this diagram, basically:

  • The CE is an Asset (because it’s a thing we own) and a Location (because it’s a place you can be).
  • The CE has properties (an ISO level, alarm limits, etc.)
  • The CE has a number of testing points (stored as Locations, which is also why we make the CE a Location)
  • We can issue jobs against the CE asset to perform a piece of work
  • As we run our jobs, we save some output data tied to each test location (for analytics)
  • If something goes wrong, we turn it into a Report that must be actioned, with some number of events attached to the report.

Okay, that’s Part 1. In Part 2 I will talk a bit about why this is set up in this way and how it implements the kind of views you would typically use in a cGMP environment.


cGMP REQUIREMENTS
Implementing large systems in cGMP is a balancing act of how much and how little slack is allowed. People cannot be free to modify core records at will, but the system still needs enough flexibility to be usable day to day. These are some rules of thumb I generally follow:

  • There should be a way for a single person or small group to alter artifacts in a validated app
  • There should not be a way for a person to alter logs in a validated app
  • There should be a way for any user to export logs
  • There should be a way for any user to view and export artifacts
  • There should be a way for any user to view the full record history widget for artifacts and logs (and references, if they are a major part of your system)
  • There should be an “abbreviated” record history widget for major artifacts in your system (here, Jobs and Assets)
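A minimal sketch of these rules as a permission matrix, assuming a simple role model (the role and action names are mine, not Tulip’s permission system):

```python
# Hypothetical role/action matrix encoding the rules of thumb above.
PERMISSIONS = {
    "artifact": {
        "edit":   {"owner", "superuser"},           # a single person or small group
        "view":   {"owner", "superuser", "user"},   # any user can view
        "export": {"owner", "superuser", "user"},   # any user can export
    },
    "log": {
        "edit":   set(),                            # nobody can alter logs
        "view":   {"owner", "superuser", "user"},
        "export": {"owner", "superuser", "user"},   # any user can export logs
    },
}

def allowed(role: str, record_type: str, action: str) -> bool:
    return role in PERMISSIONS[record_type][action]

assert not allowed("superuser", "log", "edit")  # logs are append-only for everyone
assert allowed("user", "artifact", "export")
```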

As much as possible, you do not want to require people to go through the cloud interface for normal day-to-day activities. Not because there is anything wrong with the cloud interface, but because the probability of errors, lost data, and “jankiness”/appearing unprofessional goes up. Editing data in the cloud environment also introduces a compliance risk, as the changes need to be planned and verified, versus apps with validated workflows.

I’m going to focus on the Manager app first to see how this is set up:

If we look above, we see four groups:

  1. CEs - Our general user interface, for things like issuing new artifacts and viewing existing ones.
  2. Analytics - Our continuous monitoring group which is used less often
  3. Printables - A one-off step for a printable QR code
  4. CE Admin Panel - This is our edit interface

This is the core structure of pretty much all large manager apps you will need to build.

Let’s open it up:

First, we have our main selection interface. On the left are all the major groups used in this app. I generally recommend using an icon AND subtitle, for validation purposes and to make it very clear what each item refers to.

When we select a CE, we get our core CE view (Identifying pictures removed). On our right is our abbreviated record history. This is only showing us changes to the artifact that occurred in this manager, so at a glance we can see any alterations to the artifact, without needing to search. Note that there is no option to print. This is intentional. People should not be printing the abbreviated history to produce in an audit, they should be printing the full history.

There’s a small menu under our asset picture (large blacked-out block). Details will show us all the info for the CE, Analytics will show us our main continuous monitoring analytics, Asset History will show us the full history.

This is the full history. This kind of step should generally be pretty standard across all your apps. One thing to note is that you should basically always have the five filter options above: App Name, Step Name, User, Start Date, End Date. This ensures you do not have to search or dump the entire record history, possibly several hundred pages of it, if someone asks specifically about what Steve did last Thursday when running the monitoring process.
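As a rough sketch, the filtering amounts to something like this (the record field names are my own illustration, not the widget’s real schema):

```python
from datetime import date

def filter_history(records, app_name=None, step_name=None, user=None,
                   start_date=None, end_date=None):
    """Narrow a history export so you can answer 'what did Steve do last
    Thursday in the monitoring app' without dumping everything."""
    out = []
    for r in records:
        if app_name and r["app_name"] != app_name:
            continue
        if step_name and r["step_name"] != step_name:
            continue
        if user and r["user"] != user:
            continue
        if start_date and r["date"] < start_date:
            continue
        if end_date and r["date"] > end_date:
            continue
        out.append(r)
    return out

# Hypothetical usage:
history = [{"app_name": "Monitoring", "step_name": "Plate Count",
            "user": "Steve", "date": date(2024, 5, 2)}]
print(filter_history(history, user="Steve",
                     start_date=date(2024, 5, 2), end_date=date(2024, 5, 2)))
```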

One last one from the CEs: This is our admin interface, and is only accessible to the CE owner, or a list of allowed superusers who are part of the published app version. This follows on my point earlier about allowing a way for artifacts to be modified as needed. Here we can edit the name, details, properties, and test points.

The publish button in the bottom right is also a best practice. Never assume that users will enter all required setup data at once, or even have it ready. If it takes more than a few seconds to do, plan for the artifact to have some kind of “pending”, “waiting”, or “draft” status where it is kept separate from other items in your system.
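A sketch of the publish pattern, assuming the Draft/Effective statuses I describe in the takeaways (the required-field names are hypothetical):

```python
REQUIRED_FIELDS = ("name", "iso_level", "alarm_limits", "test_points")

def publish(ce: dict) -> dict:
    """Promote a Draft CE to Effective only once setup data is complete."""
    missing = [f for f in REQUIRED_FIELDS if not ce.get(f)]
    if missing:
        raise ValueError(f"Cannot publish, missing: {missing}")
    ce["status"] = "Effective"
    return ce

def active_ces(all_ces: list) -> list:
    """Normal views only ever see Effective CEs, never Drafts."""
    return [ce for ce in all_ces if ce.get("status") == "Effective"]
```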

Part 3 will cover the Jobs section.


JOBS

Jobs are similar to CEs in that they have a front page with an abbreviated record history.

One item I want to draw your attention to is the “Notes” field in the history. This is a technique that has worked very well for me: when you have an artifact record where you might need to document “conversation” around the item, include a Notes field. It can then be used in the record history to very quickly draw attention to major events.

Here, we see that we had an alert for a high Tryptic Soy Agar measurement after incubating our sample. This can easily become part of the review.
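The mechanics are as simple as appending a timestamped line to the artifact’s Notes field; a minimal sketch (the field and user names are illustrative):

```python
from datetime import datetime

def add_note(job: dict, text: str, user: str) -> None:
    """Append a timestamped note so major events stand out in the history."""
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    job["notes"] = (job.get("notes") or "") + f"\n[{stamp}] {user}: {text}"

job = {"job_id": "JOB-0042", "notes": ""}
add_note(job, "ALERT: high Tryptic Soy Agar count after incubation", "tech01")
```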

Under our menu we have similar options, with the ability to see the Excursion Report and Job Analytics.



NOTE: Plates omitted as there was too much data to redact.

If we open our analytics, we can see the three basic breakdowns of our data. The first is our nice pretty charts that include our alarm limit performance, next is our Test Data, and the last is our Plate data. Each one of these includes all fields for the record (excluding foreign keys) and is exportable for audit.

RUNNING JOBS

Both of the apps for running jobs are structurally the same, and follow pretty closely what I’ll call the “Tulip Formula” for batch processes.

  1. Pick the thing you’re working on
  2. Start the process.
  3. Tulip sets a “Step” field to the step name
  4. Enter data
  5. Make sure all the data is entered
  6. Go to the next step and update the “Step” field to the new step name
  7. Repeat until the process is finished.

You should pretty much never deviate from this structure for any process where the work is done by one person at a time, on a single artifact, across a sequence of operations in time.
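Written out as plain control flow, the formula looks roughly like this; in a real app each iteration is a Tulip step with triggers, and collect/validate stand in for form entry and your data validation (both names are mine):

```python
def run_job(job: dict, steps: list, collect, validate) -> None:
    """Sketch of the 'Tulip Formula': one person, one artifact, in sequence."""
    job["status"] = "Running"                 # start the process
    for step_name in steps:
        job["step"] = step_name               # Tulip sets the "Step" field
        data = collect(step_name)             # operator enters data
        while not validate(step_name, data):  # make sure all data is entered
            data = collect(step_name)
        job.setdefault("results", {})[step_name] = data
    job["status"] = "Complete"                # repeat until finished
```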

One thing I have done, which I’ve included above, is use steps with very little or no interaction as a front-end for reused step processes. This is a lightweight way to reduce the complexity of doing things like logging alerts, closing jobs, etc., and provides some visual feedback for operators.

Here we have a “Job Finished” step that just runs our logic for closing the job and issuing a new job when the user has finished. It avoids the awkward “Hang” that happens when a lot of logic is attached to a single button. The “Excursion Popup” step is basically the same thing, although it requires the user to acknowledge an alert before continuing.
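In pseudo-Python, such a step amounts to a single on-enter handler (the function names are illustrative):

```python
def job_finished_step(job: dict, issue_job) -> dict:
    """Runs on entering the 'Job Finished' step: the operator sees a brief
    confirmation screen instead of a long hang on one button's triggers."""
    job["status"] = "Complete"                       # close the current job
    return issue_job(job["ce_id"], job["job_type"])  # issue the next recurring job
```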


TAKEAWAYS

This was a very quick run-through of the app, but just to summarize some of the takeaways here:

For Architecture:

When setting up new data models:

  • There should be a way for a single person or small group to alter artifacts through a validated app
  • There should not be a way for a person to alter logs in a validated app
  • There should be a way for any user to export logs
  • There should be a way for any user to view and export artifacts
  • There should be a way for any user to view the full record history widget for artifacts and logs (and references, if they are a major part of your system)
  • There should be an “abbreviated” record history widget for major artifacts in your system (here, Jobs and Assets)

Apps for “managing” artifacts should be composed of:

  • A general view with easy access to artifact data dumps, logs, and analytics, and an abbreviated record history that shows changes to the artifact configuration only
  • A way to print any required labels or tags
  • A restricted admin interface that allows superusers to update data associated with the artifact configuration

Tips and Tricks

Full artifact histories should have the five filter options: App Name, Step Name, User, Start Date, End Date.

If it takes someone more than a few seconds to configure a record, there should be some way of separating “Draft” data from “Real” data in your system. I recommend using ISA 95 statuses (“Draft” vs. “Effective” for physical artifacts and references; “Waiting” vs. “Running”/“Ready”/“Complete” for operational artifacts).
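As a sketch, that status split is just two small enumerations:

```python
from enum import Enum

class ArtifactStatus(Enum):   # physical artifacts and references
    DRAFT = "Draft"
    EFFECTIVE = "Effective"

class JobStatus(Enum):        # operational artifacts
    WAITING = "Waiting"
    READY = "Ready"
    RUNNING = "Running"
    COMPLETE = "Complete"
```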

It’s useful to have a “Notes” field in your artifacts, used in conjunction with the record history widget to highlight events and alerts.

Use “Single Purpose” steps with little or no interaction to handle specific functions like raising alerts, logging exceptions, or closing jobs.

Building Processes in Apps

If your process has one person at a time, handling one artifact, through multiple time-sequenced operations, the best approach is the “Tulip Formula” of:

  1. Pick the thing you’re working on
  2. Start the process.
  3. Tulip sets a “Step” field to the step name
  4. Enter data
  5. Make sure all the data is entered
  6. Go to the next step and update the “Step” field to the new step name
  7. Repeat until the process is finished.

David, wow thank you for sharing this use case with all this detail! It is really cool to see this from start to end, with how you designed the data architecture and apps and some key decisions you made along the way.

I am sure many @life_sciences Tulip users (and even those who aren’t) can learn a lot from this example and your advice here.

Very nice use of the Common Data Model and the digital twin concept!


@David2 This might very well be my favorite post of all time. Thank you for sharing your best practices. Inspiring, educational… awesome.

You should be proud.


Really appreciate the very positive response this has had. I got a question in my PMs and wanted to include my response here, because I believe this is a common question about cGMP with this system.

When using the history widget as the prime vehicle for logging and presentation of data, what is your opinion on handling operator mistakes? Whenever I show a design where the log is the completion record, the first question I get is “What if end users make mistakes and need to correct a flawed registration?” How do you handle that?
When I ask Tulip, they say “make a new registration”. But if they append a note or correct a value, it will be placed out of context with the original entry, just adding to the confusion.

There’s a two part answer to this.

First Point: Tulip is correct

Because the record history serves as an audit log of the activity that occurs around an artifact, it is very important that the log accurately reflects the activities that occur during the manufacturing process. Using a “Notes” field is one way to do this. Another is to document corrections to data via a comment or exception record (such as those included in the Composable MES’s common data model). Finally, you can rerun the process and log a completely new set of data. There is no one correct answer; depending on your needs, you may use all of them simultaneously. As a shorthand:

  • Artifact notes - Allows seeing the history of an artifact ‘at a glance’ in simple terms.
  • Exceptions - Allows someone else to make decisions about the correction/error.
  • Rerunning the process - Allows re-generating data used downstream in the process.
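A sketch of what an exception record might carry, so the original entry stays intact and the correction is reviewable in context (the field names are mine, not the Composable MES schema):

```python
def raise_exception(job_id: str, original_value, corrected_value,
                    rationale: str, user: str) -> dict:
    """Document a correction as a new record instead of editing the log."""
    return {
        "job_id": job_id,
        "type": "Data Correction",
        "original_value": original_value,   # the flawed registration is preserved
        "corrected_value": corrected_value,
        "rationale": rationale,
        "raised_by": user,
        "status": "Open",                   # must be reviewed and closed
        "history": [],                      # QA/operator back-and-forth, for audit
    }
```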

Second: Transition to Exception-Based Review

In a cGMP system, review serves two primary purposes.

  1. It is a legal requirement. You must review and someone must sign that the product is safe.
  2. It is an opportunity to check records to ensure ALCOA is not violated.

Design apps that clearly and accurately capture what each result was, ensure you are recording the assets used, and employ data validation on entries. Your validation should robustly test that these controls are correctly designed and implemented. This allows you to validate that you are meeting ALCOA, rather than needing to continuously verify it.

For product review, use an Exceptions table. While the audit log can show the full process history, when you want to see whether data was corrected you should be checking Exceptions. Ensure exceptions must be closed (with an e-sig!) before product can be marked as complete. Ensure exceptions come with appropriate context, and that it is possible for operators to correct/update their rationales based on feedback from QA (and that a history of these updates is available for audit!)

Items like “I need to rerun widget stamping due to XYZ” or “I recorded a widget width of 20mm but actually it was 30mm” should be the kind of items that go in Exceptions, to leave a breadcrumb trail of what happened and why when reviewed later. This is both easier to review, and much easier to follow later, especially when your exceptions and artifact notes are “talking” to each other so that these errors appear in both places.
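A minimal sketch of that gating logic, assuming hypothetical field names:

```python
def close_exception(exc: dict, qa_user: str, esig: str, decision: str) -> None:
    """Close an exception with an e-signature, keeping the update history."""
    exc["history"].append({"user": qa_user, "decision": decision})
    exc["esig"] = esig
    exc["status"] = "Closed"

def can_mark_complete(product_id: str, exceptions: list) -> bool:
    """Product cannot be marked complete while any exception is still open."""
    return not any(e["product_id"] == product_id and e["status"] != "Closed"
                   for e in exceptions)
```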
