Completion Column Limits?

I understand that Tulip tables have a limit of 200 columns. Do completions have the same limit imposed?
When digitizing batch records, it is reasonable to encounter needs beyond that limit. Is the recommended approach to break the batch record into multiple tables (perhaps per app) and then link these tables together? If so, that would suggest that any sign-off would be constrained to being “piecewise” relative to the full batch record. Just curious how Tulip approaches that from a compliance/audit perspective.

Hi @Lance,
If you can provide some additional information on the context of the data being stored in this table, I can refine my commentary below:

Re Completion Records:

  • (For reference, Overview of Completion Records)
  • To my knowledge, Completion Records have no limit on the number of variables, columns, etc. that can be stored.
  • Completion Records are great for creating an immutable audit trail and providing visibility into that trail via the Record History Widget, but they do have some limitations, e.g. if there is a business case for visualizing data outside of the Tulip platform, that use case depends on the data being stored in Tulip Tables.

Re Tulip Tables:

  • If you have Tulip Table(s) approaching ~200 columns, that is indicative of an unscalable table model design. A good exercise is to evaluate such a table and try to categorize/classify the various columns therein.
  • For example, if several dozen columns are essentially collected process parameter values, consider whether those particular values need to be stored in a Tulip Table (e.g. for visualization outside of Tulip OR to be used as an input to some trigger logic after being stored). If the parameter does not need to be stored in a Tulip Table, then perhaps create a Table Analytic (dynamically filtered for such values collected for a given Batch) and visualize the parameters that way.
  • For process parameters that do have a legitimate business case for being stored in a Tulip Table, I have previously recommended to customers in various industries a scalable ‘Data Collection’ table, wherein each record in the table is a distinct parameter with its appropriate context (Value, UoM, Related Batch ID, etc.). If a parameter value is needed as an input to a trigger, the appropriate record can be fetched with a well-designed Query and an Aggregation such as Mode of ID. A rough sketch of this model follows below the list.
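To make the "one record per parameter" idea concrete, here is a minimal Python sketch of how such a Data Collection table could be modeled and queried. The field names (`batch_id`, `parameter`, `uom`, etc.) are hypothetical placeholders rather than Tulip's actual schema, and the plain-Python filter and mode lookup stand in for a Tulip Query and Aggregation:

```python
# Minimal sketch of a generic "Data Collection" table model.
# Each record is one collected parameter value plus its context,
# so the table grows by adding rows, never columns.
from collections import Counter

records = [
    {"id": "rec-001", "batch_id": "B-1001", "parameter": "pH",
     "value": "6.8", "uom": "pH", "collected_by": "operator1"},
    {"id": "rec-002", "batch_id": "B-1001", "parameter": "Temperature",
     "value": "72.4", "uom": "degC", "collected_by": "operator1"},
    {"id": "rec-003", "batch_id": "B-1002", "parameter": "pH",
     "value": "7.1", "uom": "pH", "collected_by": "operator2"},
]

def fetch_parameter(records, batch_id, parameter):
    """Filter the table the way a Query would: by batch and parameter name."""
    matches = [r for r in records
               if r["batch_id"] == batch_id and r["parameter"] == parameter]
    if not matches:
        return None
    # If several records match, resolve to the most common ID
    # (a stand-in for an aggregation such as Mode of ID).
    most_common_id, _ = Counter(r["id"] for r in matches).most_common(1)[0]
    return next(r for r in matches if r["id"] == most_common_id)

print(fetch_parameter(records, "B-1001", "pH"))  # -> the pH record for batch B-1001
```

The design choice is that a new parameter, or an entirely new app, only adds rows; the column count never approaches the table limit.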

I hope this is helpful.

Tim Reblitz
Tulip | Digital Transformation Consultant

Tim,

Thanks for reaching out. Here is a quick summary of what we are encountering…

Our customer has paper batch records that have already been approved by the FDA, so they are interested in reproducing them as closely as possible within Tulip while maintaining 21 CFR Part 11 compliance. Their paper documents capture anywhere from a couple hundred data values up to several hundred. We are already anticipating breaking this data collection down into multiple apps based upon when, where, or by whom the process is being completed. We anticipate creating a unique table for each app and then linking the various tables (using shared IDs) to provide traceability of all the batch data. I’m pretty confident this will keep us below the 200-field limit for tables, but because we have additional (fairly frequent) requirements for signature and sign-off during data collection, which will only be stored in the completion record, I wanted to understand whether we might have another constraint in the design.
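(For readers skimming the thread, a rough Python sketch of the linking approach described above; table and field names are hypothetical, not Tulip's actual schema. Each app writes to its own table, and the shared Batch ID is the key that ties the pieces back together.)

```python
# Sketch of "one table per app, linked by a shared Batch ID".
# Two apps write to two different (hypothetical) tables.
dispensing_table = [
    {"id": "D-1", "batch_id": "B-1001", "material": "API-42", "weight_kg": 12.5},
]
blending_table = [
    {"id": "BL-1", "batch_id": "B-1001", "blend_time_min": 45, "rpm": 300},
]

def assemble_batch_record(batch_id, *tables):
    """Collect every record sharing the Batch ID across the per-app tables."""
    return [r for table in tables for r in table if r["batch_id"] == batch_id]

# Traceability view for one batch, pieced together from the linked tables.
print(assemble_batch_record("B-1001", dispensing_table, blending_table))
```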

Our SmartFactory solution uses the “Data Collector” pattern that you mentioned, but customers in regulated industries are hesitant to adopt it because they feel they would have to recertify the auditability and compliance. If you have tackled that objection, it would be great to understand how you approached it.

Thanks,
Lance

@Lance, we have had GxP customers use a Data Collection table like the one I described, in part because it simplifies eBR review and eDHR review activities.

I just want to reiterate my recommendation to establish a table model that is scalable to any number of apps, rather than a solution that requires a new table to be created for each new app. The per-app approach will ultimately be a bigger maintenance headache in the long run than rearchitecting the table model now to allow for future scalability.
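As an illustration of the maintenance difference (again with hypothetical field names, not Tulip's actual schema), a single Data Collection table lets a new app start contributing records with no schema change, and a batch review is a single filter rather than a query against N per-app tables:

```python
# Sketch of why one scalable Data Collection table simplifies review:
# a new app only adds rows (with a different "app" value), not a new table.
data_collection_table = [
    {"batch_id": "B-1001", "app": "Dispensing", "parameter": "Weight", "value": "12.5", "uom": "kg"},
    {"batch_id": "B-1001", "app": "Blending",   "parameter": "RPM",    "value": "300",  "uom": "rpm"},
    # A brand-new app can start writing records with no schema change:
    {"batch_id": "B-1001", "app": "Packaging",  "parameter": "Count",  "value": "5000", "uom": "units"},
]

def ebr_review_view(table, batch_id):
    """One filter returns every collected value for the batch, regardless of which app wrote it."""
    return [r for r in table if r["batch_id"] == batch_id]

print(ebr_review_view(data_collection_table, "B-1001"))
```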