Record creation and duplication


I am working on an app that takes a barcode as input, goes through a series of questions, and creates a record in a table reflecting the steps taken.

The problem I am having is that the barcode is associated with an already existing row, and I want to record each arbitrary step, now and in the future.

To facilitate that, I first store the barcode in a variable
→ then pull the latest row associated with that barcode (through a query and aggregation, an annoying 3 rows right there)
→ then manually duplicate the record
→ then edit it to reflect reality
See a screenshot of the last two steps below.

To make matters worse, this app should deal with up to 4 PHs at a time.

This cannot be the most efficient way to go about this… Any suggestions for making this more straightforward would be very welcome.

Hey there - just wanted to let you know we are looking at this and will see if we have anything that can help!

In the meantime, I may tag @jmlowden to see if he has any thoughts - he is a table guru after all the time he has spent building table-heavy apps with Tulip :slight_smile:

Hi @salmagor!

Just to get a better idea of the use case, can you compare/contrast the purpose of the two tables?

  • Temp PH
  • Incoming PH#1 Record

Morning @jmlowden!

I do not know if I’m following BKM here; maybe this is just an odd workaround.

The idea is as follows:

  • I have a table where each row represents the current status of a print head, where status can be things like “active”, “refurbished”, “queue” etc.
  • The print heads are identified by a barcode printed on them physically. In a relational DB I would make the primary key the barcode (serial #) + timestamp, but in Tulip it is just the ID column.
  • The goal of the app is to pull the most recent row for an arbitrary print head serial number (up to four of them at a time), update its status, and write a new row with the new status.
  • The whole array thing is my (maybe poor) way of trying to replicate “SELECT ID FROM ‘MyTable’ WHERE Serial Number = PH#1Serial ORDER BY Timestamp DESC LIMIT 1”
    *I would have liked to do “WHERE Serial Number IN (Incoming Array)” but didn’t manage to execute that
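For what it's worth, the same lookup can also be expressed against the Tulip Table API rather than in-app queries. A minimal sketch, assuming the filters/sortOptions/limit query parameters on the records endpoint; the column ID "serial_number" and the "_createdAt" sort key are placeholders, not your actual field IDs:

```python
# Sketch: query string for "latest record for one serial number".
# Parameter names (filters, sortOptions, limit) follow the Table API docs;
# the column ID "serial_number" and "_createdAt" sort key are assumptions.
import json
import urllib.parse

def latest_record_query(serial: str, limit: int = 1) -> str:
    """Build the query string for GET /api/v3/tables/{table_id}/records."""
    filters = [{"field": "serial_number", "functionType": "equal", "arg": serial}]
    sort_options = [{"sortBy": "_createdAt", "sortDir": "desc"}]
    return urllib.parse.urlencode({
        "filters": json.dumps(filters),
        "sortOptions": json.dumps(sort_options),
        "limit": limit,
    })

# Usage: GET /api/v3/tables/{table_id}/records?<latest_record_query("PH-1234")>
# Raising limit to 4 would cover the "up to four print heads" case.
```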

Now for your question…

The Temp PH table is there because I deal with print heads whose status is ‘Active’ differently from those whose status is ‘Queue’, so I load the row into a staging record, then create a new record based on the status.

Hope this helps you help me, and please let me know if I’m going about this in a backwards way.



Off the top of my head: using queries and aggregations (all log records matching the PH ID, sorted newest to oldest, limit 1 record; a mode aggregation for the log record ID), you could check the last entry for a given printhead. If the last status is Active, create a new record accordingly and view that new record plus the previous three (same query as above, but limit 4).

I will think about this some more.

@salmagor @jmlowden

I got some input from the team on how to best approach something like this:

  1. Use the table to represent the current status of the physical asset. If an asset changes from “in queue” to “active” then we update the table record “status” field for that asset.
  2. Then capture a completion record, and the history/lifecycle of the asset can be viewed in the record history widget. This prevents having to duplicate so much information in the table every time an asset changes.
  3. To capture all the changes to the printheads over time, build an app that captures the steps (via completions), with the last step in the app being a trigger to update the record to its current status.

This may also provide some clarity on the approach:

Hope this helps!

Am I understanding correctly that Tulip can save the full history (i.e. every status row, sequentially) automatically in the “completion record”, while the tables I use are just a subset of the completion record?

Where can I see this completion record? What is the “history widget” and how do I configure it? The Tulip Data Model does not go into those details; it only talks about tables.

I highly recommend looking at this article and playing with this fully functional Library example - I think this will help explain completion records vs tables and how they can work together!

  1. Tulip Data Functional Example

  2. Tulip Data Functional Example | Frontline Operations Platform

More information on history widget here: Using the Record History Widget to View Changes to Table Records

Ok, took me a bit but I think I understand.

Am I able to query the “Completions” record / access it via an API? My use case requires me to actually use the logged data, not just view it in a step, and the Data Functional Examples you posted only show how to view completions from the Tulip web interface.

I am checking with the team on those questions and will let you know what I find!

Also a thought: if you are interested in coming to office hours, that could be a good way for us to walk through your use case here (where you can share your screen so it is easier to understand) and brainstorm how to get the data you need in the most efficient way!

The times and meeting links are in the Office Hours channel here: Office Hours - Tulip Community

Hi! I am in need of a record duplication trigger as well.

Currently, we have many machine building work instructions digitized in Tulip. To save and record a history of each machine, we opted to create multiple records for the same machine, documenting the machine’s process flow as it goes through its build process within the same table. The alternative was to have only one record per machine, where we could use the table record history widget to analyze each machine’s build history.

During the machine building process, issues and fixes can occur, and previously completed check items need to be redone. These check items can reside in any of the work instruction tables we created. I want to create an application, called the “Fixes Application”, that automatically “undoes” the previously completed check items. Due to the data saving approach we took, I do not want to delete data residing in the most recently created record, because we would lose the machine’s history at that point in time. Instead, it would be convenient to duplicate a record with one trigger that sets a new record ID and automatically copies all other data into the new record. From there, I could delete individual data fields without fearing the loss of data history.

Currently, I have over 100 different data tables that could be affected by the Fixes Application. I do not have the time to create a record duplication function for each unique data table, to update this application whenever columns are added to or deleted from existing tables, or to create new duplicate-record functions for every work instruction application I create.

Is a “Duplicate Record” action that takes a table record, a table record ID, and a new table record ID as inputs possible? Otherwise, I think the only remaining option for my team is to switch our data saving model. Let me know what you think.

Thank you!

Duplicating a record should be possible by calling the Table API read (records endpoint), filtered to the needed record. Make sure to select only the fields that are not metadata.

Then change the value for ID (to a new id)

And then run the Table API post (records endpoint) with this object (everything the same except the ID).

You could even change some values if needed…
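The read-then-write flow described above might be sketched like this. This is a non-authoritative sketch: the endpoint paths, the Basic-auth credentials, and the underscore prefix on metadata keys are assumptions to verify against your instance's Table API documentation.

```python
# Sketch of duplicating one table record: GET it, drop metadata, give it a
# new ID, POST it back. Endpoint paths, auth, and the "_" metadata-key
# convention are assumptions; check them against your Table API docs.
import base64
import json
import urllib.request

BASE = "https://your-instance.tulip.co/api/v3"            # hypothetical instance
TOKEN = base64.b64encode(b"api_key:api_secret").decode()  # Table API bot creds
HEADERS = {"Authorization": f"Basic {TOKEN}", "Content-Type": "application/json"}

def duplicate_payload(record: dict, new_id: str) -> dict:
    """Same field values, new ID; metadata keys (assumed to start with '_') dropped."""
    body = {k: v for k, v in record.items() if not k.startswith("_")}
    body["id"] = new_id
    return body

def duplicate_record(table_id: str, record_id: str, new_id: str) -> dict:
    # 1) read the existing record
    get = urllib.request.Request(
        f"{BASE}/tables/{table_id}/records/{record_id}", headers=HEADERS)
    with urllib.request.urlopen(get) as resp:
        record = json.load(resp)
    # 2) write it back with a new ID (change other field values here if needed)
    body = duplicate_payload(record, new_id)
    post = urllib.request.Request(
        f"{BASE}/tables/{table_id}/records",
        data=json.dumps(body).encode(), headers=HEADERS, method="POST")
    with urllib.request.urlopen(post) as resp:
        return json.load(resp)
```

The same two calls map onto a GET and a POST connector function inside Tulip; the sketch just makes the "strip metadata, swap the ID" step explicit.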

I’ll give this a shot! Thank you for the suggestion.

I would like to explain my process for clarification. I tried getting a table record using the Table API read. From there, I need to define each output that I want from the table, one by one. This means that if the table has 200 columns, I need to define 200 columns’ worth of outputs. (The provided picture only outputs one column, for testing purposes.)

Then, in the POST call, I need to define each individual column ID statically and set the value of that column using an input variable. 200 columns would equate to 200 input variables.

This could potentially be made a little easier if I could use input variables for the column ID, but I don’t know if I am able to use an object name as a variable.

@thorsten.langner Does this match the procedure you had in mind? Is this the only way to duplicate records using the Table API?

There are several ways to create the POST API function.
You could also send plain text or hand over a whole object.

For getting the data, there is no need to specify a single record ID.
You can get any number of records by using filters.

However, you will always need to assign an ID to all of the records. How would you solve this in an optimal solution?

My first thought was about a “copy a single record” solution.

I never fully followed through on this solution; it just came to mind, to push things in a possible direction.

I agree; this is the only solution I’ve considered as well. I would like a function that is scalable, but defining each column for the GET and POST connector functions would be frustrating to update (especially since connector functions cannot be edited once they’ve been saved).

I think I will try playing around with custom widgets. Maybe I will be able to read the column IDs from the GET function and automate the POST function using the column IDs and column values from the GET response. If you have any advice for that, let me know! I will update when I make progress.
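The table-agnostic idea being described, building the POST body from whatever columns the GET response happens to contain, so nothing has to be defined per table, can be sketched in ordinary code. The helper name is hypothetical, and the only assumption is that the fetched record is a flat JSON object whose metadata keys start with an underscore:

```python
# Sketch: build a POST body from whatever columns the GET response contains,
# so no per-table, per-column mapping is needed. The helper name is
# hypothetical; the "_" convention for metadata keys is an assumption.
from typing import Optional

def build_duplicate(record: dict, new_id: str,
                    overrides: Optional[dict] = None) -> dict:
    """Copy every non-metadata field, assign a new ID, then apply overrides
    (e.g. blanking the check items that have to be redone)."""
    body = {k: v for k, v in record.items()
            if k != "id" and not k.startswith("_")}
    body["id"] = new_id
    if overrides:
        body.update(overrides)
    return body

# Usage: build_duplicate(fetched, "MACH-042-r2", {"check_03_done": None})
```

Because the copy is driven by the keys of the fetched record, adding or removing columns in a table would not require touching this logic, which is the scalability property asked about above.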


Hello @dsun and @thorsten.langner,

I see that you are well into the different technical approaches to solving the specific use case David has described. The solution here is to use the available features in Tulip to eliminate the need to capture history manually, and therefore the duplication of records. This can be achieved simply by using the built-in completion records of an app. This way you can have one record per machine in a table, while the apps, as they execute, record a history log that is tied to the record and even capture the state of the record during execution.

The Tulip team strongly advocates a “digital twin” concept in how data is captured in Tulip. The intent is to enable and enhance citizen development by creating apps that are simple, adoptable, human-centric and focused. The concept treats apps as representations of activities and processes, while tables are used to represent physical and operational artifacts.

  • Physical artifacts are things that you can feel or touch in the industrial operation, e.g. equipment, material, units, parts, tools, etc.
  • Operational artifacts are tangible things that are central to industrial operations, e.g. orders, jobs, actions, andons, defects, etc.

We recommend minimizing or eliminating the use of “abstract tables”, e.g. “equipment history”: this information is captured natively in the product. By following this method, your apps become much simpler, easier to understand, and quicker to build and modify. History is captured automatically and is also immutable, i.e. it cannot be changed or deleted. We are working to support this method of history capture with more enhanced features coming very soon as well.

I hope this helps; please let me know if you need more help or would like to discuss further.


  • Gilad