I’m puzzling through the optimal architecture for master data in a Tulip Table and wondering whether there are any meaningful differences in performance/efficiency between these two approaches:
A) Create my table records with IDs that I can call up directly from the barcode scan output. This will require expressions to string together non-adjacent sections of the barcode to construct the table record ID to load.
B) Use a table aggregation to get the ID of the record I need to load based on the content of the scanned barcode.
Option A is going to require a higher level of rigor for creating table records, as I would need to add variant suffixes to the GTINs to prevent duplicate IDs.
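To make the string-stitching concrete, here’s a minimal Python sketch of the Option A logic (in the app itself this would be a Tulip trigger expression instead). The slice offsets follow a hypothetical GS1-style layout, and the variant-suffix rule is a placeholder assumption, not a real schema:

```python
# Hypothetical layout: AI "01" + GTIN-14 + AI "10" + lot/batch.
def build_record_id(scan: str, variant: str) -> str:
    """Stitch non-adjacent barcode sections into one table record ID."""
    gtin = scan[2:16]   # 14-digit GTIN after the "01" application identifier
    lot = scan[18:]     # lot/batch after the "10" application identifier
    # The variant suffix keeps records sharing a GTIN from colliding.
    return f"{gtin}-{lot}-{variant}"

print(build_record_id("010040123456789010ABC123", "V1"))
# -> 00401234567890-ABC123-V1
```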
Option B could use random IDs, but I’m wondering whether using barcode scan output from earlier triggers as app input for a table query that produces an aggregation result might run into performance issues: the aggregation might not load in time for the subsequent device trigger to load the table record.
I have similar concerns about performance and subsequent triggers after an aggregation in our client’s application. Can you share your experience? How did you solve it in the end?
I checked in with some of our solution engineers on this!
In terms of evaluating Option A and Option B for best practice: neither is “bad practice”.
Solution Requirements:
Option A: Requires maintaining a barcode schema, managing printing (or ordering) the labels, and creating/maintaining Tulip trigger expression logic everywhere (which might be easier with reusable logic coming in the future)
Option B: Requires understanding Tables queries/aggregations (and things like how to use Mode aggregation), and creating/maintaining special Tulip trigger expression logic everywhere.
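If it helps to picture the Option B lookup, here’s a rough Python model of the query-plus-Mode pattern; the table layout and field names are made up for illustration (in Tulip this would be a table query filtered by the scan content, feeding a Mode aggregation on the ID column):

```python
from collections import Counter

table = [
    {"id": "a1b2c3", "gtin": "00401234567890", "lot": "ABC123"},
    {"id": "d4e5f6", "gtin": "00401234567890", "lot": "XYZ789"},
]

def lookup_id(rows: list[dict], gtin: str, lot: str) -> str:
    """Filter by scan content, then take the Mode of the ID column."""
    ids = [r["id"] for r in rows if r["gtin"] == gtin and r["lot"] == lot]
    # With random unique IDs the filter should match exactly one record,
    # so Mode is just a convenient way to pull out that single ID.
    return Counter(ids).most_common(1)[0][0]

print(lookup_id(table, "00401234567890", "XYZ789"))  # -> d4e5f6
```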
There should be no performance concerns with either approach. And, for the record, 200k records is pretty minimal compared to other customer usage (we have customers with over 1 million records running aggregations and such just fine).
So it’s really up to you which option you prefer here! As always, the goal is to keep things simple, especially for future app builders in your org, so if you feel one of these options is simpler in that sense, that could be a good reason to choose it.
So just to make sure that I understand this correctly: if I
Change aggregation parameters
Reference aggregation data
Do something based on aggregation data
then the next trigger will always work with loaded aggregation data? Is there no possibility that the aggregation won’t load in time and break the subsequent triggers? Will the triggers wait for the aggregation to refresh and then continue working?
I haven’t run into a situation yet where a lagging aggregation update has caused an error, so it might just be a hypothetical problem / red herring.
That said, table write and play sound triggers seem to introduce a small delay in trigger execution, which can allow time for a display refresh when visualizing rapidly changing widget data (such as when step-looping).
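Purely to make the hypothetical above concrete, here’s a toy asyncio model of the feared race: a fire-and-forget refresh could let the next trigger read stale data, while an awaited refresh cannot. This is a conceptual sketch only, not a claim about how Tulip’s trigger engine actually works internally:

```python
import asyncio

aggregation_result = "stale-id"

async def refresh_aggregation(new_value: str) -> None:
    """Simulates a table query/aggregation refresh with some latency."""
    global aggregation_result
    await asyncio.sleep(0.1)
    aggregation_result = new_value

async def next_trigger() -> None:
    print("next trigger sees:", aggregation_result)

async def racy() -> None:
    asyncio.create_task(refresh_aggregation("fresh-id"))  # not awaited
    await next_trigger()  # can run before the refresh lands

async def safe() -> None:
    await refresh_aggregation("fresh-id")  # refresh completes first
    await next_trigger()

asyncio.run(racy())  # -> next trigger sees: stale-id
asyncio.run(safe())  # -> next trigger sees: fresh-id
```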