Trigger Performance/Efficiency Differences

I’m puzzling through the optimal architecture for master data in a Tulip Table and wondering whether there would be any meaningful difference in performance/efficiency between these two approaches:

A) Create my table records with IDs that I can call up directly from the barcode scan output. This will require expressions to stitch together non-adjacent sections of the barcode to construct the table record ID to load.

B) Use a table aggregation to get the ID of the record I need to load, based on the content of the scanned barcode.

Option A is going to require a higher level of rigor when creating table records, as I would need to add variant suffixes to the GTINs to prevent duplicate IDs; the sketch below shows the kind of ID construction I have in mind.
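For concreteness, here’s a minimal Python sketch of that ID construction (in the app itself this would live in the expression editor, not Python). The fixed slice positions, the (01)/(10) application identifiers, and the variant-suffix scheme are all hypothetical stand-ins for whatever my real labels encode:

```python
# Sketch of the Option A ID construction. Assumes a fixed-layout,
# GS1-style barcode; the slice positions, the (01)/(10) application
# identifiers, and the "A"/"B" variant suffixes are hypothetical.

def build_record_id(barcode: str, variant: str) -> str:
    """Stitch non-adjacent barcode sections into a table record ID."""
    gtin = barcode[2:16]   # 14 digits following the (01) identifier
    lot = barcode[18:28]   # 10 characters following the (10) identifier
    # The variant suffix keeps IDs unique when records share a GTIN + lot.
    return f"{gtin}-{lot}-{variant}"

scan = "01" + "00012345678905" + "10" + "LOT0012345"
print(build_record_id(scan, "A"))  # 00012345678905-LOT0012345-A
print(build_record_id(scan, "B"))  # 00012345678905-LOT0012345-B
```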

Option B could use random IDs, but I’m wondering about timing: the barcode scan output from an earlier trigger would feed a table query as app input, and if the resulting aggregation doesn’t load in time, the subsequent device trigger could fire before the table record can be loaded. A toy model of what I mean is below.
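Here’s a small Python model of the Option B lookup, just to make the pattern concrete; the table contents, the `gtin` field, and the unique-value-style aggregation over `id` are all made up for illustration. The timing worry is that the real aggregation resolves server-side, so the aggregation result could still be stale or empty when the next device trigger fires:

```python
# Toy model of the Option B lookup: a query filtered on a hypothetical
# "gtin" field, with a unique-value-style aggregation over "id".
# Table rows and field names are invented for illustration only.

from typing import Optional

lookup_table = [
    {"id": "rec-8f3a", "gtin": "00012345678905", "desc": "Widget, blue"},
    {"id": "rec-2c91", "gtin": "00098765432109", "desc": "Widget, red"},
]

def id_for_scan(scanned_gtin: str) -> Optional[str]:
    """Mimic: query (gtin == scan output) -> aggregation (unique value of id)."""
    matches = [row["id"] for row in lookup_table if row["gtin"] == scanned_gtin]
    # A unique-value aggregation only resolves cleanly with exactly one match;
    # with random record IDs, uniqueness has to come from the gtin field itself.
    return matches[0] if len(matches) == 1 else None

print(id_for_scan("00012345678905"))  # -> rec-8f3a
```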

Was this ever an issue with earlier LTS releases? Is it still an issue now? Or am I worrying about nothing?

For reference, the lookup table would have something like 200k rows.