Record retrieval performance: Load Record by ID vs table query filtering on a unique values column?

Hello.

Some of my customers have a Tulip table of articles. They use ERP IDs (= surrogate keys) as ID values. Record retrieval often happens by ID: no table query is involved; we just use the Load Record trigger and pass it the ID of the record to load.

However, they also have use cases where they want to fetch a record by, e.g., an article reference (= natural key). This reference is stored in another column of the table (for the purposes of this conversation, assume that all values in that column are distinct from each other). We plan to use a table query with a simple filter like Article reference Equals App Input.

However, the number of articles is getting large (~100’000 to 200’000 items) and I’m worried about performance. Here are my questions:

  1. Is the ID column of a Tulip table indexed for faster search?
  2. Are other columns of a Tulip table indexed?
  3. Is an Equals filter on a non-ID column containing unique values slower than using Load Record to load the record by ID?
  4. Bonus question (which might become a product suggestion): does Tulip plan to allow configuring a column’s values as unique (denying record insertion if the value for that column already exists in the table)?
The query → aggregation method you are using is a precursor to loading the record by ID normally, so it inherently has to be slower. And yes, the larger a table gets, the slower that query piece will be.

This is an extremely important piece of Tulip application building though, so I hope that Tulip has optimized this form of querying in some way. Perhaps your suggestion for #4 could be part of a related improvement. That said, there are times when I want to load the “First” record based on a subset of filters this way, so I would not always want to enforce uniqueness on the filter column.
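To see why a filter query over a large table can be inherently slower than a direct ID lookup, here is a minimal, Tulip-agnostic Python sketch (the table, column names, and sizes are invented for illustration, not any Tulip API): an index-backed lookup resolves in roughly constant time, while an unindexed equality filter has to scan every row.

```python
# Conceptual sketch: direct ID lookup (hash index) vs. equality-filter scan.
# "records" stands in for a Tulip table; all names here are illustrative only.

records = [{"id": f"ERP-{n}", "article_ref": f"REF-{n}"} for n in range(100_000)]

# Index on the ID column: each retrieval is a single O(1) lookup.
by_id = {r["id"]: r for r in records}

def load_record(record_id):
    """Analogous to a Load Record trigger: direct, index-backed."""
    return by_id.get(record_id)

def filter_equals(column, value):
    """Analogous to an Equals filter on an unindexed column:
    it must examine every row, O(n) in the table size."""
    return [r for r in records if r[column] == value]

direct = load_record("ERP-42")
via_filter = filter_equals("article_ref", "REF-42")
assert via_filter == [direct]  # same record, very different cost profile
```

If the filtered column is indexed server-side (as Beth indicates most Tulip columns are), the filter behaves closer to the indexed case; the remaining overhead is then mostly the query round-trip plus handling a possible multi-row result.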

1. Yes!

2. Most are. Columns of type Image and File are not, and neither are table links.

3. A filter may return multiple rows, so that could be slower. Otherwise, no, they should take essentially the same time.

Curious here what you are looking to accomplish - something like creating a filter + query to get Article reference Equals App Input as a list, and then load a record based on that list’s [0] element?

4. I don’t think we’ve seen a request for that before, so I’d say very likely no plans to do this / not on our radar currently. However, please do feel free to submit a feature request for this :slight_smile:
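Until something like a unique-column feature exists, uniqueness can be approximated in app logic: query for the candidate value first, and insert only when nothing comes back. A minimal sketch of that check-then-insert pattern, using an invented in-memory table rather than any real Tulip API:

```python
# Sketch of app-side uniqueness enforcement (names are illustrative,
# not a Tulip API): deny an insert if the value already exists.

table = []  # stands in for a Tulip table

def insert_unique(record, column):
    """Insert `record` only if no existing row shares its value in `column`."""
    if any(r[column] == record[column] for r in table):
        return False  # duplicate value: deny insertion
    table.append(record)
    return True

assert insert_unique({"id": "ERP-1", "article_ref": "REF-A"}, "article_ref") is True
assert insert_unique({"id": "ERP-2", "article_ref": "REF-A"}, "article_ref") is False
```

One caveat: without a server-side constraint this check is not race-proof; two apps inserting the same value at the same moment could both pass the check.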


Hi @Beth.

First, thanks for this very precise answer! Three things:

A) Is this indexing behavior documented somewhere? It is interesting and important to know. I’ve had customers express doubts about Tulip, and I’ve heard arguments such as “besides ID, it’s probably all unindexed” several times. Knowing this also lets one feel much more comfortable and in control when using Tulip tables.

B)

> Curious here what you are looking to accomplish - something like creating a filter + query to get Article reference Equals App Input as a list, and then load a record based on that list’s [0] element?

I forgot to mention it, but indeed, that’s what we aim to do: aggregating on the ID column and then getting the record by ID. This is documented in a video somewhere on the Community, but I couldn’t find it just now.

With customers who do use Tulip tables, the pattern “filter down to a single row, then get its ID with a Mode aggregation” is something we use extensively. It is probably something many new customers have to painfully rediscover by themselves… In such a case, we don’t really use Mode for its descriptive-statistics behavior, but rather as a kind of “single row or nothing” aggregation.
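The “single row or nothing” reading of Mode can be sketched as follows: when the filter column holds unique values, the filtered ID list has zero or one element, so Mode degenerates to “the one matching ID, or nothing”. A Python sketch with invented names (not a Tulip API):

```python
# Sketch of the filter → Mode aggregation → Load Record pattern.
# With unique article references, the filtered ID list has 0 or 1 elements,
# so Mode acts as a "single row or nothing" aggregation.
from statistics import mode

records = [
    {"id": "ERP-1", "article_ref": "REF-A"},
    {"id": "ERP-2", "article_ref": "REF-B"},
]

def mode_of_ids(article_ref):
    """Filter rows on article_ref, then aggregate the IDs with Mode."""
    ids = [r["id"] for r in records if r["article_ref"] == article_ref]
    return mode(ids) if ids else None  # one ID, or nothing to load

assert mode_of_ids("REF-B") == "ERP-2"   # single matching row → its ID
assert mode_of_ids("REF-X") is None      # no match → nothing to load
```

The returned ID (when present) is then what gets handed to Load Record in the app.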

C) A documentation page listing and textually describing such patterns / tips / tricks (e.g., single-record retrieval, auto-incrementing IDs, stopping trigger execution by transitioning to the current step…) all in one location would be very interesting. I insist on the textual and concise aspect of this, since text is so much faster and more efficient to consume than a video.
