Return actual record count

Tulip always returns a record count of 1K even though our actual record count exceeds 1K. This gives us wrong data for further analysis. Is there a way to return the actual record count when I execute a query?

Regarding the 1,000 record limit on queries (and thus aggregations), we are working to expand and/or remove this limit in the near future.

This limit is in place to ensure the performance of your apps. We have users with tables of 50M+ records, so going through each of those records, checking the filter values, sorting the data, and then doing a summation is an expensive operation. Right now aggregations run on a ~5s cadence, which wouldn't be possible with a dataset of that size.

The total number of records can be pulled from the Tulip API with the /tables/{tableId}/records endpoint and the includeTotalCount flag set to true. More information about the Table API can be found in How to Use the Table API | Tulip Knowledge Base.
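For illustration, here is a minimal Python sketch of that call. The /tables/{tableId}/records endpoint and the includeTotalCount flag come from the answer above; the base URL, the auth scheme, and exactly where the count shows up in the response are assumptions you should confirm against the Table API docs for your instance:

```python
# Minimal sketch (not production code): pull the total record count for a
# Tulip table via GET /tables/{tableId}/records with includeTotalCount=true.
import requests

BASE_URL = "https://your-instance.tulip.co/api/v3"  # assumed base URL
TABLE_ID = "YOUR_TABLE_ID"
API_KEY = "YOUR_API_KEY"                            # assumed credential pair
API_SECRET = "YOUR_API_SECRET"

resp = requests.get(
    f"{BASE_URL}/tables/{TABLE_ID}/records",
    params={"includeTotalCount": "true", "limit": 1},  # small payload, we only want the count
    auth=(API_KEY, API_SECRET),                        # assumed HTTP Basic auth
    timeout=30,
)
resp.raise_for_status()

# Depending on the API version, the total may come back in a response header
# (e.g. X-Total-Count) or alongside the records in the body - inspect both
# and adjust once you have confirmed the actual shape.
print(resp.headers.get("X-Total-Count"))
print(resp.json())
```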

Pete


Thanks for your reply. In our use case, SAP sends an exploded BOM to MES for a given work order. One work order could contain more than 1K bill of material lines. Today, we can only call the Tulip API one record at a time and can't push the whole BOM as a batch. If an API call fails, we need to know the error, and it's not practical to receive 100 emails at once. So I am thinking of comparing the records between SAP and MES, using an aggregation in Tulip, to make sure there is no mismatch. Here is the flow:

  1. SAP to push all the BOM one by one to MES.
  2. SAP to keep record counts.
  3. SAP to run Tulip query to pull total number of records for a given work order.
  4. SAP to compare the total number of records (#3) with its own record count (#2), and send us an alert if the two counts do not match (see the sketch below).
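To illustrate steps 3 and 4, here is a rough Python sketch of the comparison. get_mes_record_count is a hypothetical helper that would wrap the includeTotalCount call shown earlier in the thread; how the query gets filtered down to a single work order depends on the Table API's filter parameters, so treat that part as an assumption:

```python
# Sketch of steps 3-4: compare SAP's own BOM line count for a work order with
# the count Tulip reports for that work order.
def get_mes_record_count(work_order_id: str) -> int:
    """Return the number of BOM records Tulip holds for this work order.

    Hypothetical helper: wrap the includeTotalCount call shown above and add a
    filter on the work order field (check the Table API docs for filter syntax).
    """
    raise NotImplementedError

def check_bom_counts(work_order_id: str, sap_count: int) -> None:
    mes_count = get_mes_record_count(work_order_id)
    if mes_count != sap_count:
        # Replace with your real alerting channel (email, SAP message, etc.)
        print(f"MISMATCH for {work_order_id}: SAP={sap_count}, MES={mes_count}")
    else:
        print(f"OK for {work_order_id}: {sap_count} records on both sides")
```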

Hey @chloe.lau -

So the underlying ask is kinda 2-fold here:

  1. Expose endpoints to add/update multiple records at once
  2. Increase the max query and aggregation size.

Is this right? How many elements might be in a BOM?

I can provide a little context about each feature:

  1. We have talked about adding bulk edit features over the API but have run into one key question: what do we do if 1 of 10 records can't be updated? Do we throw an error response? If so, do we still respect the changes to the other 9 records? What if 1 of those 10 has a duplicate ID error and a second one has a type mismatch error? Do we throw both codes?
  2. There is a bunch of work going on right now to extend the range of queries and aggregations (along with a lot more around data management). This project will likely extend to the end of the year, if not into 2023.

Pete

Hey @Pete_Hartnett ,

Yes, you are right. I am asking for items 1 and 2 as you mentioned.
RE: 1. We have talked about adding bulk edit features over the API but have run into one key question: what do we do if 1 of 10 records can't be updated? Do we throw an error response? If so, do we still respect the changes to the other 9 records? What if 1 of those 10 has a duplicate ID error and a second one has a type mismatch error? Do we throw both codes?
In my opinion, if a user executes the "POST" method to add multiple records, I would suggest adding one more parameter to the URL - ContinueProcessNextRecordIfError. The default value for this parameter would be false, and the user would have the option to set it to true. If it is set to true, the system would proceed to the next record whenever a record has an error, and Tulip would list all of the errors upon completion.
The same logic would apply to the "PUT" method.
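To make the proposed semantics concrete, here is a client-side Python sketch of what ContinueProcessNextRecordIfError=true would behave like. Today this loop has to live on the caller's side, since there is no bulk endpoint; the URL, auth, and payload shape below are assumptions:

```python
# Client-side emulation of the proposed behavior: keep posting the remaining
# records even if one fails, then report every error at the end.
import requests

def post_bom_records(base_url, table_id, auth, records):
    errors = []
    for record in records:
        resp = requests.post(
            f"{base_url}/tables/{table_id}/records",  # assumed create-record endpoint
            json=record,                              # assumed payload shape
            auth=auth,
            timeout=30,
        )
        if not resp.ok:
            # Equivalent of ContinueProcessNextRecordIfError=true: remember the
            # failure and move on to the next record instead of aborting.
            errors.append({
                "record_id": record.get("id"),
                "status": resp.status_code,
                "body": resp.text,
            })
    return errors  # one consolidated error list "upon completion"
```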

Hey @chloe.lau -

This is a super elegant approach to error catching. I hadn't thought about a flag that has the backend loop through each of those records. There are some performance losses from not doing the table write all in one query, which might be meaningful at the scale of thousands of records, but that is something our data team can look at.

Thanks again for the great idea.
Pete