Table Aggregations on whole Table / Query limit 1000

Hi everyone -

We really appreciate the continued feedback about the importance of this product suggestion! I wanted to let you know that the team is aware and actively working on this.

@doneil The fear of slowing things down is why the query limit exists. To answer your concern: it would not slow down an entire instance, just the aggregations themselves.

Is there an opportunity for a workaround by using analytics instead of query-aggregations?

Also, which 1000 results would be returned with no sort applied? The oldest 1000 by creation date? The 1000 most recently created? Or the 1000 most recently updated?

@Beth Thank you for clarifying the potential impact.

Since it shouldn’t impact the performance of the overall instance, I definitely think it would be better to leave it up to developers to decide the risk they want to take with their own setups. Of course, there can be warnings and recommended configurations.

I’m interested in understanding what the team at Tulip is considering to solve this limitation.

This is a very important feature requirement for us too. I had no idea that aggregations could return wrong results until very recently, through trial and error, and it makes it very hard to present truthful data on larger data sets (e.g. a constant flow of robot data).

I’d be happier to see developers allowed to do this, with warnings shown when setting the query limit, or errors when running an app if the limit is likely to cause issues.


Would very much like to see queries without limits (still as an option), or the ability to run an aggregation on the entire table. For us, this is really limiting our viable use cases for Tulip Tables.


Don’t forget you can use analytics instead of queries to avoid the 1k limit. I haven’t found a case yet where this hasn’t resolved the limit problem.

Hi everyone. For manufacturers like us, flexibility in data analytics is very important, especially when improving a process. If there is insufficient data, as other users have mentioned, the results can be wrong, and users stop trusting our analytics; eventually they stop using the system altogether. I agree with others here that performance is important, but that is exactly why we chose the cloud: scalable resource capacity to address performance issues. The 1000-record query limit is therefore not helping us. What we ask of the Tulip DevOps and architecture teams is to open this up so users can define the limit themselves, and we will manage it. If you have a better suggestion, please let us know, because this is a critical requirement for us.

Hoping for the soonest possible solution on this.

Hey everyone,

I want to follow up on this long-standing thread with an update that will hopefully be exciting for most folks here:

In r274, the next release, the record limit for the runAggregation endpoint of the Tulip API will be increased to 100,000. This means you can use connector functions within Apps to aggregate over up to 100,000 records. See below for a quick demo of how to do this.
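As a rough illustration of what a connector-style call to the runAggregation endpoint might look like, here is a minimal sketch in Python. The endpoint name comes from the announcement above, but the URL path, parameter names, and IDs are assumptions for illustration only; check the Tulip API documentation for your instance before relying on them.

```python
# Hypothetical helper that assembles a runAggregation request.
# The path ("/api/v3/tables/.../runAggregation") and the parameter
# names ("function", "limit") are assumptions, not the official spec.

def build_run_aggregation_request(base_url: str, table_id: str,
                                  function_id: str, limit: int = 100_000):
    """Return the URL and query parameters for a runAggregation call.

    `function_id` names an aggregation defined on the table; `limit`
    caps how many records the aggregation scans (up to 100,000 as of
    r274, per the announcement above).
    """
    url = f"{base_url}/api/v3/tables/{table_id}/runAggregation"
    params = {"function": function_id, "limit": limit}
    return url, params

# Example: build (but don't send) a request against a placeholder
# instance, table, and aggregation name.
url, params = build_run_aggregation_request(
    "https://your-instance.tulip.co", "myTable", "sum_cycle_time"
)
```

In a connector function you would wire the same pieces (host, table ID, aggregation, limit) into the HTTP request fields rather than Python code; the sketch just shows how the parts fit together.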

(There seems to be a little video embedding issue with the community platform. You’ll likely need to click the link and open the video in a new tab.)



Raise the roof! The ceiling is quite literally higher for Tulip applications now!

Great work by the team, and a great tutorial video, @stefan!


But I have a need for 100,001 records :stuck_out_tongue:

Joking aside, thank you for working on this! Upping the limit will come in handy for sure!


Does this mean it will also be available in LTS12?

@madison.bynoe This change will be part of LTS12, yes :slight_smile: