Right now it is not possible to run SQL statements against Tulip’s native tables.
This significantly limits their usefulness and forces extensive (and often fragile) workarounds for operations that are more advanced, but in terms of complexity still basic.
Good examples are simple cross-table joins and aggregates, as well as batch updates.
Batch updates are outright impossible at the moment and require tinkering with a custom widget (Looper) to get done. If you need to update many records, say hello to a sequential queue in which each record update is sent to the server one after the other, unnecessarily and significantly driving up response times for the user. On top of that, you have to deal with all the overhead of making the Looper behave as intended.
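To make the difference concrete, here is a minimal sketch of the two approaches. SQLite is used purely as a stand-in (Tulip's internals are not exposed), and the `work_orders` table and `status` column are hypothetical names for illustration:

```python
import sqlite3

# Stand-in for a Tulip table; SQLite is used only for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE work_orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO work_orders VALUES (?, ?)",
                 [(i, "open") for i in range(1, 101)])

# Today's Looper-style workaround: one update call per record, sequentially.
# In Tulip each iteration is a separate server round trip.
for (record_id,) in conn.execute("SELECT id FROM work_orders").fetchall():
    conn.execute("UPDATE work_orders SET status = 'closed' WHERE id = ?",
                 (record_id,))

# Reset the demo data so the batch statement has something to do.
conn.execute("UPDATE work_orders SET status = 'open'")

# What native SQL access would allow: one statement, one round trip.
cur = conn.execute("UPDATE work_orders SET status = 'closed' "
                   "WHERE status = 'open'")
print(cur.rowcount)  # 100 records updated in a single call
```

One statement replaces one hundred sequential requests, which is exactly the latency problem the Looper workaround can't solve.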
Cross-table joins and aggregates are also not possible right now, and having seen the complexity of the "linking" feature, I wonder how something like it could ever make these things "easier" in any way. Your best bet in this area is to introduce data redundancy by adding extra columns to your tables and using those together with Tulip's current aggregations to get around some of the present limitations. On top of the redundancy, this again requires expensive additional server calls, which also hurts operator efficiency.
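For comparison, here is what such a query looks like as plain SQL, again sketched against SQLite with hypothetical `orders` and `defects` tables standing in for Tulip tables:

```python
import sqlite3

# Two hypothetical tables standing in for linked Tulip tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders  (id INTEGER PRIMARY KEY, station TEXT);
    CREATE TABLE defects (order_id INTEGER, count INTEGER);
    INSERT INTO orders  VALUES (1, 'A'), (2, 'A'), (3, 'B');
    INSERT INTO defects VALUES (1, 2), (2, 3), (3, 5);
""")

# A single join + aggregate replaces redundant columns and the extra
# aggregation round trips needed in Tulip today.
rows = conn.execute("""
    SELECT o.station, SUM(d.count) AS total_defects
    FROM orders AS o
    JOIN defects AS d ON d.order_id = o.id
    GROUP BY o.station
    ORDER BY o.station
""").fetchall()
print(rows)  # [('A', 5), ('B', 5)]
```

One query, no duplicated columns, no intermediate aggregation tables.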
The usual line of thought seems to be that SQL is too complex for the average user, so this appears to be a deliberate design decision by the Tulip team.
I would like to question that decision, and hence I am raising this topic here in the community.
Why are you not exposing the native Tulip tables to the SQL connector functionality or considering adding an option to write SQL as part of your internal query and recordset engine?