Right now it is not possible to run SQL statements against Tulip’s native tables.
This significantly limits its functionality and forces extensive (and often problematic) workarounds for tasks that are more advanced, but in terms of complexity still basic.
Good examples are simple cross-table joins and aggregates, as well as batch updates.
Batch updates are outright impossible at the moment and require tinkering with a custom widget (Looper) to get done. If you need to update many records, say hello to a sequential queue in which each record update is sent to the server one after the other, unnecessarily and significantly driving up response times for the user. On top of that, you have to deal with all the overhead of making the Looper behave as intended.
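The cost of that sequential queue is easy to see in a sketch. Assuming each record update is one server round trip (the `update_record` stub below is purely illustrative and simulates latency with a sleep; it is not Tulip's real API), dispatching the same updates concurrently overlaps the round trips instead of stacking them:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a single table-record update call; the 50 ms
# sleep simulates the network round trip each Looper iteration pays.
def update_record(record_id, fields, latency=0.05):
    time.sleep(latency)
    return {"id": record_id, **fields}

records = [(i, {"status": "done"}) for i in range(20)]

# Sequential queue, as the Looper widget forces today: 20 round trips back to back.
t0 = time.perf_counter()
sequential = [update_record(rid, f) for rid, f in records]
t_seq = time.perf_counter() - t0

# Concurrent dispatch: the same 20 updates overlap their round trips.
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:
    concurrent = list(pool.map(lambda rf: update_record(*rf), records))
t_conc = time.perf_counter() - t0

print(f"sequential: {t_seq:.2f}s, concurrent: {t_conc:.2f}s")
```

A true server-side batch update would do even better than the concurrent version, since it collapses everything into a single request.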
Cross-table joins and aggregates are also not possible right now, and having seen the complexity of the “linking” feature, I wonder how it could ever make these things “easier” in any way. Your best bet in this area is to introduce data redundancy by adding extra columns to your tables and combining those with Tulip’s current aggregations to work around some of the present limitations. On top of the data redundancy, this again introduces expensive additional server calls, which also hurts operator efficiency.
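For context, here is the kind of query that is currently impossible against native Tulip tables. The table and column names are invented for illustration, and `sqlite3` merely stands in for what a SQL-exposed table layer could offer:

```python
import sqlite3

# In-memory stand-in for two native tables; names are made up.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders  (id INTEGER PRIMARY KEY, station TEXT, qty INTEGER);
CREATE TABLE stations(name TEXT PRIMARY KEY, line TEXT);
INSERT INTO orders   VALUES (1,'S1',10),(2,'S1',5),(3,'S2',7);
INSERT INTO stations VALUES ('S1','Line A'),('S2','Line B');
""")

# One cross-table join plus aggregate -- the thing that today requires
# redundant columns and separate aggregation calls per table.
rows = con.execute("""
    SELECT s.line, SUM(o.qty) AS total_qty
    FROM orders o
    JOIN stations s ON s.name = o.station
    GROUP BY s.line
    ORDER BY s.line
""").fetchall()
print(rows)  # [('Line A', 15), ('Line B', 7)]
```

One statement, one round trip, no duplicated columns.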
The usual line of thought seems to be that SQL is too complex for the average user, so this appears to be a deliberate design decision by the Tulip team.
I would like to question this decision, hence I am raising the topic here in the community.
Why are you not exposing the native Tulip tables to the SQL connector functionality, or considering an option to write SQL as part of your internal query and recordset engine?
Thanks for the write-up here. I couldn’t agree more. An ETL layer is pretty critical to enabling robust data architectures. We have spent a good chunk of 2022 and will spend a lot of 2023 adjusting architecture to make this possible. The first step is to unify how completions, machine data, tables, and user data are stored so this ETL layer can be simplified as much as possible. This work is well underway, but a gigantic lift to get just right.
There is also a whole lot of work going on around analytics to help enable more powerful visualizations.
Hey @Pete_Hartnett
Any update on this (much desired) capability?
ATM, I’d love to be able to run simple SELECT, INSERT, and UPDATE queries in bulk (touching multiple rows). Is there perhaps a beta version planned to offer some initial capabilities before full SQL support rolls out?
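To make the ask concrete, a bulk UPDATE is a single statement touching many rows at once. Again using `sqlite3` as a stand-in with made-up table and column names:

```python
import sqlite3

# In-memory stand-in for a native table; schema is illustrative only.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parts (id INTEGER PRIMARY KEY, status TEXT)")
con.executemany("INSERT INTO parts VALUES (?, ?)",
                [(i, "open") for i in range(1, 101)])

# One UPDATE touches all 100 rows in a single statement -- no per-record loop.
cur = con.execute("UPDATE parts SET status = 'closed' WHERE id <= 100")
print(cur.rowcount)  # 100

remaining = con.execute(
    "SELECT COUNT(*) FROM parts WHERE status = 'open'").fetchone()[0]
print(remaining)  # 0
```

Compare that to 100 sequential per-record update calls through a Looper widget.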
I’m watching this feature request closely! The API is useful for getting data out of Tulip, but direct SQL capability would be much better. Even just read-only access would be a huge win.
Looks like the underlying application framework does not support this at present; the whole thing works quite differently from SQL. For the techies: have a look at Meteor.js.
Thank you all for your input and discussion on this! I was talking with @stefan about this, and the current priority is improving the query experience for folks who do not know SQL. I know this does not help you right now, as you are looking for this more advanced capability, but the team is aware of the need and will be working on it at a later time.
That being said, Universal Template for analyses (currently in beta) will be officially released soon. This allows you to switch between visualization types by decoupling query and visualization of the data. The Universal Template also supports all analysis types and Tulip data sources (Completions, Table data, and Machine data). This is a big step in the work I mentioned above in improving query experience for all users.
Please keep this discussion rolling with any additional information or feedback you have for Stefan and team, as we know this product request is important to you!