Has anyone forwarded data from Tulip to, say, S3 buckets?

How hard is it to get data out?

Interesting question! We have not had customers try that before, but S3 does have an API that could potentially be used by Tulip Connectors. Before trying to answer in detail, can you provide more information on the problem you are attempting to solve? Depending on the use case, there may be other options for storing data that are better suited, such as PostgreSQL.

I guess it comes down to data portability. At some point, say you want to do some machine learning or analytics on a year's worth of sensor data and display it in a third-party BI tool (QuickSight / QlikView, what have you).

Yes, that’s definitely a valid use case and Tulip gives you a few options:

I would recommend starting with PostgreSQL and writing a connector function in Tulip to save data to a database.

REF: https://support.tulip.co/en/articles/2986284-how-to-create-a-test-database-for-a-connector-function

From there you can connect and view the data in whatever BI tool you prefer.
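To make that concrete, here is a minimal sketch of the kind of table a Tulip SQL connector function could write into, shown from Python with psycopg2. The table name, columns, and connection details are all hypothetical; your actual schema and the connector function itself are configured in Tulip, so this just illustrates the INSERT the connector would run.

```python
# Minimal sketch: a PostgreSQL table a Tulip SQL connector function could
# write to, plus the equivalent INSERT done from Python with psycopg2.
# Table name, columns, and credentials are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect(
    host="your-db-host",      # hypothetical host
    dbname="tulip_data",      # hypothetical database
    user="tulip",
    password="your-password",
)

with conn, conn.cursor() as cur:
    # One-time setup: a simple table for sensor readings.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS sensor_readings (
            id          SERIAL PRIMARY KEY,
            station     TEXT NOT NULL,
            value       DOUBLE PRECISION NOT NULL,
            recorded_at TIMESTAMPTZ DEFAULT now()
        )
    """)
    # A Tulip SQL connector function would run an INSERT like this,
    # with station and value supplied as connector inputs.
    cur.execute(
        "INSERT INTO sensor_readings (station, value) VALUES (%s, %s)",
        ("line-1", 42.7),
    )

conn.close()
```

Once data is landing in a table like this, pointing QuickSight, QlikView, or any other BI tool at the database is just a standard PostgreSQL connection.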


If you prefer to use S3, the same principle would apply, but you will have to research what the S3 API provides. Depending on that, you may be able to write a Connector to update S3 directly (as with PostgreSQL). If that doesn’t provide what you need, the last option would be to use AWS Lambda / API Gateway to process the data however you’d prefer. You would then use Tulip Connectors to send data to your custom API, which would process the data and send it to S3.
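For the Lambda / API Gateway route, here is a minimal sketch of what the handler might look like: API Gateway receives a POST from a Tulip HTTP connector and the function writes the payload to S3 with boto3. The bucket name, key layout, and payload shape are assumptions for illustration only.

```python
# Minimal sketch of the Lambda option: API Gateway (proxy integration)
# receives a POST from a Tulip HTTP connector, and this handler writes
# the JSON payload to S3. Bucket name and payload fields are hypothetical.
import json
import uuid

import boto3

s3 = boto3.client("s3")
BUCKET = "my-tulip-sensor-data"  # hypothetical bucket


def handler(event, context):
    # With proxy integration, API Gateway passes the request body as a string.
    payload = json.loads(event["body"])

    # Store each reading as its own object, keyed by station plus a UUID,
    # so a downstream BI or analytics tool can scan the prefix later.
    key = f"readings/{payload.get('station', 'unknown')}/{uuid.uuid4()}.json"
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(payload).encode("utf-8"),
        ContentType="application/json",
    )

    return {"statusCode": 200, "body": json.dumps({"stored": key})}
```

In the Tulip Connector you would then point an HTTP connector function at the API Gateway endpoint and map your app or sensor fields into the request body.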


So, no way to get at the ‘base’ tables behind the scenes?

How would you do this in conjunction with, say, OPC UA data? A headless app with a trigger every minute?

Direct access to Tables is not available at this time, but that could change in the future :). You can export to CSV, although that is a manual task, of course. If your instance has machine monitoring available, you can write actions in an app that are triggered on 'machine output' instead of collecting the data on a timer.
