In our effort to connect other systems, such as our ERP, to Tulip, we have found that our current Tulip setup is not suitable.
The standard for our other systems is that they have at least two environments: one for testing and one for production.
The standard setup would then be to connect their test environment to a test environment in Tulip, and prod to prod.
Since we only have one environment in Tulip, we have to connect the external test environment to the Tulip production environment, meaning data pushed from both the external test and prod environments is routed to the same table in Tulip.
This in turn can, and does, corrupt existing production data.
Thank you for your response.
We use middleware to set up the endpoints used in the Tulip connector functions.
The middleware uses some services on the ERP side. (This is not something I set up myself; I just use the endpoint.)
Back to my concern.
It is the other way around: data is pushed into the Tulip table, not triggered from within Tulip.
The setup you refer to is connecting to external sources and requesting data.
By requesting, you always know what you are connecting to, and I do use this function to connect to other systems when requesting data in Dev, Test and Prod.
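As a rough sketch of what I mean by requesting (Python, with made-up base URLs and endpoint name), the environment decides which system the connector calls, so there is no risk of mixing test and prod data:

```python
import json
import os
import urllib.request

# Made-up base URLs: the point is that the requesting side chooses
# which system it talks to.
BASE_URLS = {
    "dev": "https://erp-dev.example.com/api",
    "test": "https://erp-test.example.com/api",
    "prod": "https://erp.example.com/api",
}

def request_orders(environment: str) -> list:
    """Pull data from the chosen environment, so we always know what we connect to."""
    url = f"{BASE_URLS[environment]}/orders"
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read())

# While building and testing, the connector only ever points at the test system.
orders = request_orders(os.environ.get("INTEGRATION_ENV", "test"))
```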
We decided to use an external database to get two environments, instead of Tulip tables, because the same tables are shared across all environments. Having tables per environment with version control (the same as connectors) would be great and would make us think again about using tables at scale.
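To illustrate, here is a rough sketch (Python with SQLite, all names made up) of how an external database separates the environments: each one gets its own copy of the table, and writes are routed by environment rather than landing in one shared table.

```python
import os
import sqlite3

# Hypothetical connection targets: with an external database, test and prod
# each get their own copy of the table instead of one shared Tulip table.
DATABASES = {
    "test": "integration_test.db",
    "prod": "integration_prod.db",
}

def get_connection() -> sqlite3.Connection:
    """Route writes to the database that matches the calling environment."""
    env = os.environ.get("APP_ENV", "test")
    return sqlite3.connect(DATABASES[env])

# Data pushed from the ERP test system only ever lands in the test database,
# so it can no longer overwrite production records.
with get_connection() as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS work_orders (id TEXT PRIMARY KEY, status TEXT)")
    conn.execute(
        "INSERT OR REPLACE INTO work_orders (id, status) VALUES (?, ?)",
        ("WO-1001", "released"),
    )
```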
So this means that Tulip does not support a test environment and a deployment process from, e.g., Test to Production!
I will go for your solution, @youri.regnaud, and if data from the test environment is needed in Prod, I will export/import it via CSV.
Edit:
We use Boomi as middleware, and whatever is set up in dev/test is deployed to prod when approved. This includes the mapping to tables and table fields.
If I set up an external table, it needs to have the same unique IDs as the Tulip prod table.
Only the base URL changes between dev/test/prod.
In Tulip we cannot set the unique table IDs. Is that possible in other DB systems?
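For comparison, this is roughly what I would want to do (a sketch in Python with SQLite, table and IDs made up): insert rows where I choose the unique ID myself, so it can stay identical to the IDs already used in the Tulip prod table.

```python
import sqlite3

# Sketch only: the primary key is supplied by the caller rather than
# auto-generated, so the external table can reuse the exact record IDs
# that already exist in the Tulip prod table.
conn = sqlite3.connect("external_environment.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS material_master (
           id          TEXT PRIMARY KEY,  -- explicitly chosen, not auto-generated
           description TEXT
       )"""
)
conn.executemany(
    "INSERT OR REPLACE INTO material_master (id, description) VALUES (?, ?)",
    [
        ("MAT-000123", "Hex bolt M8"),   # same IDs as in the Tulip prod table
        ("MAT-000124", "Washer 8 mm"),
    ],
)
conn.commit()
```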
This seems to be a major shortcoming of the platform at present, which we have just run into as well… except in our case we have two separate instances running, one for DEV and one for PROD.
The lack of environments for the native data storage locations in the product (tables, completions, machine activity, and the users table) is something we are actively doing a lot of product thinking around. This is one of the key areas we see people addressing with multiple instances, which drives the need to import and export apps between instances. Longer term, we want to remove the need to use import/export to manage a more complex application lifecycle.
More to come here. In the last few months Tulip has created a team that will focus on this part of the product (lifecycle management, more broadly), and some of their first work will be going out this summer. Environments across the product are a pretty significant effort, and we will be knocking off a number of smaller things before we take on this monster.
Hi Pete, good to know that there is work underway to fix this. But that leaves the question of what to do in the meantime.
The two-instance “solution” does not seem to be a real solution, or only in very narrow cases. It might work if you do not make use of any resources that require the platform to define unique IDs, of which there are only very few. With no way to manually influence the IDs of, e.g., resources, you are at the mercy of whatever the import function (if it actually exists yet) is doing in the background.