How can a custom connector be authored? Can it only be done by using the HTTP/REST connector and then writing a corresponding service endpoint to invoke?
Hey Rick, I’m pretty sure that’s the case and it’s what I’m doing with Tulip + Highbyte.
There are some ways to use (abuse?) custom widgets to extend connectivity but I’m not sure what the limitations are.
The one limitation of course is that the widget needs to be active on a step.
I've always had in mind the ability to create HTTP connectors based on the OpenAPI specification of the source API. With more APIs in the future, I hope to be able to automate the creation of connectors and functions from the specification. For example, with the SAP ERP S/4 specifications (SAP API Business Hub) it would be possible to create a Tulip function library to accelerate SAP ERP/WMS connection to Tulip, with a huge market in Europe.
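To make the idea concrete, here is a minimal sketch of that generation step: walking an OpenAPI document and emitting one connector-function stub per operation. The output shape below is hypothetical, not Tulip's actual connector export format; a real tool would map these stubs onto whatever schema Tulip imports.

```python
# Sketch: generate connector-function stubs from an OpenAPI spec.
# NOTE: the stub dict shape is an assumption for illustration only,
# not Tulip's real connector format.
import json


def functions_from_openapi(spec: dict) -> list[dict]:
    """Turn each OpenAPI operation into a connector-function stub."""
    functions = []
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            functions.append({
                "name": op.get("operationId", f"{method}_{path}"),
                "method": method.upper(),
                "path": path,
                # Query/path parameters become connector inputs.
                "inputs": [p["name"] for p in op.get("parameters", [])],
            })
    return functions


# Tiny example spec (a stand-in for a real SAP API Business Hub document).
spec = {
    "paths": {
        "/orders/{orderId}": {
            "get": {
                "operationId": "getOrder",
                "parameters": [{"name": "orderId", "in": "path"}],
            }
        }
    }
}

stubs = functions_from_openapi(spec)
print(json.dumps(stubs, indent=2))
```

The same loop would scale to a full SAP spec with hundreds of operations, which is where the automation pays off.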
It doesn’t really lend it self easily to communications to systems behind a firewall/on-premise though (yes, I know you can create private/virtual networks, but that’s another thing to administer). Also, with something like the HTTP connector there’s really not a concept of “metadata” or a service description, is there? You kind learn what the return data looks like and then map it in. I’d prefer to see more standard way for connectors to describe their functionality and their inputs/outputs.
HTTP connectors in Tulip can be Cloud or On-Premise via the On-Premise Connector Host. For app builders it can be difficult to know the content of the hundreds of functions available to build their application. Several years ago I built a Swagger-like catalog of Tulip functions, a bit like an API portal, based on the connectors' JSON export.
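A catalog like that can be generated in a few lines once you have the export. The sketch below assumes a simplified export structure (connector name plus a list of functions with descriptions); a real Tulip export would need its actual fields mapped in.

```python
# Sketch: render a markdown catalog from an exported connector file.
# The export structure here is a simplifying assumption, not Tulip's
# real JSON export schema.
def catalog_markdown(connectors: list[dict]) -> str:
    lines = []
    for conn in connectors:
        lines.append(f"## {conn['name']}")
        for fn in conn.get("functions", []):
            lines.append(f"- **{fn['name']}**: {fn.get('description', '')}")
    return "\n".join(lines)


# Illustrative export with one connector and one function.
export = [
    {"name": "ERP", "functions": [
        {"name": "getOrder", "description": "Fetch one order by ID"},
    ]},
]
print(catalog_markdown(export))
```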
Hey @RickBullotta -
@Richard-SNN and @youri.regnaud mostly hit this one out of the park - generally, the way customers are solving this problem today is to either interact with an existing HTTP service for the other system or interact with a middleware service to host these endpoints if they don’t exist.
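For the middleware route, the service only needs to expose a plain HTTP endpoint that the Tulip HTTP connector can call. Here's a minimal stdlib sketch; the `/orders/` path and the response shape are made up for illustration, and a real service would query the underlying system instead of returning a canned payload.

```python
# Sketch: a minimal middleware endpoint for a Tulip HTTP connector to call
# when the target system has no HTTP API of its own.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class MiddlewareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/orders/"):
            order_id = self.path.rsplit("/", 1)[-1]
            # In a real service this would query the legacy/backend system.
            body = json.dumps({"orderId": order_id, "status": "open"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()


# To run it:
#   HTTPServer(("0.0.0.0", 8080), MiddlewareHandler).serve_forever()
```

On the Tulip side this becomes an ordinary HTTP connector function pointed at the middleware host.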
There is work on the roadmap to allow authoring of these connectors and connector functions directly from HTTP requests, which will enable some really interesting use cases like auto-creating connectors and associated docs from an OpenAPI spec.
This is unquestionably a gap right now. Our vision for connectors is that, rather than driving data directly into solutions, they populate unified models of things. So the data pipeline becomes
Connector > [model of an order] > solution that uses orders. This allows the data source to be replaced without any downstream impact, along with allowing far more flexible contextualization of an order.
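That pipeline can be sketched as a thin mapping layer: the source-specific payload is converted into a unified `Order` model, and the solution only ever sees the model. The SAP-style field names in the example payload are illustrative assumptions, not a documented mapping.

```python
# Sketch: Connector > [model of an order] > solution.
# The raw payload field names (VBELN, GBSTK, KWMENG) are illustrative;
# a real integration would map the actual source fields.
from dataclasses import dataclass


@dataclass
class Order:
    order_id: str
    status: str
    quantity: int


def order_from_sap(payload: dict) -> Order:
    # One mapper per data source; swapping the source only changes
    # this layer, never the downstream solution.
    return Order(
        order_id=payload["VBELN"],
        status=payload["GBSTK"],
        quantity=int(payload["KWMENG"]),
    )


sap_payload = {"VBELN": "0000012345", "GBSTK": "A", "KWMENG": "10"}
order = order_from_sap(sap_payload)
```

Replacing SAP with another ERP would mean writing a second `order_from_*` mapper while every solution built against `Order` stays untouched.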
Re firewalled systems-
@youri.regnaud is totally right here. On-Prem Connector Host is a dockerized build of the service that normally lives in the cloud and facilitates communication from Tulip to other systems. This enables access in a more locked down way to systems that customers otherwise wouldn’t want to open to the wider internet.
Would be good to do a deep dive at some point on the contextualization use cases.
Especially for contextualizing data generated automatically by machines, for example.