How do you modify an HTTP target function?

I can’t quite figure out what the strategy should be for modifying an HTTP function, for example, to add a parameter to it. Even if the function is not in use, it doesn’t seem to let me. What strategy do people use to grow/change the API?

To add an input or output, you have to clone the function and create a new one. This is to prevent modification of a connector from breaking existing apps.

Hey @wz2b

@Ethan hit the nail on the head. Right now, you need to duplicate connectors to add or remove inputs or outputs. This was an intentional choice to make sure production apps weren’t unintentionally broken.

There is work going on right now to add more robust versioning of connectors to resolve this pain point. I don’t have a release number to share with the community just yet, but expect some big improvements in the near future.

Pete

Sure. This makes sense. I was just looking to see what strategies people use to cope.

I was a little surprised to find that I couldn’t even change these inputs/outputs if the function wasn’t being used anywhere. For now I’ll make sure I name my connectors _v1, _v2, etc.; then if I need to change something, I can make a new version and go back through and update whatever is using that connector.

You know, I’d also add to my wish list that it might be good to allow inputs to have a default value.

A default value would help out with my use case :slight_smile: (see Simplify method of sending NULL to connector functions)
I was pondering whether making an input required or not would be a solution.

Hey @wz2b

I just created a feature request to be able to set default values for each input. This is something I hadn’t thought about, but absolutely something I would use if it existed. @mellerbeck hit on the way I would currently do this; kind of a pain, though.

Thanks for the great idea, keep ’em coming!
Pete

Hey @wz2b

These are very good suggestions & topics we are actively looking into, specifically giving our users the opportunity to modify connector function inputs & outputs if they are not being used by an App.

Out of curiosity, how often do you have to make changes to existing connector functions, and what is usually the nature of those changes?

Thanks,
Sagar

That’s a great follow-up question! I think, based on where I am with this today, it’s heavily weighted toward the very beginning, when I’m just starting to conceive of what my API is going to look like. At that point, it isn’t being used by more than maybe one test Step, and it’s pretty fluid. What made me ask this in the first place is that I was writing the server side at the same time and was going back and forth: let me rename this field, let me remove this field because I don’t actually need it, let me add this parameter. That’s the point at which I got a little annoyed at having to delete and recreate the function every time.

I get the idea of the function being read-only once it’s in use. I think that’s a good short-term solution. Longer term, I think it’d make more sense to have a legit ‘refactor’ paradigm where you can just break the function calls (in the steps) and a red ! appears next to the step telling the user “this needs attention”, for example when a required function parameter was added but the caller doesn’t provide a value yet.

This is partly why the separate thread exists on whether there should be ‘default’ values and, for that matter, nullable parameters.

Should I explain exactly what I’m doing here?

I wrote a little Go server. It listens on HTTP/HTTPS specifically for pushes from Tulip, converts them to MQTT messages, and passes them on to a broker. It’s pretty generic in the sense that it doesn’t really care about the shape of the input data as long as it’s JSON.
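
If it helps, the core of that server is roughly the sketch below. This is a minimal illustration rather than the real thing: the broker address, topic, and listen port are placeholders, and TLS, credentials, and most error handling are trimmed out.

```go
package main

import (
	"encoding/json"
	"io"
	"log"
	"net/http"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Placeholder broker address; the real setup would use TLS and credentials.
	opts := mqtt.NewClientOptions().AddBroker("tcp://broker.example.local:1883")
	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	http.HandleFunc("/push", func(w http.ResponseWriter, r *http.Request) {
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, "read error", http.StatusBadRequest)
			return
		}
		// Don't care about the shape of the data, only that it's valid JSON.
		if !json.Valid(body) {
			http.Error(w, "expected JSON", http.StatusBadRequest)
			return
		}
		// Republish the payload verbatim to the broker.
		client.Publish("tulip/push", 0, false, body).Wait()
		w.WriteHeader(http.StatusAccepted)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```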

I’m working on the other direction now: being able to add/update rows in Tables, also via MQTT message. Once I have that working I’ll have a fully functional Tulip-to-MQTT adapter that can be placed somewhere safe (e.g. on a DMZ network), so that enterprise apps can communicate with Tulip without Tulip being able to reach into the I.T. or O.T. networks. This is my solution for the security concerns … I’ll probably make it support Machine status eventually, too.
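
That direction will probably look something like the sketch below: subscribe to a topic and turn each message into a Table Records API call. Everything specific here is a placeholder or assumption on my part (the endpoint path, table ID, topic, and credentials), so check the Tables API docs before treating it as anything more than a sketch.

```go
package main

import (
	"bytes"
	"log"
	"net/http"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

const (
	// Placeholders: instance URL, table ID, and API credentials are assumptions.
	tulipURL  = "https://your-instance.tulip.co/api/v3/tables/TABLE_ID/records"
	apiKey    = "apikey.2.example"
	apiSecret = "secret"
)

func main() {
	opts := mqtt.NewClientOptions().AddBroker("tcp://broker.example.local:1883")
	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}

	// Each message payload is expected to already be a JSON record body
	// matching the table's column IDs.
	handler := func(c mqtt.Client, m mqtt.Message) {
		req, err := http.NewRequest(http.MethodPost, tulipURL, bytes.NewReader(m.Payload()))
		if err != nil {
			log.Println("build request:", err)
			return
		}
		req.SetBasicAuth(apiKey, apiSecret)
		req.Header.Set("Content-Type", "application/json")
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Println("create record:", err)
			return
		}
		resp.Body.Close()
	}

	if token := client.Subscribe("tulip/tables/create", 0, handler); token.Wait() && token.Error() != nil {
		log.Fatal(token.Error())
	}
	select {} // block forever while the subscription runs
}
```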

I just want an MQTT custom widget :slight_smile: still working on it :slight_smile:

@wz2b
Have you considered using an EMC running Node RED for this purpose?

We do have a Node RED node for working with Tulip Tables, and it would be a fairly simple flow to pipe into and out of an MQTT connection. @mellerbeck’s post here provides a good overview of using Tables in Node RED: From Nodered to Tulip Custom Widget Gauge

Hi,

We already do MQTT-to-REST mapping with the Solace messaging broker, in both directions. It works well without customization or code, since Solace is not only an MQTT broker but a messaging broker supporting different protocols.

Hey @wz2b !

I am fully aligned with what you proposed here. We are working on functionality that will allow users to add/remove/change inputs & outputs if they are not being used by an App. Additionally, we are working towards making it possible to add function outputs even if a function is used by an App.

Once we ship the initial release of this functionality, would you be interested in testing it while it’s in beta?

Thanks,
Sagar

Hey @wz2b !

This sounds like a good solution for building Tulip <> MQTT communication. We will be looking into building support for MQTT natively within Tulip.

I want to understand your use case here. How are you using Tulip? What are the different enterprise apps you are connecting Tulip with?

Thanks,
Sagar

I’m not sure what an EMC is, but I’m actually using Node Red extensively. In fact, I’m making some changes to the node-red-tulip-api node library that I intend to submit as a pull request very shortly.

I have spent a lot of time thinking through I.T./O.T. separation, with some insight gathered from NIST 800-82, -53, and -171. My philosophy regarding all of this is that the O.T. network (and probably the I.T. network as well) should mainly be pushing to cloud services, not accepting pushes from them. If Tulip had the ability to push directly to an MQTT server on the I.T. network, I would probably set up a dedicated broker for that, with its own security context. Then I would set up a different piece of software on the inside to absorb those messages and react to them. More on this in a bit.

Yeah, absolutely. I would at least be willing to try it. We’re not your typical customer - we aren’t a manufacturer, and in fact we’re a non-profit. We are building a testbed here to show small to medium sized businesses different I4.0 technologies and how they can practically integrate all these different systems. I’m speaking for myself here, not my organization, but my personal opinion is that Tulip fits pretty nicely into that, because you’ve gone out of your way to provide integration points.

So being able to show off / demonstrate the latest integration options is something I’m definitely interested in.

What I have found about that is that mapping simple, short requests works, but it breaks down when you need to also apply logic to the API responses. The two main cases I run into are mapping table/column metadata back onto the response and, probably the bigger challenge, that some paging is required unless the table is very short.
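
To make “logic on the responses” concrete, paging is the kind of loop I mean. Here’s a rough Go sketch; the endpoint shape and the limit/offset parameter names are assumptions on my part rather than something verified against the docs, so treat it as illustration only.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

const pageSize = 100

// fetchAllRecords pages through a table's records until a short (or empty)
// page comes back. The endpoint and the limit/offset parameters are assumptions.
func fetchAllRecords(base, apiKey, apiSecret string) ([]map[string]any, error) {
	var all []map[string]any
	for offset := 0; ; offset += pageSize {
		url := fmt.Sprintf("%s?limit=%d&offset=%d", base, pageSize, offset)
		req, err := http.NewRequest(http.MethodGet, url, nil)
		if err != nil {
			return nil, err
		}
		req.SetBasicAuth(apiKey, apiSecret)
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			return nil, err
		}
		var page []map[string]any
		err = json.NewDecoder(resp.Body).Decode(&page)
		resp.Body.Close()
		if err != nil {
			return nil, err
		}
		all = append(all, page...)
		if len(page) < pageSize {
			return all, nil // last page
		}
	}
}

func main() {
	// Placeholder instance URL, table ID, and credentials.
	records, err := fetchAllRecords(
		"https://your-instance.tulip.co/api/v3/tables/TABLE_ID/records",
		"apikey.2.example", "secret")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("fetched", len(records), "records")
}
```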

@wz2b That testbed use case makes sense. Do you have a reference architecture you’re working from? I’m thinking something like:

I would say what we’re working on is similar to IIRA, but not quite. Our layering nomenclature is similar but not identical; we conceive of:

O.T. / I.T. / Cloud

Where a lot of what IIRA calls “Platform” is either in I.T., Cloud, or (in most cases) some hybrid of the two. There’s still a lot of reluctance about cloud dependence in the small/medium manufacturing world. We also talk to a lot of customers who have poor or non-existent I.T./O.T. separation; in some cases they realize that and intentionally keep machines isolated.

Thanks for these references.