Integration using links and the official Tulip Node-RED library

I have been working extensively in Node-RED, creating a mechanism to load ERP data (mainly production order information) into Tulip. The ERP system is Level7, built on Business Dynamics specifically for the remanufacturing industry. I am working toward a three-part strategy:

  1. Load data from the ERP system in whatever form is available. In my case, it’s actually three separate tables: Production Orders, each of which has multiple Items, each of which has multiple production Routes.
  2. Transform it into a common representation: a stream of objects, each containing a complete production order (including all of its items and each item’s routes).
  3. Send it to Tulip, which means loading the table rows and then creating the links.
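Step 2 can be sketched as a plain join of the three flat tables into one nested object per order. This is illustrative only; the field names (`orderId`, `itemId`, `sku`, `step`) are my placeholders, not Level7's actual schema.

```javascript
// Join three flat ERP tables (orders, items, routes) into a stream of
// nested production-order objects. Field names are hypothetical.
function buildProductionOrders(orders, items, routes) {
  // Index routes by the item they belong to
  const routesByItem = new Map();
  for (const r of routes) {
    if (!routesByItem.has(r.itemId)) routesByItem.set(r.itemId, []);
    routesByItem.get(r.itemId).push(r);
  }
  // Index items (with their routes attached) by order
  const itemsByOrder = new Map();
  for (const it of items) {
    const withRoutes = { ...it, routes: routesByItem.get(it.itemId) ?? [] };
    if (!itemsByOrder.has(it.orderId)) itemsByOrder.set(it.orderId, []);
    itemsByOrder.get(it.orderId).push(withRoutes);
  }
  // Emit one complete object per production order
  return orders.map(o => ({ ...o, items: itemsByOrder.get(o.orderId) ?? [] }));
}

// Example: one order, one item, two routes
const result = buildProductionOrders(
  [{ orderId: 'PO-1', due: '2024-06-01' }],
  [{ itemId: 'I-1', orderId: 'PO-1', sku: 'WIDGET' }],
  [{ itemId: 'I-1', step: 10 }, { itemId: 'I-1', step: 20 }]
);
```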

The idea of step 2 is to transform ERP data into a common format so that by the time you get to step 3 you’re agnostic to the actual ERP system. The idea is to make it adaptable. Ideally this common representation would comply with some well-known standard like ISA-95 / B2MML. I’m on a deadline, so I am not starting there, but it’s where I hope to get.

In Tulip, adding links is a separate step from creating the records. I have proposed that Tulip add the ability to send links with the create-record request, but for now links have to be sent as a separate API call.
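So each row becomes at least two sequential requests: create the record first, then link it. A minimal sketch of that sequencing, as pure request builders (no network). The records path follows Tulip's Tables API; the link path and payload shape here are placeholders I made up, so check Tulip's API docs for the real link endpoint.

```javascript
// Build the ordered request list for one row: the record must exist
// before any link referencing it can be created.
function buildCreateThenLinkRequests(tableId, record) {
  const create = {
    method: 'POST',
    path: `/api/v1/tables/${tableId}/records`, // Tulip Tables create-record path
    body: record.fields,
  };
  const links = (record.links ?? []).map(link => ({
    method: 'POST',
    path: '/api/v1/links', // placeholder path, not Tulip's documented API
    body: link,            // placeholder shape
  }));
  return [create, ...links]; // order matters
}

const reqs = buildCreateThenLinkRequests('orders', {
  fields: { id: 'PO-1', status: 'open' },
  links: [{ from: 'PO-1', toTable: 'items', to: 'I-1' }],
});
```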

Sometimes in Node-RED it’s hard to control flow execution. Time delays are not a safe way to control when things happen. There are some built-in ways in Node-RED to wait for multiple nodes to complete, but using them correctly is difficult if you care about edge cases and exceptions (and you should). The right way to control a sequence of node executions is through the flow itself.

To accomplish this, I created a flow like this:



Remember, what I am trying to do here is add links after a Create Record. The way this flow works is that each time it writes a row, if that row contains a links array, the link request is formatted into an HTTP request and then sent using the standard Node-RED HTTP Request node.

For this to work I had to modify the node-red-tulip library slightly: I added an option to the table request node that causes it to pass the initial request through to its output. Here is an example of the new output of the Tulip Table node with the new “Request in Response” option turned on:

This option is off by default, mostly to avoid causing any backward-compatibility issues*. The result is that at the output of the Tulip table write I now see this:

Note the links array. At present, the Tulip Table node doesn’t know what to do with it, but that’s okay (for now): the key element here is that it passes through the Table node unperturbed, so that a subsequent node can see it.

In my flows, after the table node I have a second function block that formats the elements of my links array into a series of messages, emitted one per link. These messages form the request definitions for a standard HTTP Request node. This is not a great long-term solution, mainly because I have to duplicate the authentication process rather than use the Tulip configuration node, which makes the whole thing less portable. My hope is to eventually add a Tulip “links” node (similar to the tables node), but before I do that I want to have a discussion with Tulip so I don’t duplicate their own internal efforts.
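For reference, that function block looks roughly like this. Inside Node-RED the body would live in a Function node and emit each message with `node.send()`; here it is wrapped as a plain function so it runs standalone. The link endpoint path, payload shape, and auth header value are my placeholders, not Tulip's documented API.

```javascript
// Turn the pass-through msg (with its links array intact) into one
// HTTP-request message per link. The standard HTTP Request node reads
// msg.method, msg.url, msg.headers, and sends msg.payload as the body.
function formatLinkMessages(msg, baseUrl, apiKey) {
  const links = msg.payload?.links ?? [];
  return links.map(link => ({
    method: 'POST',
    url: `${baseUrl}/api/v1/links`,      // placeholder path
    headers: {
      Authorization: `Basic ${apiKey}`,  // duplicated auth, as noted above
      'Content-Type': 'application/json',
    },
    payload: link,
  }));
}

// In a Function node: formatLinkMessages(msg, baseUrl, apiKey).forEach(m => node.send(m));
const msgs = formatLinkMessages(
  { payload: { id: 'PO-1', links: [{ to: 'I-1' }, { to: 'I-2' }] } },
  'https://example.tulip.co',
  'dGVzdA=='
);
```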

To support all of this, I have issued a pull request to the official Tulip node-red library. I realize Tulip is very busy; if they can’t get to it, I may fork and re-publish the library. I’d prefer to avoid such splintering if possible, but this is actually rather urgent for me. (If I do fork it, the entire thing will definitely be rewritten in TypeScript.)

I’m really interested in hearing from other members of the community, especially ones who are using Node-RED for this type of integration. Any thoughts/questions/wishes? Do you work with linked tables?

*I said that the option is turned off for backward compatibility at the request of someone from Tulip. I don’t really agree with this: all I’m doing is adding a new attribute to the output msg, and any properly written existing code should ignore attributes it doesn’t understand without generating an error. But I’ll yield to Tulip here; it’s their library.

That’s why GraphQL on top of Tulip Tables can be super powerful for a complex schema like ISA-95 (Part 3).

Have you instantiated an ISA-95 schema as a one-off, do you use B2MML, or is that a hypothetical application?

It’s hypothetical but something I hope to realize this summer, if I can figure it all out :smiley:

We are working to store all Tulip data in what we call a DataHub: a full ISA-95 schema in a graph database plus time series, with a GraphQL API on top for mutation and query. We build Tulip apps on top of it for our operators. Why not use Tulip Tables now? Tulip is not the only publisher of data (PLC, automation, CMMS, QES, …). We want to balance citizen development with central schema governance to leverage analytics, and using Tulip Tables with third-party tools is not so easy (REST limitations, batch only, no events, …). I hope that these limitations will one day be overcome by Tulip.


I am not using Tulip as the system of record here, so I’m pretty happy with how it works. I am, however, interested in chatting about your thoughts on using GraphQL. Are you saying you wish Tulip had a GraphQL interface in addition to the REST interface? How are you using GraphQL today?

Right now neither ISA-95, nor customers, nor vendors, nor any other standard is fully able to address the scope of the industrial interoperability problem. We are almost at the point where data primitives (time, date, names, IDs) can be managed efficiently. Requiring open architecture is also a good step, but it is usually deployed as individual connection points. Instead, we need to get to a fundamental design approach that assumes there will be many systems deployed in a federated manner.

I ran into this limitation last week when interfacing with a CESMII datasource. You can make GraphQL calls with connector functions using plain-text request bodies, but it wasn’t incredibly intuitive.

{
    "query": "{ getRawHistoryDataWithSampling(ids: $tags$ maxSamples: 0 startTime: \"$starttime$\" endTime: \"$endtime$\") { id ts intvalue floatvalue boolvalue } }"
}
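One way to keep a query like this readable while still producing the single-line JSON body a plain-text connector expects is to author it multi-line and collapse it before sending. This is just an illustrative sketch; the `$tags$`-style tokens are Tulip connector placeholders and are left intact for Tulip to substitute.

```javascript
// Author the GraphQL document readably, then collapse whitespace and
// JSON-encode it into the one-line body shape shown above.
const query = `{
  getRawHistoryDataWithSampling(
    ids: $tags$
    maxSamples: 0
    startTime: "$starttime$"
    endTime: "$endtime$"
  ) {
    id
    ts
    intvalue
    floatvalue
    boolvalue
  }
}`;

// JSON.stringify escapes the inner quotes around the placeholders
const body = JSON.stringify({ query: query.replace(/\s+/g, ' ') });
```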

Tools like Postman do a better job of supporting GraphQL natively, so there is definitely an opportunity to support it better in connector functions. I wrote up a feature request to fill this gap.

Pete
