MQTT and Sparkplug B

That would work at the edge, but I want complete integration in both Machines AND applications.

And don’t get me wrong, I really like Node-RED, but depending on the use case you can’t really call it “enterprise-grade” software.

I started to write out my use cases again but in reading my response from last year it’s still pretty much unchanged.

One thing I would add is that I would see it working the same way as other connectors. So instead of just SQL and HTTP connectors, MQTT would be just another choice, and when you open it you would have an intuitive interface for managing your server settings and subscriptions.

I get what you are saying about Node-RED. It was an IBM project, so it got its start before it went open source, but at the moment the packages all feed from npm, and that is in itself a little risky. I’m not sure it’s entirely clear-cut, though; look at the RIO line from Opto 22. Apparently they are confident enough in it to embed it.

We’ve found a few commercial alternatives but they’re kind of expensive.

I think the thing about MQTT and Tulip is I’d need to see how it would all fit together. If the MQTT message is JSON then some kind of JSON query language would be helpful. A simple thing to do might be to just steal the way Tulip handles output parameter mapping in a bot call that returns JSON. A more sophisticated way to do it might be something like jq or JSONPath (like XPath, but for JSON). It might be worth taking a look at how AWS IoT Core rules work; that’s not completely terrible either.
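A minimal sketch of the dotted-path idea — a much-reduced cousin of jq/JSONPath. The payload shape and path syntax here are invented for illustration, not anything Tulip or AWS actually implements:

```python
import json

def extract(payload: dict, path: str):
    """Resolve a dotted path like 'machine.metrics[0].value' against parsed JSON."""
    cur = payload
    for part in path.replace("]", "").split("."):
        if "[" in part:              # list index, e.g. metrics[0]
            key, idx = part.split("[")
            cur = cur[key][int(idx)]
        else:                        # plain object key
            cur = cur[part]
    return cur

msg = json.loads('{"machine": {"id": "press-01", "metrics": [{"name": "temp", "value": 71.5}]}}')
print(extract(msg, "machine.metrics[0].value"))  # 71.5
```

A real implementation would want wildcards and filters (which is where JSONPath/jq earn their keep), but even this much covers the "map one field of the payload to one Tulip attribute" case.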

I’m more worried about Sparkplug, though. Sparkplug messages are encoded as protobufs; you can deal with that, but you have to know the .proto, which means you have to know which version of Sparkplug B you’re talking to. What concerns me more is that if you’re using a broker that isn’t Sparkplug Aware, then you don’t necessarily have STATE set. The problem with that is that when values are sent via Sparkplug B, they are sent via a handle of sorts. That handle is assigned when the device connects, via DBIRTH and NBIRTH. That means that if your broker doesn’t support STATE, then you have to maintain your own state. If you get a parameter “12345” and you don’t know what that is, you have to request a REBIRTH from the end device. That’s a big concern when it comes to I.T./O.T. separation, as usually you want to set up some kind of data diode using broker-to-broker relaying or something similar. It would seem like an easy way to do this would be to make a read-only MQTT login to prevent anything from being written back, but as I mentioned, that doesn’t work because now you can’t send REBIRTHs.
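The alias bookkeeping described above can be sketched roughly like this. The message shapes are simplified stand-ins for decoded Sparkplug payloads (plain dicts, not the real protobuf structures), just to show why losing the BIRTH leaves you stuck:

```python
# Client-side Sparkplug B alias tracking sketch (simplified message shapes).
class SparkplugState:
    def __init__(self):
        self.aliases = {}           # (device, alias) -> metric name, learned from *BIRTH
        self.rebirth_requests = []  # devices we had to ask to re-announce themselves

    def on_birth(self, device, metrics):
        # *BIRTH carries both name and alias; remember the mapping.
        for m in metrics:
            self.aliases[(device, m["alias"])] = m["name"]

    def on_data(self, device, metrics):
        # *DATA carries only the alias. If we never saw the BIRTH
        # (broker not Sparkplug Aware, or we connected late), the only
        # recovery is to *write* a REBIRTH request back to the device --
        # exactly what a read-only login or data diode forbids.
        out = {}
        for m in metrics:
            name = self.aliases.get((device, m["alias"]))
            if name is None:
                self.rebirth_requests.append(device)
                continue
            out[name] = m["value"]
        return out
```

With a Sparkplug Aware broker retaining the BIRTH messages, a late subscriber can rebuild `aliases` without ever writing upstream; without it, the REBIRTH round trip is unavoidable.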

For this reason I’m not super excited about having MQTT support in Tulip. I’d still propose doing it through Node-Red or something equivalent… or making a little “sparkplug b helper” standalone app that runs on an Edge IO device.

Sagar: that’s my question, too. If someone has their own broker, I wouldn’t think Tulip would want to sit connected to that broker 24x7 waiting for messages. If the request is that Tulip have its own broker - essentially an IoT input that speaks mqtts - that might be something I could get behind. If Tulip were to do that, then you could back it with something meeting the Sparkplug Aware broker specification so that it supports STATE.

I don’t know that I have an immediate use for this but I can see how it could be useful, and it’s something I’d like to be involved in if I can. I’d really like to kind of see the design concept. (I think I already have an NDA with you so we could talk through it under that umbrella - if not we could add one).

I’m less worried about Sparkplug B; it would be a nice-to-have, but MQTT 5 would be sufficient for my needs.

I don’t see why being connected to an MQTT broker 24/7 would be an issue. Currently the platform is connected to your connector host, which is connected to an OPC UA server like KEPServerEX, 24/7.

There are companies like HiveMQ and EMQX which provide clustered, enterprise-grade MQTT brokers. Best-in-class brokers already exist; I don’t think Tulip should try to create its own.

My problem is that while I really like Tulip, it has its limitations, and in the end creates Just Another Data Silo (JADS ^TM). I want more generic access to my data, both inbound and outbound. Tulip isn’t necessarily the center of my Digital/IoT ecosystem, but a node in that ecosystem along with SAP, Windchill, Smartsolv, etc.

Back to the external data access topic, being limited to 100 records in the API calls just doesn’t cut it. And there’s no access to completion data, and only rudimentary access to machine data (having to export CSVs for each machine individually), which won’t scale.

We currently are experimenting with using Tulip for Tier 0/1 level dashboards but the company standard thus far is going to be PowerBI for higher Tier level report outs. Right now I don’t have a way to summarize & export machine utilization or other metrics, but if I could send JSON messages within machine triggers that problem would be immediately solved, plus I could ingest directly into Azure IoT to make the data available to other consumers.

Neither Node-RED nor the Tulip API nodes solve any of these issues.

This is why we use a Postgres DB :slight_smile: and Node-RED → AWS IoT Core

Still doesn’t directly address completion or machine data (at least from OPC sources).

I could subscribe to the same attributes in Kepserver (and we have) but then I have to re-implement the machine state logic, and any other data I calculate from it, like part count. That’s not a scalable solution, and creates more than one source of truth.

I agree with you. Tulip should facilitate data ingestion by supporting MQTT subscription for machine data, but in parallel the data stored in Tulip should be publishable, ideally in real time, to other systems/brokers. For MQTT client support, some features that would be interesting:

  • Topic parsing
  • Payload parsing (of course)
  • Option to map attributes at the machine-type level, not machine by machine, if your topic is well structured
  • Topic subscription in apps (not only machines)
  • MQTT publishing
  • Machines can subscribe to several topics

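A rough sketch of what the topic-parsing and machine-type-level mapping bullets above might look like in practice; the template syntax and the topic convention are invented for illustration, not an actual Tulip feature:

```python
def map_topic(topic: str, template: str):
    """Match a well-structured topic against a template like
    'site/{area}/{line}/{machine}/{attribute}' and pull out the fields.
    Returns None when the topic doesn't fit the template."""
    parts, slots = topic.split("/"), template.split("/")
    if len(parts) != len(slots):
        return None
    fields = {}
    for part, slot in zip(parts, slots):
        if slot.startswith("{") and slot.endswith("}"):
            fields[slot[1:-1]] = part   # capture this level as a named field
        elif slot != part:
            return None                  # literal level must match exactly
    return fields

print(map_topic("site/press/line1/press-01/temperature",
                "site/{area}/{line}/{machine}/{attribute}"))
```

If the topic tree is structured like this, one template per machine type is enough to route any machine's messages, which is exactly the "not machine by machine" point.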
Ooh, that’s an interesting idea. I think what I want is for every ‘client’, be it Tulip Player or a browser, to have access to MQTT. Then you could trigger off of MQTT messages, live-update text fields, etc. It’s easy to say, I think pretty hard to implement :slight_smile:

For example, if you update a system of record with Tulip in async mode, your app can subscribe to a specific topic (work order number, maintenance order, …) to be notified that your work order has been updated. More and more IT solutions can publish a message when an object is changed, and more and more apps can leverage IT events to enhance the UX for frontline operators.

Yes, interoperability through MQTT!

My company offers an MQTT-centric solution architecture where edge devices publish via Sparkplug B.

At this moment, we have customers that are publishing data into a HiveMQ broker, and I want to subscribe to this information using Tulip. It just makes sense for very fast app development and a short time-to-value.

Node-Red (running on edge device) → Broker → Tulip

Currently, I could do another conversion of SpB to OPC UA or a REST API, but that’s just complicating the issue.

Also, being able to publish from Tulip to the broker would be ideal.

Thanks for your help.

Short version: Sparkplug B mostly sucks. Seriously. There are so many flaws in the entire protocol and implementation that it gets in the way more than it helps, versus regular MQTT. I would suggest doing the “unpacking” of Sparkplug B content into separate MQTT topics at the broker level (I’ve written a plug-in for HiveMQ that does just that), and focusing Tulip’s MQTT efforts on “regular” MQTT, not Sparkplug B.

@RickBullotta, Kind of unrelated to Tulip but my team is proposing UNS with SPb. I’m interested to know a bit more why you think SPb sucks. On paper SPb sounds great but practically it might not be so. So I’m really keen to hear your thoughts if you can elaborate. Thank you.

Here is my list of issues with Sparkplug B and MQTT in their current state:

Core Sparkplug issues:

  • The rigid Node/Device format does not fit real-world models. Do away with one of them and allow more flexible topic hierarchies.
  • The handling of *BIRTH messages and the intermingling of data and metadata in them is a poor design
  • The process for requesting *BIRTH messages is terribly inefficient. Metadata should be retained, not requested each time by each client.
  • There should be *METADATA messages to deal with metadata
  • The opaque/multi-value nature of *DATA messages makes it impossible for client(s) to subscribe to individual metrics. Huge issue.
  • The “primary client” stuff is unnecessary and should be removed
  • Any device commands/methods should be fully declared (inputs and outputs fully typed)
  • Naming of metrics/commands needs to be locked down to a more restricted character set
  • Add a few more top-level data types (e.g. Location, BLOB w/ MIME type, …)
  • Support other encodings besides Protobuf (e.g. plain JSON, zipped JSON, BSON, etc.)
  • Millisecond resolution for timestamps is inadequate for modern systems where accurate event sequence is critical (e.g. power grid)

Core MQTT issues:

  • The decision to use 4 bits for the message type was short-sighted and is now a major limitation
  • Multi-publish is essential (publish more than one topic in a single message)
  • Topics should have durable metadata (could be passed as headers in publish or as its own message type)
  • Payloads should have a data type (mime type) to enable parsing/processing by subscribers or intermediaries
  • Topic binding should be a built-in broker feature (like a symlink) - topic aliases are not this
  • The lack of a capability to query topics is a huge flaw/gap. Ideally these queries should be able to include metadata filters as well
  • MQTT’s subscription patterns are hopelessly limiting. They should support richer match expressions and also metadata filters
  • MQTT should officially support a REST binding for request/response query/publish/read/subscribe/unsubscribe
  • Handling of large payloads (e.g. file uploads, video content, firmware/software updates, method responses) needs a lot of work to improve reliability
  • Handling of RPC/method invocations remains an awful hack. This MUST be addressed.
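For reference, the entirety of MQTT's subscription matching — the `+`/`#` rules one of the bullets above calls hopelessly limiting — fits in a few lines, which is both its appeal and the point of the criticism:

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """Standard MQTT subscription matching: '+' matches exactly one topic
    level, '#' matches the remainder of the topic. That's the whole
    vocabulary -- no regexes, no value or metadata filters."""
    p_levels, t_levels = pattern.split("/"), topic.split("/")
    for i, seg in enumerate(p_levels):
        if seg == "#":                 # multi-level wildcard: match everything from here
            return True
        if i >= len(t_levels):         # pattern longer than topic
            return False
        if seg != "+" and seg != t_levels[i]:
            return False
    return len(p_levels) == len(t_levels)

print(topic_matches("plant/+/press-01/#", "plant/line1/press-01/temp"))  # True
```

Anything like "all machines whose state is FAULT" has to be pushed into the topic hierarchy itself or filtered client-side, which is exactly the gap richer match expressions would close.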

I’m a total novice when it comes to MQTT. The ecosystem team is starting to focus on supporting our community and customers in this area - by creating demonstrations and library content that would leverage partner technologies (HighByte, HiveMQ, etc.) as well as the latest features in Tulip (Automations, new data pipelines, Node-RED on Edge…). This thread was a treat to read.

I just wanted to chime in to thank all of you for this discussion. There is a treasure trove of knowledge sharing here. I’d love to pick your brains and collaborate!


There’s still so much to say and discuss about IT/OT integration. I’m a big fan of the idea of presenting applications that combine the Tulip platform with partners like HighByte, … There’s no shortage of ideas on this subject in my company :slight_smile:

Got a small win here… installed HighByte on an EC2 instance and got it reading and writing to Tulip Tables as well as writing to machines.

Now that this is opened up, for starters I want to document this and make it available… it was way too hard to figure out without any documentation! Besides that, I want to make a little visual that shows data moving around in real time, something like this, but animated.

But I’m coming here to ask the pros! What should I connect? What do you connect? What would you like to see? I feel like I just opened a little Christmas present and I’m looking to play around with it!


Of course… this is just an example diagram, I feel like every node can realistically connect to every other node, so what is useful? What is helpful? I want to put some stuff into the ecosystem that you all would find useful in your operations.

This makes me feel good to start the day. Some comments and/or ideas:

  • You can use the embedded MQTT broker in HighByte to simplify the architecture
  • A dedicated MQTT broker like HiveMQ is a good option for scalability and multi-plant architectures
  • One of the challenges with the Tulip Machine API is adapting your JSON payload to a specific machine and a specific ID. Creating a HighByte pipeline dedicated to the Tulip Machine API could be a good example. Functions can help a lot
  • Models and instances in HighByte are close to machine types and machines in Tulip; a demo that shows that, and maybe automates machine type or machine creation, is another idea
  • Tulip can now mix OPC UA and API machine data. HighByte can mix more data sources, and mix pull and push data, to better contextualize machine data. I see a lot of value in publishing a more advanced machine data model into Tulip
  • In version 3.1, HighByte exposes a REST Data Server that acts as a shop-floor API. Using this API can open the door to triggering a lot of actions/commands on the shop floor from Tulip apps or Automations, with specific protocols (file, OPC UA methods, …)
  • Most of what you can do in HighByte, you can also do with Node-RED and advanced skills. HighByte is great for scalability: it provides a simple way to connect thousands of machines. I would recommend that your use case show how HighByte can be a smart way to scale Tulip deployments with machine data
  • For OEE, HighByte can publish machine data, Tulip machine triggers can convert machine data into machine states, and an Automation can republish the machine state to HighByte to store in long-term storage (InfluxDB, …)
  • … many more
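As a sketch of the payload-adaptation bullet above — translating a generic metric reading into a request body keyed by Tulip-specific IDs. The ID lookup table and the body shape are hypothetical placeholders for illustration, not the documented Tulip Machine Attributes API schema:

```python
import json

# Hypothetical lookup: (machine name, metric) -> (Tulip machine ID, attribute ID).
# In a HighByte pipeline this mapping would live in the model/instance config.
ID_TABLE = {
    ("press-01", "temperature"): ("machine_abc123", "attr_temp"),
}

def to_tulip_body(machine: str, metric: str, value) -> dict:
    """Build a per-machine request body from a generic reading.
    Raises KeyError if the machine/metric pair has no mapping."""
    machine_id, attribute_id = ID_TABLE[(machine, metric)]
    return {"attributes": [
        {"machineId": machine_id, "attributeId": attribute_id, "value": value}
    ]}

print(json.dumps(to_tulip_body("press-01", "temperature", 71.5)))
```

The point is that this per-machine ID translation is pure bookkeeping: keeping it in one pipeline (or one function) instead of per-machine configuration is what makes the approach scale.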

If I had 2 pieces of software to take to a desert island with a manufacturer, I’d take Tulip and Highbyte :slight_smile:

Hope it helps


Thank you for that detailed description. Seems like I have a lot to learn.