With Release 264, we are excited to start providing beta access to Automations and Frontline Copilot™ for Tulip customers. This release also includes many updates to the Apps Page, Widgets, Vision, and more.
Can you imagine that, in the future, Speech-To-Text could trigger a button or action, or serve as an event in a trigger the way a machine or device does? (When "the operator speaks", if "the text contains 'NEXT'", then …)
This was a use case that came up when showing Speech-To-Text to another customer this week. The new input does fire a trigger when processing is complete, so you can certainly do this on day 1 (see the rough sketch at the end of this post), but I see two potential gaps:
1. The input needs to be pressed and held to record, which makes fully hands-free operation impractical.
2. Right now, the conversion from audio to text happens after the button is released, not live as the user speaks. This lets us leverage higher-accuracy voice-to-text models, but it comes at the cost of making a truly hands-free, interactive use case challenging. RealWear is quite good at addressing this problem, and it's always the direction I recommend when people are looking for a truly hands-free experience.
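In case it helps to visualize the condition, here is a minimal sketch of the logic described above. Tulip triggers are built in the no-code Trigger Editor rather than written as code, so all of the names below (onSpeechToTextComplete, transcript, goToNextStep) are purely illustrative assumptions, not Tulip APIs:

```typescript
// Illustrative sketch only: this is NOT Tulip code or a Tulip API.
// It models the trigger described above: when speech-to-text processing
// completes, check the resulting text and act on a keyword.

function onSpeechToTextComplete(transcript: string): void {
  // Fires once the press-and-hold recording is released and processing finishes.
  const normalized = transcript.trim().toUpperCase();

  if (normalized.includes("NEXT")) {
    // e.g. advance the app to the next step
    goToNextStep();
  }
}

// Hypothetical stand-in for a step-navigation action.
function goToNextStep(): void {
  console.log("Advancing to next step");
}
```

In the app itself, this would be configured as a trigger condition on the Speech-To-Text input rather than written out, but the flow is the same: event fires on completion, condition checks the text, action runs if it matches.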
If the button could be triggered by an event, we could imagine a lot of options, such as a USB pedal, for example.
(Hands-free doesn't mean feet-free :-)) I will run some tests.
@thorsten.langner - You had some great feedback for the team from the r260 release - one piece of that was included here: the Recents page is now the default view for the Apps Page! Thanks for helping to make Tulip better and easier to use with this valuable feedback.