Hi TULIP Community and TULIP Team,
How do you design the interaction between TULIP users and the TULIP Player in work instructions?
What options are there to get to the next step in an app, for example?
How can you make it as easy as possible for the user?
Do you use touch displays, dedicated devices, a keyboard, or other means to fire the “next” trigger?
What has worked well for you?
I am looking forward to your answers.
It’s a bit tough to lend a hand with this because it really depends on what you want that user experience to be. Apologies for the long response, but hopefully it’s useful.
Talking with and co-developing alongside the user(s) is ultimately a crucial part of this process. I’m not sure what your experience is so far or what your role is in the grand scheme of Tulip use where you’re at, but it would be good to first understand the many options for triggers and in-app typing inputs. Then, understanding what the flow and progression of the inputs/outputs should or could be will help you decide, or at least guide the discussions with the users, with respect to your triggers and the app behavior.

Sometimes I like to think of it as a “professional” presentation (e.g. PowerPoint). There are times when you want it to do the work for you (timers or motions) and other times you want the material on demand (like clicking next on a remote while presenting). If the user reaches a certain point or adds input, you may want the app to progress automatically (i.e. a transition), or you may want data to post automatically behind the scenes when certain conditions are or are not met. If you want or need that data posting or app progression to be more deliberate and explicit for the user, you may instead require them to push more buttons - personally, I try to minimize the user’s input as much as possible.

I also factor aesthetics, ergonomics, and psychology into my apps. Whether it’s eye travel or hand travel, I try to minimize the motion based on repetition vs. function and flow. There are studies out there on eye motion (where our eyes are drawn to, how they get drawn to a location, etc.) that can help too - work with human nature rather than subconsciously against it. Again, co-developing with the user(s) is key to functionality, value, buy-in, and engagement.
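To make the “on demand” vs. “do the work for you” distinction concrete, here are two trigger sketches in rough When/Then pseudocode (the exact wording and options in Tulip’s Trigger editor may differ, and the part-number check is just an illustrative condition):

```text
# Deliberate/explicit: the user decides when to advance
WHEN  button "Next" is pressed
THEN  store the current inputs
      Go to Step: Next

# Automatic: the app advances on its own once a condition is met
WHEN  device output is received (e.g. a barcode scan at this station)
IF    the scanned value matches the expected part number
THEN  Go to Step: Next
```

The first keeps the user in control at the cost of an extra touch; the second removes a button press entirely, which fits the “minimize the user’s input” goal whenever the input itself is an unambiguous signal to move on.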
In my opinion, if the player is at a common workstation computer and usage requires multiple applications (e.g. MS Word, MS Outlook, Tulip player, browser, etc.) or a personal computer, it would likely make the most sense to have a physical keyboard as it tends to be more natural/familiar. If the player will be at a shared computer with limited functionality (e.g. Tulip player & browser only) at a point-of-use computer at a station or machine, then I think a touchscreen w/o physical keyboard & mouse makes the most sense.
We use touchscreens with the on-screen keyboard and no mice. Some triggers fire automatically based on user input; others are triggered manually by the user. I might be missing some, but with the touchscreens, some pros and cons are…
- No extra hardware is needed, so there’s nothing extra to get damaged
- Because of the above, less clutter
- Not much of a learning curve for less tech savvy individuals for basic typing functions
- Less time lost for inexperienced typists compared to traditional keyboards
- Feels more engaging for the user than juggling extra hardware - it keeps the user’s eyes and hands on the screen.
- Strange behavior with Windows’ on-screen keyboard
* If the Tulip Player is in full screen, the on-screen keyboard overlays it
* If the Tulip Player is not in full screen, the keyboard annoyingly bumps up/shrinks the player window, which can cause a lot of typing errors and frustration. If the user has to type several things, the player may jump up and down repeatedly.
- At least with Windows, the on-screen keyboard layout options are limited; there isn’t a standard layout with a full keyboard plus a num pad to the side.
- Though not very frequently, sometimes the keyboard just doesn’t pop up and either has to be requested from the task bar or the computer needs restarting. It may be a bug with the Tulip Player, but it feels more like an annoying Windows issue. If you have a lot of computers (dozens/hundreds) with this configuration, it can seem like it’s always happening, but on a per-computer basis it’s not frequent.
- Depending on how much real estate you have on your screen and/or in your app relative to the amount of content, button/trigger size and placement can be an important consideration (think of a landline phone with extra-large buttons). If buttons are too close together or too small, a user’s touch accuracy is only as good as their perception. This rolls into the user-experience conversation.
- It’s minor, but the “?” widget (more-information message) is not supported on touchscreens (i.e. it requires a mouse pointer to hover over the widget).
All in all, I recommend co-developing with users, deploying a minimum viable product (MVP), trying it out, and iterating on the fly based on user feedback.
Hopefully this was helpful! All the best!