Tulip Community Competition: Best Vision Use Case Idea

Earlier this week, we announced the availability of new Vision capabilities in beta.

The new Vision capabilities include change detection and jig detection, which build on the barcode and QR code reading capabilities that are available with previous versions of Tulip.

If you had unlimited cameras and time to build and test the apps: how would you use the new Vision capabilities in Tulip?

We are looking for the two best ideas: the one with the biggest potential business impact, and the funniest, most interesting one. We are just looking for IDEAS; there is no need to actually build anything (although you can). Respond to this post with your own completely original idea, and try not to build too much off an existing idea. The deadline for ideas is December 11th, 2020!

The authors of the two best ideas will each receive a very special gift to help make their idea a reality: a RealSense Camera.

Change Detection
The Change Detector lets users define regions of interest and detect changes such as hand movements for use cases such as pick-to-light. Because change is measured as movement at a distance from a surface, a depth camera is required. Tulip has tested and recommends the Intel RealSense Depth Camera.

Jig Detection
The Jig Detector allows users to print and affix specialized stickers to items to detect movement such as the arrival of material. Most RGB cameras will work for jig detection.


Build an OPE indicator by combining OEE data from machine monitoring with the operator’s presence in front of the machine.


Fun Idea: Disc stacking logic puzzle (Tower of Hanoi). Utilize the jig labels to identify the disc sizes (small, medium, large, etc.) and write triggers to illuminate a red light and sound a fail buzzer if the player stacks a larger disc on top of a smaller disc. Use change detection to monitor the height of the tower on each peg and indicate success when all the discs have been moved from the starting point to the end point.
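The trigger logic above can be sketched in a few lines. This is only an illustrative sketch: the function and variable names are hypothetical, and in a real app this logic would live in Tulip triggers fed by jig-label and change-detection events.

```python
# Sketch of the fail-buzzer trigger logic for the Hanoi puzzle.
# Disc sizes come from the detected jig labels: 1 = small, 2 = medium, 3 = large.

def is_legal_move(peg_stack, incoming_size):
    """A disc may only go on an empty peg or on a larger disc."""
    return not peg_stack or incoming_size < peg_stack[-1]

def place_disc(pegs, peg_index, incoming_size):
    """Return True on success; False means red light + fail buzzer."""
    if not is_legal_move(pegs[peg_index], incoming_size):
        return False
    pegs[peg_index].append(incoming_size)
    return True

def is_solved(pegs, total_discs, goal_peg=2):
    """Success: every disc has reached the goal peg."""
    return len(pegs[goal_peg]) == total_discs

# Three discs start on peg 0, largest at the bottom.
pegs = [[3, 2, 1], [], []]
```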


Thanks to Vision, we are getting closer to building apps that can be controlled remotely. We can overlay buttons/shapes on the Vision widget, and instead of being clicked, the buttons will be triggered by a change detector. Beforehand, we need to create regions of change that match the positions of the different buttons.

I’m thinking about a grade A cleanroom, where adapted IT equipment can be costly (and adds an additional cleaning workload). Most cleanrooms are surrounded by clear plastic walls, so we could just install the Tulip hardware outside.

Current technical drawbacks: the mirror effect of the camera, and the need to detect the regions of change.
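The button-region idea could be routed with a simple lookup from fired region to app action. This is a sketch under assumptions: the region names, action names, and event hook are all hypothetical, not part of the Tulip API.

```python
# Sketch: routing change-detector events to virtual on-screen buttons.
# Each change region overlays one button; its name identifies the action.

BUTTON_REGIONS = {
    "region_next": "next_step",
    "region_back": "previous_step",
    "region_confirm": "confirm",
}

def on_change_detected(region_name):
    """Change detector fired inside a region overlaid on a button.

    Returns the app action to trigger, or None for movement that
    falls outside any defined button region (ignored).
    """
    return BUTTON_REGIONS.get(region_name)
```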


Use the new Vision capabilities for “augmented reality” work instructions. This would overlay 2D work instruction notes/images, or potentially 3D models, into the Tulip viewer. These could be triggered or positioned in the viewer using QR codes on physical objects, linking the viewer and the physical space.

Something similar to this, about 30 seconds in:


Hello Felix,

Thanks to your answer, I can now locate the regionName and create conditions based on this name to trigger actions. That solves my concern!

At first, I was thinking about using the webcam of a tablet (outside) and having the operator (inside) make selections in front of the screen, hence the mirror-effect problem. Your idea of using a projector is even better! I’m not sure yet what the installation would look like, but it’s all about giving aseptic operators the ability to control Tulip apps without any additional risk of contamination.


Now that you’ve included Vision, you could detect that the area has been cleared and that no parts have been left behind (screws, etc.).
That camera is also good at detecting powder thanks to its additional IR emitters. I’m currently developing line clearance systems that use this functionality to detect things on a surface that should have been cleared (tools, components, etc.); using IR and UV also allows checking for powders and biological residue.


Business Idea: Automated Kanban/Part ordering system.

Create an app that automatically monitors a station’s inventory and submits pull requests for needed parts before the station runs out.

Use Jig labels to identify parts bins and associate a part number with each jig label.

Use a scale connected to the app to weigh the bin and calculate the number of parts in the bin. Track which part is being weighed with a jig-detection region over the scale. Part weights can be stored in a table.

Use a change detection region at the front of the bin to count the number of times an operator reaches into the bin, and decrement the bin quantity. (Obstacle: operators may take more than one part when they reach into the bin.)

Use a second change detection region over the bottom of the bin to detect when the bin is getting low (i.e., a change region 5-15 mm above the bottom of the bin). This could not be used with stacked bins, since the upper bins would obscure the parts in the lower bins from the camera.

When the bin quantity count reaches a preset threshold or the second change region no longer detects parts, place an order to the warehouse to pull the required part or prompt the operator to re-weigh the bin to get an accurate count before placing the order.


1. Use an outside camera to detect trucks at the door bays and display the empty bay doors on a screen, so delivery personnel know which door is available. You can also track the duration of each delivery by recording the time parked.

2. When working with different products in the same area, code boxes or products with jig detectors and inform the employee/supervisor when they place something in the wrong place. This should reduce rework when packaging, and hopefully returns when people get the wrong product.


The ability to identify a unique feature of the component that is unaltered by subsequent processing and handling (and potentially, after use) - instant serialization of your product for identification without marking the product.

An app that uses Vision to monitor/confirm that a pallet has the correct amount of product stacked on it. Once Vision confirms the correct configuration, it prints the product/pallet label, and once the label is attached to the pallet, Vision scans that label into inventory.

And/or an app that can ‘count’ like objects stacked in predetermined areas to keep counts. For instance, you could have 4 squares marked on the floor, each representing a different reject reason (size, faulty, etc.), and every time a rejected part is added to a certain square, the Vision app adjusts the count and records it.
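The floor-square tally could be sketched as below. The square names and reject reasons here are hypothetical placeholders; each square would correspond to one change-detection region in the app.

```python
# Sketch of the reject-reason tally: one change region per floor square.
from collections import defaultdict

REJECT_SQUARES = {
    "square_1": "size",
    "square_2": "faulty",
    "square_3": "scratch",
    "square_4": "other",
}

counts = defaultdict(int)

def on_part_added(square_name):
    """A change region over one square fired: bump that reject reason."""
    counts[REJECT_SQUARES[square_name]] += 1
```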

@Alinator What a cool idea! I especially like the Vision+IIoT combination of the camera+scale. Totally doable on Tulip today…

To make it even simpler - put a jig marker at the bottom of the bin; when the bin is getting low or almost empty, the marker shows and triggers the replenishment process automatically.

@youri.regnaud - you hit a very good use case for cameras.
We are working to get the person detector out the door so this app/use case is definitely going to be feasible in the very near future. Stay tuned for updates!

@mvermeer This sounds like a “defect detector” with visual inspection. It’s a solid use case that repeats on many production lines.
I think the ability to tie the visual detector with the process (work instruction) is the key to success. You mention the potential problems or defects may come at a later stage of the process, after things have already been done to the work piece. So adding multiple visual inspection steps within the process is recommended.

@royshilkrot I was thinking more along the lines of individual component identification, not necessarily for defects. Think of a unique pattern on the component, picked up visually (like facial recognition for parts). I’d want this feature to create a digital thread through my process to track each piece, traceable back to supplier and inspection data.

@hed both these ideas are great.
The first idea could be implemented with a Change detector, and be part of a process or scheduling system built on Tulip Tables and Apps.
The second idea is perfect for the Jig detector. However, every jig marker has an ID and the number of IDs is limited (up to 1000), so using them as unique codes for each box may not be feasible. You can print regular barcodes on the boxes and use a simple barcode scanner to identify them, and make sure they go in the right place by using a Change detector.

Hi @royshilkrot, I believe that will also work. I was thinking more along the lines of assigning one ID per product or SKU; I think it could work for a small or medium company with up to 1000 products.

Love this idea - here is another way to implement it. A fairly common approach to signaling replenishment is the idea of a “water level” for small parts. This would be fairly easy for a camera to evaluate, given the right station design.

(cue dramatic drumroll) :drum:

Announcing the winners :1st_place_medal: of our first Tulip Community Competition: Best Vision Use Case Idea

“Build an OPE indicator by combining OEE data from machine monitoring with the operator’s presence in front of the machine” @youri.regnaud
“Automatically monitor a station’s inventory and submit pull requests for needed parts before the station runs out.” @Alinator
“Coding boxes or products with jig detectors and informing the employee/supervisor when they place it in the wrong place” @hed

Yes, 3 winners! With so many great ideas, we couldn’t choose just 2.

Along with this award, we will be sending the winners a RealSense Camera :camera_flash:

Thank you to everyone who participated, and stay tuned for more on Vision capabilities.