Machine Vision: Designing for Success
In our previous blog, we looked at some of the best practices to keep in mind when designing machine vision solutions. To reiterate, a well-designed machine vision system enables manufacturers to improve product quality, enhance process control, and increase manufacturing efficiency while lowering the total cost of ownership. Good design starts with selecting a motion-vision integration type, based on the machine’s automation tasks.
Integrated machine vision design
In an integrated machine vision system, the motion and the vision systems can have varying levels of interaction, from basic information exchange to advanced vision-based feedback. The level of interaction depends on the machine’s requirements: the sequence, the accuracy and precision, and the nature of the tasks the machine must perform. Depending on the level of interaction between the motion and the vision systems, a design can be based on one of four types of integration: synergetic integration, synchronized integration, vision-guided motion, and visual servo control. For a high ROI, the machine must meet the specified requirements at deployment and scale well with next-generation process and product improvements. Hence, integrators must first identify the current and future requirements and use them to determine the type of integration that best suits the application.
Synergetic integration
Synergetic integration is the most basic type. Here, the motion and the vision systems exchange basic information such as velocity or a time base. The time to communicate between the two systems is typically on the order of tens of seconds. A good example of synergetic integration is a web inspection system (Figure 1). In a web inspection system, the motion system moves the web, usually at a constant velocity. The vision system generates a pulse train to trigger the cameras and uses the captured images to inspect the web. The vision system needs to know the velocity of the web to determine the camera trigger rate.
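To make the velocity dependency concrete, here is a minimal sketch of how a vision system might derive its trigger rate from the web velocity. The function name, parameters, and numbers are illustrative assumptions, not taken from any particular product.

```python
# Illustrative only: derive a camera trigger rate from the web velocity
# so that consecutive frames overlap slightly and no section of the web
# goes uninspected. All names and numbers are hypothetical.

def trigger_rate_hz(web_velocity_mm_s: float,
                    field_of_view_mm: float,
                    overlap_fraction: float = 0.1) -> float:
    """Return the trigger frequency needed for full web coverage."""
    advance_per_frame_mm = field_of_view_mm * (1.0 - overlap_fraction)
    return web_velocity_mm_s / advance_per_frame_mm

# A web moving at 500 mm/s, imaged with a 100 mm field of view and 10%
# frame overlap, needs the cameras triggered at roughly 5.6 Hz.
print(f"Trigger rate: {trigger_rate_hz(500.0, 100.0):.1f} Hz")
```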
Synchronized integration
In synchronized integration, the motion and the vision systems are synchronized through high-speed I/O triggering. High-speed signals wired between the two systems trigger events and communicate commands between them. This I/O synchronization effectively synchronizes the software routines running on the individual systems. A good example of synchronized integration is high-speed sorting, in which objects are sorted based on differences in specific image features, such as color, shape, or size.
In a high-speed sorting application, the vision system triggers a camera to capture an image of a part moving through the camera’s field of view (Figure 2). The motion system uses the same trigger to capture the position of the part. Next, the vision system analyzes the image to determine whether a part of interest exists at that position. If it does, that position is buffered. Because the conveyor moves at a constant velocity, the motion system can use the buffered position to trigger an air nozzle farther down the conveyor. When the part reaches the nozzle, a burst of air diverts it onto a different conveyor, sorting, for example, parts of different colors. High-speed sorting is widely used in the food industry to sort product types or discard defective products. It achieves high throughput, lowers labor costs, and significantly reduces defective shipments resulting from human error.
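The sequence above can be sketched in a few lines of code. The sketch below mocks the hardware I/O; names such as is_part_of_interest(), fire_nozzle(), and the 250 mm camera-to-nozzle offset are hypothetical placeholders, not a real API.

```python
# Illustrative sketch of the sorting logic: the shared trigger latches
# both the image and the conveyor position; positions of parts to eject
# are buffered until they reach the nozzle.
from collections import deque

NOZZLE_OFFSET_MM = 250.0   # camera-to-nozzle distance (hypothetical)
eject_positions = deque()  # buffered eject positions, in conveyor units

def is_part_of_interest(image) -> bool:
    """Placeholder for the vision analysis (color, shape, or size check)."""
    return image.get("color") == "red"

def fire_nozzle():
    """Placeholder for the digital output that drives the air nozzle."""
    print("Nozzle fired")

def on_camera_trigger(image, conveyor_position_mm):
    """Called on the shared trigger edge: the same edge that exposed the
    camera also latched the conveyor position in the motion system."""
    if is_part_of_interest(image):
        eject_positions.append(conveyor_position_mm + NOZZLE_OFFSET_MM)

def on_position_update(conveyor_position_mm):
    """Called periodically by the motion system; fires the nozzle when a
    buffered part arrives at it."""
    while eject_positions and conveyor_position_mm >= eject_positions[0]:
        eject_positions.popleft()
        fire_nozzle()
```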
Vision-guided motion
In vision-guided motion, the vision system provides guidance to the motion system, such as the position of a part or the error in the part’s orientation. Each step from a basic to a more advanced integration type adds a layer of interaction between the motion and the vision systems; for example, you can have high-speed I/O triggering in addition to vision guidance.
A good example of vision-guided motion is flexible feeding. In flexible feeding, parts arrive in random positions and orientations. The vision system captures an image of the part, determines the part’s coordinates, and provides them to the motion system (Figure 3). The motion system uses these coordinates to move an actuator to the part and pick it up. It can also correct the orientation of the part before placing it. With this implementation, you do not need fixtures to orient and position the parts before the pick-and-place process. You can also overlap inspection steps with the placement tasks. For example, the vision system can inspect the part for defects and provide pass/fail information to the motion system, and the actuator can then discard a defective part instead of placing it.
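A minimal sketch of that sequence might look like the following. The PartLocation fields and the motion methods (move_to(), pick(), rotate(), and so on) are hypothetical stand-ins for real vision and motion APIs.

```python
# Illustrative flexible-feeding sequence: locate a randomly oriented part,
# pick it, correct its orientation, and place or discard it based on the
# overlapped defect inspection. All APIs here are hypothetical.
from dataclasses import dataclass

@dataclass
class PartLocation:
    x_mm: float        # part position in machine coordinates
    y_mm: float
    angle_deg: float   # orientation error to correct before placing
    passed: bool       # result of the overlapped defect inspection

def locate_part(image) -> PartLocation:
    """Placeholder: a real system would pattern-match the part, convert
    pixel coordinates to machine coordinates, and inspect for defects."""
    return PartLocation(x_mm=12.4, y_mm=87.1, angle_deg=15.0, passed=True)

def pick_and_place(image, motion):
    part = locate_part(image)
    motion.move_to(part.x_mm, part.y_mm)   # vision-guided move to the part
    motion.pick()
    motion.rotate(-part.angle_deg)         # correct the orientation
    if part.passed:
        motion.place_at_target()
    else:
        motion.drop_in_reject_bin()        # discard instead of placing
```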
Figure 4 shows the block diagram of the vision-guided motion system shown in Figure 3. The vision system provides the position of the part to the motion trajectory generator at least once per second. Sustaining this requires fast real-time systems that can meet the timing and processing demands of vision-guided motion.
In a vision-guided motion system, the vision system provides guidance to the motion system only at the beginning of a move. There is no feedback during or after the move to verify that it was executed correctly. This lack of feedback leaves the move exposed to errors in the pixel-to-distance conversion, and the accuracy of the move depends entirely on the motion system. These drawbacks become pronounced in high-accuracy applications with moves in the millimeter and submillimeter range.
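To see why the pixel-to-distance conversion matters, consider the worked example below. The field-of-view, sensor, and error numbers are illustrative assumptions.

```python
# Worked example: how a small image-space error becomes an uncorrected
# positioning error when vision guides only the start of the move.
# All numbers are illustrative.

field_of_view_mm = 50.0
sensor_pixels = 2048
mm_per_pixel = field_of_view_mm / sensor_pixels   # ~0.024 mm per pixel

# A 2-pixel localization error in the image maps to ~0.05 mm of error in
# the commanded target, and with no feedback during or after the move,
# nothing corrects it.
localization_error_px = 2
move_error_mm = localization_error_px * mm_per_pixel
print(f"Uncorrected move error: {move_error_mm:.3f} mm")
```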
Visual servo control
The drawbacks of vision-guided motion can be eliminated if the vision system provides continual feedback to the motion system during the move. In visual servo control, the vision system provides initial guidance to the motion system as well as continuous feedback during the move. The vision system captures, analyzes, and processes images to provide feedback in the form of position setpoints for the position loop (dynamic look and move) or actual position feedback (direct servo). Visual servo control reduces the impact of errors from pixel-to-distance conversions and increases the precision and accuracy of existing automation. With visual servo control, you can solve applications that were previously considered unsolvable, such as those that require micrometer or submicrometer alignments. Visual servo implementations, especially those based on the dynamic look-and-move approach, are becoming viable through field-programmable gate array (FPGA) technologies, which provide hardware acceleration for time-critical vision processing tasks and can achieve the response rates required to close the fast control loops used in motion tasks.
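As a final illustration, here is a minimal sketch of a dynamic look-and-move loop: each cycle, the vision measurement yields a new setpoint for the position loop until the measured error falls below tolerance. The camera and controller objects, the gain, the tolerance, and the simulated stage are all hypothetical.

```python
# Illustrative dynamic look-and-move loop: vision measures the remaining
# alignment error each cycle and issues a corrected position setpoint.
# The APIs, gain, and tolerance below are hypothetical.

TOLERANCE_UM = 1.0   # stop once the error is below 1 micrometer
GAIN = 0.7           # fraction of the measured error corrected per cycle

def visual_servo(camera, controller, max_cycles=100) -> bool:
    for _ in range(max_cycles):
        error_um = camera.measure_alignment_error()   # vision feedback
        if abs(error_um) < TOLERANCE_UM:
            return True                               # aligned
        controller.move_relative(GAIN * error_um)     # new setpoint
    return False                                      # did not converge

class SimulatedStage:
    """Stand-in for a real camera and controller pair: reports the
    distance to the target and executes relative moves."""
    def __init__(self, initial_error_um):
        self._error_um = initial_error_um
    def measure_alignment_error(self):
        return self._error_um
    def move_relative(self, delta_um):
        self._error_um -= delta_um   # an ideal move; real stages add error

stage = SimulatedStage(initial_error_um=50.0)
print(visual_servo(stage, stage))    # True after a handful of cycles
```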