A major manufacturer of electromechanical products had been operating six separate indexing assembly machines, each producing one of a family of more than 75 different products at about 40 parts per minute. The machines frequently sat idle as demand fluctuated for the various products. Labor expenses were high since each machine required a dedicated operator. The manufacturer considered moving the six machines and their entire production offshore to reduce labor expenses and save floor space at its European facility.

Figure 1. The CNCAssembly® configured using a Transport Engine and seven Pick & Place Engines.

Their problem was that, although their overall production volume was considerable, it was spread across small quantities of many different products.

They brought in Transformix Engineering (Kingston, Ontario, Canada) to address the problem. The solution was based on the Transformix CNCAssembly® system, which functions as a series of “n-axis robots” — an integrated network of distributed, electronically synchronized servo motor-driven assemblers. Key to the system’s success are Cognex Corporation’s (Natick, MA) vision systems.

Transformix set up a series of seven stations. At the first station, a conveyor brings bases in a single lane in a random radial orientation. A Cognex vision system checks the geometry and features of the parts and determines their orientation. These and other inspection operations are performed with the Cognex PatMax® pattern recognition algorithm as well as blob detection and histogram functions. A robot then picks up two bases that the vision inspection has passed, orients them correctly based on the reported orientation, and places them on a servo-controlled pallet. The pallet moves to an inspection location where another Cognex vision system confirms the proper placement of the bases.
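
Cognex does not publish PatMax’s internals, but the general idea of reporting a randomly oriented part’s position and angle to downstream tooling can be illustrated with a short OpenCV sketch. The threshold value and function names below are illustrative assumptions, not the production inspection:

```python
import cv2  # OpenCV 4.x

def find_part_orientation(gray_image, threshold=128):
    """Locate the largest bright region and report its center and rotation angle."""
    _, binary = cv2.threshold(gray_image, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # no part in the field of view
    part = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(part)  # angle reported in degrees
    return {"center": (cx, cy), "size": (w, h), "angle_deg": angle}
```

A result of this kind is what lets the robot rotate its gripper to match the part before picking it.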

Figure 2. Station 1 — RSM Disc, Vision Inspection and a Pick & Place Engine with servo-based tooling. The yellow Cognex In-Sight 7400 can be seen in the upper right of the image.

At station 2, spacers are added. The vision system inspects their type, outside diameter (OD), and inside diameter (ID), and nonconforming spacers are removed before they enter the machine. If the subassembly on the pallet is incomplete because a bad part was rejected, the subsequent engines will not try to load or assemble parts onto it. The robot at this station picks the good spacers and assembles them to the bases on the pallets.
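
The article does not detail the reject-tracking logic, but the idea of skipping downstream operations on an incomplete subassembly can be sketched with a simple per-pallet status record. The names and fields below are hypothetical, not Transformix’s data model:

```python
from dataclasses import dataclass, field

@dataclass
class Pallet:
    """Build status for one pallet (hypothetical structure for illustration)."""
    pallet_id: int
    complete: bool = True                     # set False as soon as any station rejects a part
    history: list = field(default_factory=list)

    def record(self, station: str, passed: bool):
        self.history.append((station, passed))
        if not passed:
            self.complete = False             # downstream engines will skip this subassembly

def should_process(pallet: Pallet) -> bool:
    """A downstream engine only loads parts onto pallets that are still complete."""
    return pallet.complete
```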

At station 3, a mechanical lever is loaded onto the post and another vision system is used to inspect the subassemblies to verify that each part has a lever and spacer.

The robot at station 4 orients and assembles plastic spacer-plates onto the bases. One more vision system inspects the subassemblies.

Covers enter in two lanes at station 5 in a known orientation, where a vision inspection confirms their identity. Bad or unidentifiable parts are ignored by the robot and rejected. The robot then picks two covers, tilts them to the proper orientation, and presses them down onto the subassemblies.

At station 6, a measured amount of Loctite is applied to the post and a nut is threaded on until it reaches a specified position and torque value. A vision system inspects the cover-to-body assembly and nut-to-cover heights to ensure the cover has been properly assembled.

Figure 3. Station 4 — Pick & Place Engine and servo-based end-of-arm tooling for part orientation and assembly.

Finally, a robot at station 7 picks up each assembly that has passed all the vision inspections and places it onto an unload conveyor. The same robot then clears the mover of any remaining bad assemblies. Parts that have failed earlier tests remain on the mover until this station but no further operations are performed on them.

For a typical assembly step, the camera is triggered at a particular moment in the machine cycle, the shutter opens for a programmed amount of time, and the light is driven at a programmed intensity in order to obtain a single bright, in-focus image. A pass/fail decision is based on the analysis of this image using a set of analysis tools.
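
In outline, that trigger sequence might be organized as below. The Camera and StrobeLight classes are stand-ins for vendor-specific hardware drivers, not the actual interfaces used on this machine:

```python
class StrobeLight:
    """Stand-in for an LED strobe driver (hypothetical; real drivers are vendor-specific)."""
    def fire(self, intensity, duration_us):
        print(f"strobe at {intensity:.0%} for {duration_us} us")

class Camera:
    """Stand-in for a hardware-triggered machine-vision camera."""
    def expose(self, duration_us):
        print(f"shutter open for {duration_us} us")
        return "frame"

def acquire(camera, strobe, exposure_us=500, intensity=1.0):
    """Fire the light and open the shutter together so a moving part is captured bright and unblurred."""
    strobe.fire(intensity, exposure_us)
    return camera.expose(exposure_us)

frame = acquire(Camera(), StrobeLight())   # one image per machine-cycle trigger
```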

The new system has increased throughput from the previous 40 parts per minute to 300. And changing over to a different product requires only five minutes of setup time per station.

System Technology

The Cognex PatMax® algorithm is used to train on an image: you present an object, hit “train,” and the camera samples it. Features such as angle, size, and shading are automatically analyzed and used to create a master image, which becomes a template against which subsequent objects are compared. In addition to training against the master image, dimensional data, including tolerances based on actual operating conditions such as temperature range, are programmed in to add more detail.
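
PatMax itself is a proprietary geometric pattern-matching tool that tolerates rotation, scale, and shading changes. As a much simpler, translation-only analogy, the train-then-compare idea can be sketched with OpenCV’s normalized template matching; the region of interest and score threshold below are assumptions:

```python
import cv2

def train_template(master_image, roi):
    """'Train' by cropping the region of interest from a known-good master image."""
    x, y, w, h = roi
    return master_image[y:y + h, x:x + w].copy()

def match_against_template(image, template, min_score=0.8):
    """Compare a new image with the trained template; returns (passed, score, location)."""
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, location = cv2.minMaxLoc(result)   # best match score and its position
    return score >= min_score, float(score), location
```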

A unique feature of this rule-based system is that these comparisons can be made regardless of the orientation of the part on the incoming feed. “We don’t ask the feed system to orient a part, we accept the parts in a random orientation and we’ll figure out what that orientation is,” said Martin Smith, head of electrical engineering at Transformix. Jeff Walsh, VP of Sales and Business Development, added, “Traditionally a feed system would have components that align the parts with mechanical adjustments as they move down the conveyor, or track, or around a bowl so that they arrive at the end of the track oriented.”

The information from the Cognex system about the orientation in which the part has arrived goes to a PLC, which determines what the servo motors must do to pick it up. The system is designed so that the part moves down the track relatively smoothly and as quickly as possible. The vision system, in combination with the Transformix software, determines the part’s arrival orientation; the robot then picks the part up, corrects its orientation, and either loads it or installs it, as needed.
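
The correction the servo must apply is essentially the difference between the detected and target orientations, taken in the shortest direction. A minimal sketch of that calculation (the angle convention is an assumption, not the PLC’s actual logic):

```python
def rotation_correction(detected_deg: float, target_deg: float = 0.0) -> float:
    """Shortest signed rotation (degrees) the end-of-arm servo must apply
    to bring a part from its detected orientation to the target orientation."""
    delta = (target_deg - detected_deg) % 360.0
    if delta > 180.0:
        delta -= 360.0     # rotate the other way if it is shorter
    return delta

# e.g. a part detected at 310 degrees needs a +50 degree correction
assert rotation_correction(310.0) == 50.0
```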

Figure 4. Station 5 — RSM Disc, Vision Inspection and a Pick & Place Engine with servo-based tooling for part manipulation and insertion. The yellow Cognex vision system is seen on the right side of the image.

The simpler the feeding system is, the more reliable it is, and there are fewer problems with parts jamming in the mechanisms that try to orient them. The goal is to take the burden off the feeding system and shift it to the vision system.

Traditional feed systems are designed for a specific product. Taking the burden of orientation away from the feed system therefore allows much more flexibility: a large variety of parts can be handled by the same feed system, which does not have to be changed for each product.

PatMax is augmented with the following standard analysis tools.

Blob is a tool that counts pixels as either dark or white. To quantify an attribute of a part, such as its surface finish or size, blob counts the white pixels, which represent the size or a particular shape; black pixels are simply the absence of white.

While blob deals only in extremes, the histogram tool distinguishes shades of gray and counts the number of pixels at each intensity. Because each intensity band can correspond to a particular feature, the histogram provides much finer resolution for a complicated image.
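
As a rough illustration of the difference between the two tools, a blob-style count and a gray-level histogram can both be reproduced with OpenCV. The threshold and bin count are assumptions, not the settings used on this machine:

```python
import cv2

def blob_white_pixel_count(gray, threshold=128):
    """Blob-style check: count the pixels above a threshold (the 'white' pixels)."""
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    return int(cv2.countNonZero(binary))

def gray_histogram(gray, bins=256):
    """Histogram-style check: number of pixels at each gray intensity."""
    hist = cv2.calcHist([gray], [0], None, [bins], [0, 256])
    return hist.ravel()   # hist[i] = pixel count in intensity bin i
```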

System Considerations

Figure 5. Products being assembled and tested at 300 parts per minute.

Mechanical Design

Every time you take a picture, you must decide how much of the frame the part should occupy; that choice determines your resolution. The mechanical design must account for these factors. It is also crucial to properly integrate the lighting. The light could be mounted off to the side, behind, or underneath the camera. Sometimes it might even form a ring that the camera looks through.

And the camera mount must be rigid. You don’t want the camera to shake: exposure time is finite, so if the part is moving while the aperture is open, the image will be blurred. If the camera is vibrating, it can also go out of focus, especially when zoomed in for close detail, and the lens alignment can shift.
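
A quick back-of-the-envelope calculation shows how field of view, part speed, and exposure time interact. All numbers below are illustrative assumptions except the exposure range, which is quoted later in the article:

```python
sensor_width_px = 1600      # horizontal pixels of the camera (assumed)
field_of_view_mm = 40.0     # width of the imaged scene (assumed)
part_speed_mm_s = 200.0     # part speed past the camera (assumed)
exposure_s = 500e-6         # 500 microseconds, within the 200 us to 1 ms range

mm_per_pixel = field_of_view_mm / sensor_width_px     # 0.025 mm per pixel
blur_mm = part_speed_mm_s * exposure_s                # distance traveled during exposure
blur_px = blur_mm / mm_per_pixel                      # ~4 pixels of smear

print(f"{mm_per_pixel:.3f} mm/pixel, motion blur ~ {blur_px:.1f} pixels")
```

Shortening the exposure, strobing the light, or slowing the part during capture all reduce that smear.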

Lighting

LEDs are an ideal light source since they are robust, consistent over time, and produce a very clear single-color light. They can also be overdriven to produce a high intensity light for a very short time. This is important for achieving a high-speed system.

Lensing

The lens must be very high quality; if it isn’t made properly, you get a lot of distortion. The grinding and surface quality of the lenses and mirrors must be controlled, since any flaws distort the image, and you then lose the linear relationship between pixel counts and physical dimensions.
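
When some residual distortion is unavoidable, it can also be corrected in software before measuring. A minimal sketch using OpenCV’s lens-distortion model follows; the matrix and coefficient values are placeholders, not measured values for this system:

```python
import cv2
import numpy as np

# camera_matrix and dist_coeffs would come from a one-time cv2.calibrateCamera()
# run against a checkerboard target; the values below are placeholders.
camera_matrix = np.array([[1200.0,    0.0, 800.0],
                          [   0.0, 1200.0, 600.0],
                          [   0.0,    0.0,   1.0]])
dist_coeffs = np.array([-0.15, 0.05, 0.0, 0.0, 0.0])   # radial/tangential terms

def undistort(image):
    """Correct lens distortion so pixel distances map linearly to real dimensions."""
    return cv2.undistort(image, camera_matrix, dist_coeffs)
```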

Speed

Typical exposure times range from 200 microseconds to 1 millisecond. The time for processing each image ranges from 20 to 100 ms. This is critical, since the camera will not capture a new image while it is processing data from the previous one.

A complete assembly step can typically be done in about 300 ms. To achieve that, everything happens in parallel: taking the picture, getting the information, making a decision, and sending the information to a servo motor on another robot.
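
A rough way to picture that overlap is a thread-pool sketch in which image analysis and robot motion run concurrently, and only the final placement waits for the pass/fail decision. The timings below are simulated placeholders within the ranges quoted above, not measurements from the machine:

```python
import concurrent.futures as cf
import time

def capture():
    time.sleep(0.001)           # ~1 ms exposure plus readout (simulated)
    return "image"

def analyze(image):
    time.sleep(0.060)           # 20-100 ms of vision processing (simulated)
    return "pass"

def approach_move():
    time.sleep(0.150)           # servo approach motion overlapped with the analysis

with cf.ThreadPoolExecutor() as pool:
    image = capture()
    decision_future = pool.submit(analyze, image)   # vision processing in parallel...
    motion_future = pool.submit(approach_move)      # ...with the robot's approach move
    decision = decision_future.result()
    motion_future.result()
    # the final pick/place is commanded only after 'decision' arrives,
    # keeping the whole step within roughly a 300 ms budget
```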

Conclusion

Because the manufacturer produced a large number of different products, its previous automated assembly operation was inefficient: several separate lines had to run in parallel, and changeover between products was slow. Transformix designed and built a new system based on vision-guided robots that is not only fast but also flexible enough for changeover to be done quickly.

This article was written by Ed Brown, Associate Editor, Tech Briefs Media Group.