How to make the most out of your machine vision system
It is often estimated that around 80 per cent of human learning occurs through vision. When you consider how much information we gain through sight, it’s unsurprising that industrial robots also need to see to be effective. In this article, Nigel Smith, Managing Director of TM Robotics, explains the best practices for robotic machine vision.
Invented in the 1950s and gaining popularity in the 1980s, robotic machine vision isn’t a new phenomenon. That said, advances in vision systems have made them a lucrative tool for manufacturers looking to streamline production and quality checking.
In its simplest form, a machine vision system consists of a camera or sensors, lighting, a processor, software to extract useful information, and the output device - usually a robotic arm.
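The flow of data through such a system can be sketched as a simple capture–process–output pipeline. The sketch below is illustrative only: the function names, the pixel-to-millimetre scaling and the robot command format are all hypothetical stand-ins, not taken from any specific product.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """Position and orientation of a located part, in robot coordinates."""
    x_mm: float
    y_mm: float
    angle_deg: float

def capture_image():
    # Stand-in for the camera/sensor stage: a tiny grayscale image.
    return [[0, 0, 255], [0, 255, 255], [0, 0, 0]]

def extract_detection(image):
    # Stand-in for the vision software: find the brightest pixel and
    # treat it as the part's location.
    _, px, py = max(
        (value, x, y)
        for y, row in enumerate(image)
        for x, value in enumerate(row)
    )
    # A real system would apply a calibrated pixel-to-millimetre mapping;
    # the factor of 10 here is purely illustrative.
    return Detection(x_mm=px * 10.0, y_mm=py * 10.0, angle_deg=0.0)

def send_to_robot(det):
    # Stand-in for the output stage: command the arm to the detected pose.
    return f"MOVE {det.x_mm:.1f} {det.y_mm:.1f} ROT {det.angle_deg:.1f}"

command = send_to_robot(extract_detection(capture_image()))
print(command)
```

Each stage maps onto one component listed above: the camera captures, the processor and software extract useful information, and the output device (the robot arm) acts on it.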
Robots without a machine vision system - blind robots - can complete simple, repeatable actions, but they cannot react to their surroundings the way robots equipped with machine vision can. But what makes a machine vision system most effective?
Lights, camera, action
Flipping through old photographs is a good example of why lighting is key when capturing images - you need to see what, or who, you’re taking a picture of. Lighting is just as fundamental for a machine vision system. Poor image capture results in a loss of information, and for a robot, this could mean an inability to complete the process in question.
Another thing to consider is where to place the camera or sensor. The imaging device can either be positioned on the robot hand itself, known as the end-of-arm tooling (EOAT) configuration, or mounted above the robot in a fixed configuration, where the camera looks down at the workspace.
A fixed configuration is usually the preferred method - the camera has a larger field of view and can take pictures while the robot is moving, reducing cycle time. What’s more, because the camera’s position is always the same, you don’t need to account for slight variations in the robot’s movements.
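Because a fixed camera never moves, it can be calibrated once: a single mapping converts pixel coordinates in the image to millimetre coordinates in the robot’s workspace. A minimal 2D version of that idea is sketched below; the scale, rotation and offset values are hypothetical calibration results, not figures from any real installation.

```python
import math

# Hypothetical calibration values for a fixed, downward-looking camera:
# scale (mm per pixel), rotation between camera and robot axes, and the
# position of the image origin in robot coordinates.
MM_PER_PIXEL = 0.25
ROTATION_DEG = 90.0
OFFSET_X_MM, OFFSET_Y_MM = 150.0, -40.0

def pixel_to_robot(px, py):
    """Map an image pixel to 2D robot workspace coordinates (fixed camera)."""
    theta = math.radians(ROTATION_DEG)
    x = px * MM_PER_PIXEL
    y = py * MM_PER_PIXEL
    # Rigid transform: rotate the camera axes into the robot frame,
    # then translate by the calibrated offset.
    rx = x * math.cos(theta) - y * math.sin(theta) + OFFSET_X_MM
    ry = x * math.sin(theta) + y * math.cos(theta) + OFFSET_Y_MM
    return rx, ry

print(pixel_to_robot(400, 200))
```

With an EOAT camera, by contrast, this mapping changes with every robot pose, which is one reason the fixed setup is simpler to commission.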
However, there are applications where the EOAT configuration is more effective. It is ideal for inspecting complex parts from all angles, or for reaching areas that are difficult to access. This does slow cycle time considerably, as the camera cannot capture images while the robot is in motion. An experienced automation consultant can advise on the most appropriate configuration.
2D, or not 2D?
The required location of the camera or sensors can also depend on what kind of machine vision is deployed - and choosing 2D or 3D vision can depend on your application.
2D vision works well in situations where the colour or texture of the target object is important, and has traditionally been used for inspection tasks like barcode reading or presence detection. Its key limitation is the inability to perceive depth. Any task where shape or position is important, like bin picking, is better served by 3D machine vision.
In 3D vision, multiple cameras are used to create a 3D model of the target object. Shibaura Machine’s TSVision3D system operates in this way and, as a result, doesn’t require complex CAD data to recognise objects.
Using two integrated, high-speed stereo cameras to capture continuous, real-time 3D images, the software can recognise any object that’s positioned in its field of vision. Using this technology, TSVision3D enables automated bin-picking, even for non-uniform products - think bananas or mangos, as an example.
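The principle behind twin-camera 3D vision can be illustrated with the standard stereo relation: a feature that appears shifted (the disparity) between the left and right images lies at depth Z = f × B / d, where f is the focal length, B the distance between the cameras, and d the disparity. This is the textbook pinhole-camera model, not the internals of TSVision3D, and the rig values below are hypothetical.

```python
def depth_from_disparity(focal_length_px, baseline_mm, disparity_px):
    """Depth (mm) of a point from stereo disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("Point must appear shifted between the two views")
    return focal_length_px * baseline_mm / disparity_px

# Hypothetical rig: 1000 px focal length, cameras mounted 60 mm apart.
# A feature shifted by 20 px between the two images lies 3 metres away.
print(depth_from_disparity(1000.0, 60.0, 20.0))  # 3000.0
```

Nearby objects produce large disparities and distant objects small ones, which is how two flat images yield the depth information that bin picking needs.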
When choosing a vision system, it is crucial for manufacturers to consider what kind of objects the robot will be interacting with. For bin-picking systems and unusually shaped products, 3D vision will be essential.
Speak to the experts
Technology often mimics nature, and machine vision systems are no exception. Being able to see means robots can respond to changes in their workspace and target objects of different shapes and sizes, making them more flexible, productive and capable than their blind predecessors.