
Industrial robots are big, expensive, inflexible and dangerous to human workers, right? Wrong: the next generation of collaborative robots, or cobots, can operate alongside humans and doesn’t need to be caged or fenced off.

[Image: The ABB YuMi cobot]

Image credit: ABB

The Caged Robot

The development of cobots has seen robots move from the realms of science fiction to practical reality. Until recently, the powerful and intelligent humanoid machine working alongside humans existed only in fiction. Early industrial robot arms were relatively unsophisticated, with basic sensors providing feedback on joint positions to the control computer. ‘Safety sensors’, in the form of limit switches, were provided only to stop the arm damaging itself should the controller command an out-of-range movement. These machines are big, bolted to the floor and often wield heavy welding tools with great speed and dexterity. It’s difficult to imagine anything less benign or collaborative, so humans are kept at a safe distance by physical barriers while the arm is operating. Such awe-inspiring machines have been a feature of large-scale car assembly plants since the 1980s, but their high capital cost and inflexibility mean few companies can afford them. So how do we remove the fences and turn a monster into a benign and flexible cobot?

Remove the fences but react if people get too near

A simple PIR detector can be used to sense someone approaching too closely and shut down the robot. It’s called a Safety-Rated Monitored Stop and is easy to implement, but highly disruptive as the generous safety-margin required would inevitably lead to frequent shutdowns. The factory floor may look a lot less intimidating without the cages, but it doesn’t make financial sense if workers still need regular access to the area.
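
To make the idea concrete, here is a minimal sketch of that monitored-stop logic in Python, assuming a PIR detector readable as a boolean and invented protective_stop/resume callbacks; a real system would of course use safety-rated hardware rather than a polling loop in software.

```python
import time

# Minimal Safety-Rated Monitored Stop sketch. pir_triggered(), protective_stop and
# resume are invented stand-ins for illustration only.

POLL_PERIOD_S = 0.05          # poll the detector at 20 Hz

def pir_triggered() -> bool:
    return False              # placeholder: replace with a real GPIO read

def monitored_stop_loop(protective_stop, resume):
    stopped = False
    while True:
        if pir_triggered() and not stopped:
            protective_stop()     # someone entered the zone: halt immediately
            stopped = True
        elif stopped and not pir_triggered():
            resume()              # zone is clear again: allow motion to continue
            stopped = False
        time.sleep(POLL_PERIOD_S)
```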

A better method is to employ ultrasonic (sonar) or infrared (lidar) range sensors around the robot providing a more measured solution. A narrower ‘switch-off’ zone around the robot is now surrounded by a ‘slow-down’ zone. This ensures that a person encroaching on the safety-margin reduces the machine’s speed and brings it to a standstill only if they continue their approach. Naturally they will have to ignore warning sounds and lights while doing so. This technique is known as Speed and Separation Monitoring. A Cobot relying on SSM for safe operation is demonstrated in this video:
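
The zoned behaviour is easy to picture in code. The sketch below assumes a range sensor reporting the nearest person’s distance in metres and a controller that accepts a speed-override factor between 0.0 and 1.0; the zone radii are illustrative, not values from any standard.

```python
# Minimal Speed and Separation Monitoring (SSM) sketch, not tied to any vendor API.

STOP_ZONE_M = 0.5       # inside this radius the robot must stand still
SLOWDOWN_ZONE_M = 1.5   # between the two radii, speed ramps down linearly

def speed_override(nearest_person_m: float) -> float:
    if nearest_person_m <= STOP_ZONE_M:
        return 0.0                                   # switch-off zone: full stop
    if nearest_person_m >= SLOWDOWN_ZONE_M:
        return 1.0                                   # clear: full programmed speed
    # Linear ramp inside the slow-down zone
    return (nearest_person_m - STOP_ZONE_M) / (SLOWDOWN_ZONE_M - STOP_ZONE_M)

# Example: a person 1.0 m away limits the robot to half speed
print(speed_override(1.0))   # -> 0.5
```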

Notice how the ‘collaboration’ consists of human actions alternating with robot movements: the robot will not move until the human is out of the way. This method of operation is quite satisfactory for many tasks, but because of the ‘virtual cage’, moving the robot to another location involves unbolting it from the bench together with the sensor system. Both will need to be bolted down again and re-aligned at the new site. For maximum workplace flexibility a cobot should not require a separate safety sensor system, with all the spanner-work and realignment that entails. It is possible to retrofit SSM safety to a legacy robotic arm, and at least one company is working on the necessary vision system to turn an old robot into a cobot.

Remove the ‘Do not Touch’ restriction

The word collaboration does imply that robot and human must get close enough to interact and each must be sure the other won’t ‘hurt’ them. In the collaborative environment, physical contact between human and machine may be inevitable and indeed desirable, enabling the robot to learn from experience. An obvious way to minimise bruising and breakages would be to make the robot arm and its end-effector (gripper) out of a soft, pliable material. A lot of research has gone into ‘Soft Robotics’, divided into two areas: complete robots made from a soft material, such as Snakebots, and actuator/grippers able to pick up odd-shaped or fragile objects. For most purposes in the workshop or on the factory floor, bendy robots do not offer enough precision of movement or grip. Other ways of ensuring human colleague safety and comfort must be found.

The Collaborative Robot

So far, we have established two essential features that make a robot a Cobot: it must perform a task with a reasonable degree of precision and repeatability, and while doing so, sense and react safely to unexpected intrusions into its envelope of operation. To these can be added some very desirable capabilities: easily programmed by a non-expert, quickly moved around and re-positioned for new tasks, and force-limiting control to avoid damage to itself and the workpiece. The next video illustrates what might be termed a ‘desktop’ robot assistant that fulfils these requirements and can truly be termed a Cobot:

The YuMi is not unique, but it does illustrate the current thinking behind cobot design. Visually the new cobots look completely different from bench-sized robot arms of the past.

  • Joints are designed to eliminate ‘pinch-points’ where a human hand could be crushed.
  • Arm components are spaced away from each other so there is no possibility of a scissor action on a human limb.
  • The arms are clean: all gears, shafts and cables completely enclosed in a smooth, rounded elbow-friendly padded casing.
  • Speed of movement is limited for compatibility with human reaction times. This is only applicable if contact-sensing is available when human and cobot operate in the same space.
  • YuMi also features contact-sensing, forcing a stop should the arm hit an obstruction; a rough sketch of this idea follows the list.
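
One way to picture that contact-sensing behaviour is as a comparison between the joint torques a dynamic model expects and the torques actually measured: a large discrepancy suggests a collision and triggers a stop. The sketch below is purely illustrative, with an invented threshold, and is not how ABB actually implements it.

```python
# Illustrative contact-detection sketch: compare measured joint torques with the
# torques the motion model expects and stop if the discrepancy exceeds a threshold.
# The margin and interfaces are hypothetical.

TORQUE_MARGIN_NM = 2.0   # allowed model error before we call it a collision

def collision_detected(measured_nm: list[float], expected_nm: list[float]) -> bool:
    return any(abs(m - e) > TORQUE_MARGIN_NM
               for m, e in zip(measured_nm, expected_nm))

# Example: joint 2 sees 5 Nm more torque than the dynamic model predicts
print(collision_detected([1.0, 9.0, 0.5], [1.2, 4.0, 0.4]))   # -> True
```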

Vision Systems

If you trawl through the many manufacturers’ videos of their collaborative robot products you may have noticed that a) not much ‘collaboration’ is in evidence and b) the tasks largely consist of moving an object from one specific place to another. My idea of a useful cobot would be one that could perform a similar task to an operating-theatre nurse, handing instruments to the surgeon when asked, or more likely, doing it without being asked, based on training and experience. In order to achieve this goal, the cobot needs to have ‘eyes’ and be able to interpret what it sees. Cobots such as YuMi and Rethink Robotics’ Sawyer have cameras and can be trained to recognise objects. It’s not very sophisticated, but at least it means that objects needing to be relocated (Pick ‘n’ Place) don’t have to be precisely positioned at the pick-up point, and the robot can finely adjust the put-down point by observing target reference points. Watch Sawyer using its Robot Positioning System with Landmarks in this video:
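
The put-down correction using landmarks boils down to measuring how far the reference marker has moved since teach time and shifting the taught target by the same amount. Below is a hedged 2D sketch of that idea; a real system would detect the landmark with something like a fiducial-marker library and would correct for rotation as well as translation.

```python
import numpy as np

# Sketch of landmark-based put-down correction in 2D (real systems use full 6-DOF
# poses). The landmark positions are assumed to come from a vision system.

def corrected_target(taught_target, landmark_at_teach, landmark_now):
    """Shift the taught put-down point by however far the landmark has moved."""
    offset = np.asarray(landmark_now) - np.asarray(landmark_at_teach)
    return np.asarray(taught_target) + offset

# The tray (and its landmark) has been nudged 30 mm in x and 10 mm in y since teaching
print(corrected_target([400.0, 250.0], [350.0, 200.0], [380.0, 210.0]))  # [430. 260.]
```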

As you can see, it can lock on to a tray of components in a learning mode, but should the tray be moved accidentally, it can’t realign itself without manual intervention. My ideal robot assistant would need to recognise tools randomly placed on a workbench, pick them up and ‘hand’ them over with the correct orientation, without stabbing me in the process. A tall order. Fortunately, object recognition and tracking systems have seen massive improvements in performance recently, thanks largely to the continuing development of autonomous vehicles; so expect more capable cobots soon.

Learning versus Programming

There are three main ways of training a robot to perform a particular sequence of movements: Pendant, Hand-Guiding and Programming. Pendant is only applicable to those big caged robots where an operator uses a pendant control box outside the fence to drive the robot manually through its routine and save the ‘map’ in memory. The machine is 'taught' what to do by its human operator.
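
In essence the pendant method builds an ordered list of taught poses which the controller later steps through. The following sketch uses invented interfaces purely to show the shape of that record-and-replay idea; no real teach-pendant API looks like this.

```python
# Conceptual sketch of pendant teaching: the operator jogs the arm, then presses
# "record" to append the current joint angles to the routine.

routine = []   # the saved 'map': an ordered list of joint-angle tuples

def record_waypoint(current_joint_angles_deg):
    routine.append(tuple(current_joint_angles_deg))

def replay(move_to):
    for waypoint in routine:
        move_to(waypoint)   # the controller interpolates between taught points

record_waypoint([0, -30, 45, 0, 60, 0])
record_waypoint([10, -25, 50, 0, 55, 5])
replay(lambda wp: print("moving to", wp))
```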

Hand-Guiding uses the same principle of teaching as Pendant, except that it’s literally hands-on. The cobot is put in Learn mode whereby motors are deactivated or their power reduced to a level where they can just hold position, and then the arm is moved by hand through the desired sequence. The motion sensors on each joint motor shaft, used to provide closed loop control in normal operation, instead record the actions for subsequent ‘replay’. See it being demonstrated in this video at an exhibition:
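
Hand-guiding is the same record-and-replay principle, except the trajectory is sampled continuously from the joint encoders while the arm is pushed around. Again, read_joint_angles and command_joint_angles below are hypothetical stand-ins for whatever interface a real cobot exposes.

```python
import time

# Hand-guiding sketch: in 'learn' mode the joint encoders are sampled at a fixed rate;
# replay simply feeds the recorded samples back as position setpoints.

SAMPLE_PERIOD_S = 0.02   # 50 Hz

def record(read_joint_angles, duration_s):
    trajectory = []
    t_end = time.time() + duration_s
    while time.time() < t_end:
        trajectory.append(read_joint_angles())   # capture the hand-guided motion
        time.sleep(SAMPLE_PERIOD_S)
    return trajectory

def replay(trajectory, command_joint_angles):
    for sample in trajectory:
        command_joint_angles(sample)              # play the motion back
        time.sleep(SAMPLE_PERIOD_S)
```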

Many robots are still programmed with code generated off-line on a computer. Forth is a computer language developed many years ago for the specific application of real-time robot control. Its ability to produce fast and compact code very quickly means it still gets used today: the ST Robotics R-series industrial robot arms are programmed in RoboForth, for example. Modern hackers have discovered its charms too. Even space probes such as the Rosetta comet chaser and its lander Philae were programmed in Forth. I’m mentioning Forth a lot because of personal interest: I’ve been creating a version for embedded applications based on the Microchip dsPIC microcontroller. FORTHdsPIC has been the subject of several DesignSpark blog posts in recent years. The drawback with programming is the time and trained staff needed to create or modify code every time the cobot needs to be redeployed, and that rather defeats the benefit to a small company of owning one. For the time being, hand-guidance is likely to be the ‘training’ method of choice.

In the future it’s likely that Artificial Intelligence will become a dominant feature of robot control. In other words, robots, particularly cobots, will learn for themselves how to achieve a specified goal. This concept has led to ‘robots replacing human workers’ being the subject of furious debate recently. Just as fully driverless cars are a long way off, so are robots with even basic human skills. My robot assistant at the workbench is unlikely to appear any time soon; not until I’m convinced it won’t mistake my hand for the workpiece and attack it with a socket wrench.

Work in Progress

A great deal of research and development is underway to achieve that goal of an ideal cobot. Here is a selection of recent papers on the subject.

Learning Robot Objectives from Physical Human Interaction. This idea takes hand-guidance a step further: if the cobot makes a wrong move while operating, the human operator can immediately physically correct it. The robot will thus ‘learn from its mistakes’.

Custom Soft Robotic Gripper Sensor Skins for Haptic Object Visualization. Researchers are developing a touch sensor system for gripper ‘fingers’. This sensory feedback allows the robot to make a 3D visualisation of the object it is grasping.

Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics. This paper addresses the problem of object identification for the purposes of selecting the optimal gripper action. A vision system uses a Convolutional Neural Network with an image database to recognise objects with random orientations.
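
As a toy illustration of that grasp-ranking idea (and emphatically not the actual Dex-Net 2.0 network), the sketch below scores a batch of candidate grasp patches with a small, untrained CNN and picks the highest-scoring one.

```python
import torch
import torch.nn as nn

# Each candidate grasp is represented by a small depth-image crop centred on the
# grasp point; a CNN assigns it a quality score and the best-scoring grasp wins.

class TinyGraspQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.score = nn.Linear(8, 1)

    def forward(self, crops):                 # crops: (N, 1, 32, 32) depth patches
        return self.score(self.features(crops).flatten(1)).squeeze(1)

net = TinyGraspQualityNet()                   # untrained, random weights: demo only
candidate_crops = torch.rand(16, 1, 32, 32)   # 16 hypothetical candidate grasps
best = torch.argmax(net(candidate_crops))
print(f"execute candidate grasp #{int(best)}")
```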

International Standards/Specifications: ISO 10218 and ISO/TS 15066

Now for the boring stuff. The fact is, if you want to create a cobot, or add one to your workforce, then it must comply with these international safety standards. ISO 10218 was written to cover the big caged robots of the time, but it does contain general references to safety which also apply to the new generation of cobots. The newer ISO/TS 15066 is an attempt to provide a safety specification for cobots. For example, the specification indicates that if contact between robots and humans is allowed, and incidental contact does occur, then that contact shall not result in pain or injury. The levels of pain that can be tolerated by different parts of the human body from contact with a cobot were researched and are provided in the specification. I can’t imagine how they obtained those numbers. It’s called ‘Pain Onset Level Data’ and must be taken into account in any design. Welcome to the world of un-caged robots!
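
In practice, a risk assessment ends up checking worst-case contact forces or pressures for each body region against the tabulated limits. The sketch below shows only the shape of that check; the numbers are invented placeholders, not the figures published in ISO/TS 15066.

```python
# Illustration only: the limits below are invented placeholders, NOT the actual
# ISO/TS 15066 'Pain Onset Level' figures.

force_limit_n = {        # hypothetical per-body-region contact force limits
    "hand": 100.0,
    "forearm": 120.0,
    "skull": 50.0,
}

def contact_acceptable(body_region: str, worst_case_force_n: float) -> bool:
    return worst_case_force_n <= force_limit_n[body_region]

print(contact_acceptable("hand", 85.0))    # -> True with these placeholder numbers
print(contact_acceptable("skull", 85.0))   # -> False
```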

Finally

We are nowhere near a C-3PO-style robot assistant yet and, despite all the talk of collaboration, most of the current crop of cobots seem to spend their lives devoid of human company, just moving things from one place to another. In the meantime, it’s visual style that sets them apart from each other: many have gone for human-like ‘muscular’ arms, but I prefer Sawyer, looking like animated plumbing with blinking eyes that may one day have real intelligence behind them.

If you're stuck for something to do, follow my posts on Twitter. I link to interesting articles on new electronics and related technologies, retweeting posts I spot about robots, space exploration and other issues.

Engineer, PhD, lecturer, freelance technical writer, blogger & tweeter interested in robots, AI, planetary explorers and all things electronic. STEM ambassador. Designed, built and programmed my first microcomputer in 1976. Still learning, still building, still coding today.