Robot Achieves Autonomous Obstacle Detection on High-Voltage Power Lines
In a major advancement for robotic inspection of critical energy infrastructure, a research team has developed a new multi-sensor system enabling a high-voltage transmission line inspection robot to autonomously detect, locate, and identify obstacles with high precision and real-time performance. The breakthrough, detailed in a study published in the Chinese Journal of Mechanical Engineering, introduces a layered sensor integration architecture that allows the robot to adapt its perception strategy based on its proximity to potential obstacles, significantly improving reliability in complex outdoor environments.
The work addresses a persistent challenge in the automation of power grid maintenance: how to enable robots to safely and efficiently navigate overhead transmission lines, which are cluttered with hardware such as dampers, clamps, and splices—each a potential obstacle requiring a unique crossing strategy. Traditional inspection methods rely heavily on human technicians working in dangerous, high-altitude conditions, making automation not just a matter of efficiency but of worker safety. While robotic solutions have been explored for decades, achieving robust autonomy in the face of dynamic environmental factors like wind-induced line oscillation, variable lighting, and unpredictable mechanical disturbances has remained elusive.
The research team, led by Ding Ping from Yunnan Power Grid Co., Ltd.’s Chuxiong Power Supply Bureau, in collaboration with Li Zhenhui from Zhejiang University’s Advanced Technology Institute and Liu Aihua from the State Key Laboratory of Robotics at the Shenyang Institute of Automation, Chinese Academy of Sciences, has proposed a comprehensive solution that moves beyond single-sensor approaches. Their method, grounded in a multi-sensor fusion framework, systematically breaks down the obstacle interaction process into four distinct operational phases: obstacle-free travel, obstacle-approach travel, close-range obstacle localization, and obstacle recognition. Each phase leverages a different combination of sensors and algorithms, tailored to the specific demands of accuracy, range, and speed required at that stage.
This phased approach represents a significant shift from previous attempts, which often tried to apply a one-size-fits-all detection method throughout the robot’s journey. The team’s insight was that the requirements for environmental perception vary dramatically depending on the robot’s operational context. During high-speed, obstacle-free travel, the primary need is for wide-area surveillance with high real-time performance to quickly detect any potential hazards. At this stage, the robot can tolerate lower precision in distance measurement, as its goal is simply to identify the presence of an obstacle and initiate a slowdown. As the robot approaches a detected obstacle, the priority shifts to accurate distance estimation to ensure a smooth and stable deceleration. Finally, during the close-range interaction and identification phase, the focus is on high-precision localization and detailed feature extraction to correctly classify the obstacle type, allowing the robot to plan and execute the appropriate crossing maneuver. Real-time performance becomes less critical during this final phase, as the robot is operating at low speed.
The core of the system is a sophisticated sensor integration architecture that coordinates internal and external sensing modalities. Internal sensors, including encoders, current sensors, and limit switches, monitor the robot’s own state—its speed, motor load, and joint positions. External sensors, primarily a pan-tilt camera mounted at the front of the robot and two pinhole cameras positioned beneath its arms, provide environmental data. The pan-tilt camera serves as the robot’s long-range “eyes,” scanning the line ahead, while the arm-mounted pinhole cameras provide high-resolution, close-up views of obstacles once the robot is in position.
The obstacle detection process begins with the pan-tilt camera. During high-speed travel, the system continuously analyzes the video feed, using image processing techniques to segment the power line from the background based on grayscale differences. The algorithm then analyzes the continuity and shape of the conductor’s centerline. Deviations from a smooth, continuous line—such as bulges, protrusions, or discontinuities—trigger the identification of a “suspicious region,” which is flagged as a potential obstacle. This initial detection is designed for speed, allowing the robot to maintain a high cruising speed of 15 meters per minute while remaining vigilant.
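The column-by-column continuity check described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a grayscale frame in which the conductor appears darker than the background, and the threshold values are hypothetical.

```python
import numpy as np

def find_suspicious_regions(gray, line_thresh=80, width_jump=2.5):
    """Scan a grayscale frame column by column: segment the dark conductor
    from the brighter background, trace its centerline, and flag columns
    where the apparent line width jumps (a bulge or fitting on the line)."""
    h, w = gray.shape
    widths = np.zeros(w)
    centers = np.full(w, np.nan)
    for x in range(w):
        rows = np.where(gray[:, x] < line_thresh)[0]  # dark pixels = conductor
        if rows.size:
            widths[x] = rows.size          # apparent conductor thickness
            centers[x] = rows.mean()       # centerline position in this column
    base = np.median(widths[widths > 0])   # nominal thickness of the bare line
    suspicious = np.where(widths > width_jump * base)[0]
    return suspicious, centers
```

Columns whose apparent thickness is well above the bare-conductor median mark the suspicious region; a break in `centers` (a run of NaNs) would flag a discontinuity in the same pass.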
Once a suspicious region is identified, the robot transitions to the obstacle-approach phase. Here, the system employs a monocular vision ranging technique using the same pan-tilt camera. By analyzing the apparent movement of the obstacle in the camera’s field of view as the robot moves forward a known distance, the system can triangulate the horizontal and vertical distance to the obstacle. This method leverages the fixed geometric relationship between the camera and the power line, assuming the camera is aligned directly beneath the conductor. As the robot closes the distance, it uses this ranging data to dynamically adjust its speed, slowing down in a controlled manner. For example, when the obstacle is 480 centimeters away, the robot begins to decelerate; at 323 centimeters, its speed is reduced to 9.7 meters per minute; and at 255 centimeters, it slows further to 7.6 meters per minute. This graduated deceleration ensures a smooth and stable approach, minimizing mechanical shock and preventing the robot from overshooting or colliding with the obstacle at high speed.
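Under the fixed-geometry assumption, the ranging step reduces to similar triangles: measure the obstacle's image-plane elevation before and after the robot advances a known distance, and the two bearing angles pin down both the range and the height offset. A minimal sketch with a hypothetical pinhole-camera model (`focal_px` is an assumed focal length in pixels; none of this is the paper's calibration):

```python
def range_from_two_views(y1_px, y2_px, focal_px, travel):
    """Similar-triangles ranging from one camera and a known forward move.
    y1_px, y2_px: image-plane offsets of the obstacle (pixels above the
    optical axis) before and after the robot advances `travel` along the
    line.  focal_px is an assumed pinhole focal length in pixels."""
    t1 = y1_px / focal_px            # tan(bearing) at the first viewpoint
    t2 = y2_px / focal_px            # tan(bearing) after moving forward
    if t2 <= t1:
        raise ValueError("obstacle should subtend a larger angle after advancing")
    dist = travel * t2 / (t2 - t1)   # horizontal distance from first viewpoint
    height = dist * t1               # vertical offset of the obstacle
    return dist, height
```

The result is in the same units as `travel`, so waypoints such as the 480 cm deceleration threshold can be checked directly against the estimate.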
When the robot reaches a predetermined close distance—typically less than 100 centimeters—it enters the close-range localization phase. This is where the limitations of vision-based methods become apparent. At very close range, visual occlusion, lighting glare, and the complex 3D geometry of line hardware can make accurate visual localization difficult. To overcome this, the team’s system switches to a fusion of tactile and proprioceptive sensors: contact switches, motor encoders, and current sensors.
The contact switches, mounted on the ends of the robot’s driving wheels, act as the first physical indicator of an obstacle. When the wheel makes contact, the switch is triggered. However, the researchers recognized that a simple contact signal is not sufficient. Power lines are not perfectly straight; they can have kinks, sag points, or temporary deformations caused by wind or thermal expansion. A contact event could be a true obstacle, or it could be a false positive caused by the robot navigating a natural curve in the line. To distinguish between these scenarios, the system implements a multi-sensor confirmation protocol.
The algorithm monitors not just the contact signal, but also the behavior of the driving motor. When the robot encounters a solid obstacle, the wheel’s rotation is impeded, causing a measurable drop in motor speed (as detected by the encoder) and a corresponding increase in motor current (as detected by the current sensor) due to the increased load. The system continuously samples these three data streams—contact status, speed change, and current change—over a short time window. It then applies a logical fusion rule: a confirmed obstacle detection only occurs if the contact switch is triggered, the motor speed decreases significantly below a reference threshold, and the motor current increases significantly above a reference threshold. This triple-confirmation mechanism effectively filters out false alarms caused by line irregularities or sensor noise, ensuring that the robot only stops for a true, impassable obstacle.
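The triple-confirmation rule can be expressed as a logical fusion over a short sampling window. A sketch under assumed thresholds: the 30% speed-drop and current-rise margins are illustrative, not the paper's reference values.

```python
def confirm_obstacle(samples, speed_ref, current_ref,
                     speed_drop=0.3, current_rise=0.3):
    """Fuse three sensor streams sampled over a short window.
    `samples` is a list of (contact, speed, current) tuples.  Report a
    confirmed obstacle only if the contact switch fired AND the motor
    slowed well below its reference speed AND the current rose well
    above its reference load.  Margins here are illustrative."""
    contact = any(s[0] for s in samples)
    mean_speed = sum(s[1] for s in samples) / len(samples)
    mean_current = sum(s[2] for s in samples) / len(samples)
    slowed = mean_speed < (1 - speed_drop) * speed_ref
    loaded = mean_current > (1 + current_rise) * current_ref
    return contact and slowed and loaded
```

Because all three conditions are required, a line kink that trips the switch without loading the motor, or a transient current spike without contact, cannot by itself halt the robot.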
This sensor fusion approach was validated in field tests conducted on a mountainous section of transmission line. The experimental setup included a sequence of common obstacles: a damper, a suspension clamp, another damper, a long stretch of bare conductor, and finally a tension clamp serving as a transition to a different section of line. The results demonstrated the system’s robustness. In one test, the robot successfully approached a suspension clamp, made contact, and stopped precisely at the point of collision. The data log showed a clear sequence: the contact switch triggered, followed immediately by a sharp drop in motor speed and a spike in motor current. The fused detection algorithm correctly identified this as a true obstacle event and commanded the robot to halt, preventing any damage. The researchers noted that without the current and speed data, the contact signal alone might have been dismissed as a false positive, or worse, a false negative could have led to a damaging collision.
With the obstacle physically located and the robot stopped in a stable position, the final phase—obstacle recognition—begins. This is critical for autonomous operation, as the robot must know what kind of obstacle it is facing to execute the correct crossing procedure. A damper requires a different manipulation strategy than a suspension clamp or a tension clamp. To achieve this, the system activates the pinhole cameras mounted beneath the robot’s arms. As the robot stops, the obstacle is positioned directly under one of these cameras, providing a clear, close-up view of its structure.
The recognition algorithm is based on the analysis of the obstacle’s edge features using a mathematical descriptor known as wavelet moment invariants. These features are derived from the image’s edge map, which is first extracted using a Canny edge detection filter. The wavelet moment invariants are particularly well-suited for this task because they are invariant to changes in the object’s position, orientation, and scale in the image. This means the robot can correctly identify an obstacle regardless of how it is rotated in the camera’s view or how far the camera is from it (within a reasonable range). The algorithm computes a set of these invariant features from the edge image, creating a unique “fingerprint” for the object.
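The invariance structure can be sketched as follows, with heavy caveats: this is not the paper's descriptor. It takes a precomputed binary edge map (standing in for the Canny output) and computes magnitude-only radial-angular moments, using a Mexican-hat radial kernel as a stand-in for the wavelet basis the authors use. What it illustrates is how the three invariances arise: centroid-relative coordinates for translation, radius normalization for scale, and angular-harmonic magnitudes for rotation.

```python
import numpy as np

def wavelet_moment_features(edge, scales=(1, 2, 4), orders=(0, 1, 2)):
    """Rotation-, scale-, and translation-invariant moment features from
    a binary edge map.  Translation: coordinates are taken about the edge
    centroid.  Scale: radii are normalized by the mean edge radius.
    Rotation: only the magnitude of each angular harmonic is kept.  The
    Mexican-hat radial kernel is a simplified stand-in for a wavelet basis."""
    ys, xs = np.nonzero(edge)
    cy, cx = ys.mean(), xs.mean()                 # edge centroid
    r = np.hypot(ys - cy, xs - cx)
    r = r / r.mean()                              # scale normalization
    theta = np.arctan2(ys - cy, xs - cx)
    feats = []
    for m in scales:
        u = m * r
        psi = (1 - u**2) * np.exp(-u**2 / 2)      # Mexican-hat radial kernel
        for q in orders:
            F = np.sum(psi * np.exp(-1j * q * theta)) / len(r)
            feats.append(abs(F))                  # magnitude: rotation invariant
    return np.array(feats)
```

Feeding the same edge map rotated by 90 degrees yields the same feature vector, which is exactly the property that lets the robot identify hardware regardless of its orientation in the frame.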
To improve classification accuracy, the system employs a two-stage fusion strategy. The first stage uses the wavelet moment features from the close-up pinhole camera image to calculate a preliminary probability that the object belongs to one of several predefined classes—such as damper, suspension clamp, or no obstacle. However, the researchers realized that valuable information is also available from the earlier, long-range phase of the approach. During the obstacle-approach phase, the system already extracted simple geometric features from the suspicious region in the pan-tilt camera image, such as its area and aspect ratio (width-to-height). A damper typically appears as a compact, rounded object, while a clamp might appear as a longer, more linear structure.
The final recognition decision is made by fusing these two sources of information. The system takes the probability distribution from the long-range geometric features and the probability distribution from the close-range wavelet moment features and combines them using a geometric mean. This fusion method effectively weights the confidence of both classifiers, producing a final, more robust identification. For instance, in one test case, the long-range analysis gave a high probability (0.8829) that the object was a damper based on its shape, while the close-range wavelet analysis gave a slightly lower probability (0.7676). The fused result was 0.8232, which was decisively higher than the probabilities for the other classes, leading to a confident identification of the obstacle as a damper. This fusion approach significantly reduces the chance of a misclassification due to poor lighting or a partially obscured view in either camera.
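The per-class fusion itself is one line: the geometric mean of the two classifiers' probabilities. A minimal sketch reproducing the damper figures quoted above (the class names and the second class's scores are illustrative):

```python
import math

def fuse(p_long, p_close):
    """Per-class geometric mean of long-range and close-range probabilities."""
    return {c: math.sqrt(p_long[c] * p_close[c]) for c in p_long}

scores = fuse({"damper": 0.8829, "clamp": 0.05},
              {"damper": 0.7676, "clamp": 0.10})
# scores["damper"] ≈ 0.8232, the fused value reported in the field test
```

One property of the geometric mean is that a near-zero score from either classifier effectively vetoes a class, so the winning label must draw at least some support from both the long-range shape cues and the close-range edge features.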
The practical implications of this research are substantial. By enabling a robot to autonomously and reliably navigate complex transmission line environments, this technology paves the way for more frequent, comprehensive, and safer inspections of the power grid. It reduces the need for human technicians to perform dangerous climbs on high-voltage towers, especially in remote or mountainous regions. The system’s modular, phase-based design also makes it adaptable. The core principles of using different sensors and algorithms for different operational contexts could be applied to other types of field robots, such as those used for pipeline inspection, bridge maintenance, or even space exploration.
The research was conducted under the practical constraints of real-world utility operations. Ding Ping, whose primary role is in transmission line operation and maintenance management at Yunnan Power Grid, brought a crucial field perspective to the project, ensuring that the robot’s capabilities were aligned with actual operational needs. Li Zhenhui contributed advanced expertise in sensor integration and signal processing from Zhejiang University, while Liu Aihua provided deep knowledge of robotic autonomy and control systems from the national robotics laboratory. This collaboration between a utility company, a top-tier university, and a national research institute exemplifies a successful model for translating academic research into practical engineering solutions.
The success of this project highlights a key trend in modern robotics: the move away from relying on a single, perfect sensor toward the intelligent integration of multiple, imperfect sensors. No single sensor—be it a camera, a laser scanner, or a contact switch—is perfect. Cameras fail in low light, lasers can be blinded by dust, and contact switches can be triggered by false positives. But by combining their data in a smart, context-aware way, a robot can achieve a level of perception and reliability that surpasses the capability of any individual sensor. This research, with its clear, phased approach to obstacle interaction and its robust sensor fusion algorithms, represents a significant step forward in the quest for truly autonomous field robots capable of operating in the unstructured, dynamic environments of the real world.
Ding Ping, Li Zhenhui, Zhang Linhua, Yang Xiaolong, Liu Aihua. Obstacle location and recognition method for an inspection robot based on multi-sensors. Chinese Journal of Mechanical Engineering, 2021, 34(2): 36-42. DOI: 10.16731/j.cnki.1671-3133.2021.02.006