Robot Vision Teaching Case Enhances Engineering Education at Changshu Institute of Technology

In an era where automation and intelligent manufacturing are rapidly transforming industrial landscapes, the integration of robotics and machine vision has become a cornerstone of modern production systems. As industries demand more adaptive, precise, and autonomous solutions, the need for skilled engineers who can design, implement, and optimize these technologies has never been greater. However, traditional engineering education often struggles to bridge the gap between theoretical knowledge and real-world application. A recent breakthrough in pedagogical methodology from China offers a compelling model for how universities can prepare the next generation of engineers for the challenges of Industry 4.0.

At the School of Electrical and Automatic Engineering at Changshu Institute of Technology, Associate Professor Zhu Jianjiang has developed a comprehensive teaching case that immerses students in the practical complexities of robot vision-based pick-and-place systems. Published in the Journal of Electrical and Electronic Engineering Education, this innovative approach combines theoretical instruction with hands-on experimentation, enabling students to master critical skills in camera calibration, image segmentation, coordinate transformation, and system communication—all within the context of a real industrial application.

The teaching case centers on a “vision-guided robotic picking” system, a technology increasingly deployed in automated warehouses, assembly lines, and logistics centers. Unlike conventional robotic systems that rely on fixed programming and pre-defined positions, vision-guided robots can dynamically locate and manipulate objects even when their positions or orientations vary. This flexibility is essential in environments where parts arrive on conveyor belts in random arrangements, a common scenario in high-mix, low-volume manufacturing.

Zhu’s case study adopts the “eye-to-hand” configuration, where a stationary camera observes the workspace and provides positional data to the robot. This setup is widely used in industrial applications due to its stability and scalability. Students are tasked with building and programming a complete system that captures images of mechanical components, identifies their location and orientation, and transmits this information to a robotic arm for precise grasping.

What sets this educational model apart is its emphasis on problem-solving and system integration. Rather than isolating individual concepts, the course challenges students to think holistically about the entire workflow—from hardware selection to software implementation. This mirrors the real-world engineering process, where success depends not only on technical proficiency but also on the ability to analyze requirements, design robust solutions, and troubleshoot under uncertainty.

One of the foundational elements of the case is hand-eye calibration, a process that establishes the spatial relationship between the camera and the robot. Without accurate calibration, even the most sophisticated image processing algorithms would fail to guide the robot correctly. Students learn that calibration is not a one-time setup but a critical step that must be performed with precision. Using a circular grid calibration plate, they collect data from multiple robot poses and apply mathematical transformations to compute the coordinate mapping between the vision system and the robotic arm.

This process introduces students to the concept of coordinate frames and transformation matrices—abstract ideas that are often difficult to grasp in a lecture setting. By physically moving the robot to touch specific points on the calibration plate and comparing those positions with the pixel coordinates detected by the camera, learners gain an intuitive understanding of how different reference systems are aligned. The tactile experience reinforces theoretical knowledge, making the learning more durable and meaningful.
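For a planar eye-to-hand setup like the one described, the pixel-to-robot mapping can be approximated by a 2D affine transformation fitted to the calibration data. The sketch below is illustrative, not the course's exact procedure: it assumes the calibration points lie in a single plane and that corresponding pixel and robot-base coordinates have already been collected by touching the calibration plate.

```python
import numpy as np

def fit_pixel_to_robot(pixel_pts, robot_pts):
    """Fit a 2D affine map robot = A @ [u, v]^T + b by least squares.

    pixel_pts: (N, 2) pixel coordinates of calibration points.
    robot_pts: (N, 2) matching robot-base coordinates (e.g. in mm).
    Returns a 2x3 matrix M so that robot = M @ [u, v, 1].
    """
    # Append a column of ones so translation is estimated together with A
    P = np.hstack([pixel_pts, np.ones((len(pixel_pts), 1))])
    # Solve P @ M.T ~ robot_pts in the least-squares sense
    M, *_ = np.linalg.lstsq(P, robot_pts, rcond=None)
    return M.T

def pixel_to_robot(M, uv):
    """Map one detected pixel coordinate into the robot base frame."""
    u, v = uv
    return M @ np.array([u, v, 1.0])
```

With at least three non-collinear calibration points the fit is exact; using more poses, as the students do, averages out measurement noise. A full 3D hand-eye calibration would instead solve for a rigid transformation, but the planar affine version captures the core idea of aligning two reference frames.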

Equally important is the design of the image acquisition system. Students are guided through the selection of cameras, lenses, and lighting setups based on the physical characteristics of the target objects. They learn that image quality is not just a function of hardware specifications but also of environmental factors such as ambient light, surface reflectivity, and object texture. For instance, metallic components with high reflectivity may produce glare under certain lighting conditions, while matte surfaces may absorb too much light, resulting in low contrast.

Through experimentation, students discover that a ring light provides uniform illumination for the metallic workpieces used in the lab, minimizing shadows and enhancing edge definition. They also explore the concept of depth of field, realizing that variations in object height can lead to defocusing if the lens is not properly configured. These insights underscore the importance of system-level thinking—where every component must be chosen and adjusted in relation to the others.

Once the imaging system is operational, the focus shifts to image processing. Here, the challenge lies in developing algorithms that are both accurate and robust. Traditional thresholding methods, which rely on fixed intensity values to separate objects from the background, often fail when lighting conditions vary or when objects have inconsistent surface properties. Zhu’s teaching case illustrates this limitation through a series of real-world images where workpieces exhibit significant grayscale variation due to differences in material finish and surface roughness.

Rather than providing a ready-made solution, Zhu encourages students to analyze the data themselves. Using tools like grayscale histograms and feature detection software, they identify common patterns across the image set. They observe that while the absolute brightness of the workpieces fluctuates, their intensity consistently remains higher than that of the background. This insight leads to the development of an adaptive thresholding strategy—where the threshold value is dynamically calculated based on the average intensity of each individual image.

This approach, known as parametric thresholding, proves highly effective. When students apply a fixed threshold, approximately 37.5% of the test images (15 out of 40) fail to segment the workpiece correctly. However, by computing the mean gray value and adding a small offset, they achieve 100% success in isolating the target objects. This dramatic improvement not only validates the effectiveness of the method but also teaches students a fundamental lesson in algorithm design: robustness often comes from adaptability.
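The strategy the students arrive at can be sketched in a few lines. This is a minimal illustration of the idea, mean gray value plus a small offset, not the course's actual implementation; the offset value here is an arbitrary assumption.

```python
import numpy as np

def adaptive_threshold(gray, offset=15):
    """Segment bright workpieces from a darker background.

    Instead of a fixed cutoff, the threshold is derived from each
    image's own statistics: its mean gray value plus a small offset.
    gray: 2D uint8 array; returns a boolean foreground mask.
    """
    t = gray.mean() + offset  # recomputed per image, so it tracks lighting
    return gray > t
```

Because the threshold moves with the image's overall brightness, the same code segments a scene correctly before and after a global illumination shift, which is exactly where a fixed threshold breaks down.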

With the workpieces successfully segmented, the next step is to extract geometric features such as the center coordinates and rotational angle. These parameters are essential for the robot to approach the object from the correct position and orientation. Students implement algorithms to compute the centroid and principal axis of the detected region, converting pixel coordinates into real-world measurements using the previously established calibration data.
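One standard way to extract these parameters, likely close in spirit to what the students implement, is via image moments of the segmented region: the centroid comes from the first-order moments, and the principal-axis angle from the second-order central moments. A minimal sketch, assuming a binary mask as input:

```python
import numpy as np

def centroid_and_angle(mask):
    """Compute the centroid and principal-axis angle of a binary region.

    mask: 2D boolean array (True = workpiece pixels).
    Returns ((cx, cy), angle) with the centroid in pixel coordinates
    and the orientation in radians, measured from the x-axis.
    """
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    # Central second moments: the covariance of the pixel cloud
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    # Orientation of the major axis of the equivalent ellipse
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cx, cy), angle
```

The pixel-space centroid and angle would then be passed through the calibration mapping to obtain the grasp pose in the robot's coordinate frame.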

The final phase of the project involves system integration through network communication. The vision system, running on a PC, acts as a server that sends coordinate data to the robot controller via TCP/IP sockets. Students write both the server-side and client-side code, learning how to structure data packets, handle network connections, and ensure reliable transmission. This component of the course introduces them to the realities of industrial communication protocols, where timing, data integrity, and error handling are critical.
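The exchange described above can be sketched with standard TCP sockets. The packet layout below, a comma-separated ASCII line carrying x, y, and angle, is a hypothetical format chosen for illustration; the article does not specify the actual message structure used in the course.

```python
import socket
import threading

def serve_once(host, port, coords, ready):
    """Vision-side server: accept one connection and send one
    coordinate packet, formatted as 'x,y,angle\n' in ASCII."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        ready.set()  # signal that the server is accepting connections
        conn, _ = srv.accept()
        with conn:
            x, y, ang = coords
            conn.sendall(f"{x:.2f},{y:.2f},{ang:.2f}\n".encode("ascii"))

def fetch_coords(host, port):
    """Robot-controller-side client: connect, read one full packet
    (recv may return partial data), and parse it back into floats."""
    with socket.create_connection((host, port), timeout=5) as sock:
        data = b""
        while not data.endswith(b"\n"):
            chunk = sock.recv(64)
            if not chunk:
                break
            data += chunk
        x, y, ang = (float(v) for v in data.decode("ascii").strip().split(","))
        return x, y, ang
```

Even this toy version surfaces the issues the students must handle: delimiting messages so partial reads can be reassembled, setting timeouts so a dropped connection does not hang the robot, and agreeing on units and precision at both ends of the link.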

Throughout the implementation, students engage in iterative testing and debugging. They encounter issues such as coordinate misalignment, communication timeouts, and inconsistent object detection—challenges that mirror those faced by professional engineers. By working through these problems, they develop resilience and a deeper appreciation for the complexity of real-world systems.

The pedagogical impact of Zhu’s teaching case extends beyond technical skill acquisition. It fosters a mindset of inquiry and experimentation. Instead of passively receiving information, students are actively involved in hypothesis formation, testing, and refinement. They learn to ask questions such as: Why does this algorithm fail under certain conditions? How can we make our system more resilient? What trade-offs exist between speed and accuracy?

This inquiry-based approach aligns with contemporary educational theories that emphasize active learning and cognitive engagement. Research has shown that students retain knowledge better when they construct it through experience rather than receive it through lectures. Moreover, the case study promotes collaboration, as students often work in teams to design, implement, and optimize their systems.

From an institutional perspective, the success of this teaching model reflects a broader shift in engineering education toward experiential and project-based learning. Universities are increasingly recognizing that graduates must be more than just technically competent—they must also be innovative, adaptable, and capable of working across disciplines. Programs that simulate real engineering workflows provide a powerful platform for developing these competencies.

Changshu Institute of Technology’s initiative also highlights the growing role of industry-academia collaboration in shaping curriculum design. The case study was supported by grants from the Ministry of Education and industry partners, ensuring that the content remains aligned with current technological trends and workforce needs. This synergy between education and industry helps ensure that students are not learning outdated practices but are instead equipped with skills that are immediately applicable in the workplace.

The implications of this work extend beyond China. As automation continues to reshape global manufacturing, there is a pressing need for scalable, effective educational models that can produce a workforce capable of deploying and maintaining advanced robotic systems. Zhu Jianjiang’s teaching case offers a replicable framework that other institutions can adopt and adapt to their own contexts.

For example, educators in North America or Europe could modify the case to use different robotic platforms (such as UR or ABB arms), alternative vision software (like OpenCV or MATLAB), or distinct application scenarios (such as bin picking or quality inspection). The core principles—problem-based learning, system integration, and algorithmic robustness—remain universally relevant.

Moreover, the case study demonstrates how relatively modest laboratory setups can yield significant educational outcomes. The equipment used—industrial cameras, standard lenses, basic lighting, and commercially available robots—is accessible to many engineering schools. This accessibility makes the model particularly attractive for institutions seeking to enhance their curricula without requiring massive capital investment.

Looking ahead, Zhu and his team plan to expand the case study to include 3D vision systems, deep learning-based object recognition, and dynamic path planning. These enhancements will further increase the complexity and realism of the learning experience, preparing students for the next wave of automation technologies.

In conclusion, the robot vision teaching case developed by Zhu Jianjiang at Changshu Institute of Technology represents a significant advancement in engineering education. By grounding theoretical concepts in practical application, it equips students with the analytical, technical, and problem-solving skills needed to thrive in the age of smart manufacturing. As industries continue to embrace automation, such innovative educational approaches will play a crucial role in closing the skills gap and driving technological progress.

Zhu Jianjiang, School of Electrical and Automatic Engineering, Changshu Institute of Technology; Journal of Electrical and Electronic Engineering Education, DOI: 10.13313/j.issn.1008-0686.2021.06.025