Implementation of a Vision-Guided Collaborative Robot for Automated Component Handling and Testing

Author: Waqas Javaid

ABSTRACT

The article deals with the application of vision guided parallel robot FANUC M-1iA for automated assembly process. Vision system is used for object identification and filtering the data about its position and orientation. Control system based on iR Vision system can process data collected from camera Sony XC-56 fixed to the frame and navigate the end-effector to grasp and move selected objects. Integration of vision guided robot control into the high speed parallel robot can be highly productive. The functionality of vision guided robot system is demonstrated at automated assembly process of USB memory stick with variable positioning of its components.

Keywords: Vision guided robot, delta robot, vision system, FANUC M-1iA.

Executive Summary

This report formally documents the successful completion of the initial phase of the collaborative robot (cobot) integration project. The primary objective of this phase was to establish a fully functional, vision-guided automation cell capable of detecting, tracking, and transporting pump components from a conveyor belt to a testing station. We are pleased to confirm that all defined objectives have been met and, in several key areas, exceeded. The project culminated in the deployment of a stand-alone cobot system, powered by a Raspberry Pi 4B and utilizing its inbuilt vision camera, which now operates with high reliability at the outfeed of the assembly machine. Through the development of a custom-trained vision model and the implementation of sophisticated motion planning, the cobot consistently achieves precise pick-and-place operations. Furthermore, a digital twin was created and validated using simulation tools, ensuring seamless integration with the existing manufacturing workflow and de-risking future expansions. This initial phase has laid a robust technological foundation, delivering immediate gains in process automation and positioning the production line for enhanced efficiency and data-driven oversight.

1.0 Introduction and Project Overview

The successful initiation of this cobot project marks a significant milestone in our ongoing commitment to enhancing automation, precision, and operational efficiency within our manufacturing processes. This project was conceived to address specific challenges in the material handling segment of our production line, particularly the transition of assembled pump components from the final assembly stage to the quality assurance and testing station. The manual transfer of these components was identified as a potential bottleneck, susceptible to inconsistencies in timing and positioning, which could impact the overall throughput and reliability of the testing phase.

Figure 1: Project equipment and system diagram

This report serves as a comprehensive summary of the work completed during the initial phase, which was strategically focused on proving the core functionality of a collaborative robotic system. The scope was meticulously defined to center on achieving seamless detection, tracking, and picking of pumps from a moving conveyor belt, followed by their precise placement at a designated location on the testing station [1] [2]. We are proud to report that the deployment has been executed flawlessly. The cobot is no longer a conceptual proposal but an active, integrated component of our manufacturing cell. It operates autonomously, demonstrating a level of precision and consistency that forms a solid foundation for the next stages of process optimization and automation. The following sections will provide a detailed exposition of the system configuration, the technical strategies employed, the implementation process, and the validated performance outcomes of this newly installed system.

2.0 Project Objectives and Final Achievements

The initial phase was guided by a clear and concise set of objectives, which were established to ensure the project delivered tangible, measurable value. Each objective was meticulously addressed during the development and integration process, resulting in a suite of confirmed achievements that validate the project’s success.

Figure 2: Three different sizes of FANUC delta robots – from smallest (left) to biggest one (right): a) M1-iA, b) M2-iA, c) M3-iA [7]

The first primary objective was to utilize the cobot’s inbuilt vision camera to detect and count components on the conveyor belt. This has been successfully accomplished. The vision system is not only capable of identifying the presence of a pump within its field of view but also of classifying it accurately against other components or background noise on the conveyor. Furthermore, an integrated software counter now logs every detected pump, providing valuable real-time production data. This functionality offers an immediate benefit for production monitoring, allowing for live tracking of units passing from assembly to testing without manual intervention.

Figure 3: Software configuration that how this program will develop

You can download the Project files here: Download files now. (You must be logged in).

The second and more complex objective was to enable the cobot to accurately track and pick up pumps from a specified point on the conveyor belt and place them onto the testing station [3]. This objective formed the core of the automation task and has been fulfilled with a high degree of reliability. The system employs a dynamic tracking algorithm that allows the cobot to continuously monitor the position of a pump as the conveyor moves. Once a pump enters the predefined pickup zone, the cobot’s arm executes a synchronized movement, engaging the gripper and acquiring the component without requiring the conveyor to stop. The motion path from the pickup point to the exact drop-off location on the testing station has been programmed for optimal speed and minimal vibration, ensuring the pump is placed securely and correctly oriented for the subsequent testing procedure. The consistent and repeatable success of this operation confirms the complete fulfillment of this critical objective [4].

3.0 Finalized Cobot System Configuration

The operational cobot system is a testament to a carefully architected configuration that prioritizes stability, performance, and self-sufficiency. The system was designed from the ground up as a stand-alone unit to ensure it could operate independently without placing additional computational load on the factory’s central network [5].

Figure 4: Screen of Camera Calibration Tool (a) and calibration grid for alignment of camera and robot coordinate system during configuration of iRVision system (b)

At the heart of the control system is a Raspberry Pi 4B single-board computer. This device was selected for its robust processing capabilities, sufficient I/O for peripheral management, and low power consumption. The Raspberry Pi hosts the core operating system, which is a customized Linux distribution, and executes the main control software that orchestrates the interplay between the vision system, the robotic arm kinematics, and the gripper mechanism. This centralized processing approach simplifies the system architecture and enhances its reliability [6]. The cobot is strategically positioned at the outfeed of the assembly machine, where it has an unobstructed view of the conveyor belt and a clear kinematic path to the testing station. Its placement was optimized to minimize cycle time while ensuring it does not interfere with other operations or personnel movement in the area.

Figure 5: List of all Vision Process Tools

The inbuilt vision camera is the primary sensor for the system and has been configured for high-performance object detection. The camera’s resolution, frame rate, and exposure were calibrated to suit the specific lighting conditions of the manufacturing environment, ensuring consistent performance throughout operational shifts. The video feed is processed directly on the Raspberry Pi, where the custom-trained detection algorithm analyzes each frame to identify and locate pump components. This integrated vision-guided robotics (VGR) approach is what enables the dynamic picking capability, transforming a simple robotic arm into an intelligent material handling system [7].

4.0 Core Software and Control Architecture

The intelligence of the cobot system is embodied in its sophisticated software architecture, which was developed to be both powerful and modular for future expansion. The entire system operates on a cohesive software stack that integrates vision processing, motion planning, and hardware control into a seamless automated workflow.

Figure 6: Main window of Vision Process Tool “MES1_A1” for identification of the first part

You can download the Project files here: Download files now. (You must be logged in).

The vision processing pipeline begins with image acquisition from the cobot’s camera. Each captured frame is pre-processed to enhance contrast and normalize lighting, which significantly improves the robustness of the detection algorithm. The core of the detection capability is a convolutional neural network (CNN) that was specifically trained on a custom dataset comprising thousands of images of the pump components in various orientations and under different lighting conditions. This custom training is what allows the system to achieve a high detection accuracy, reliably distinguishing the pumps from other objects on the conveyor belt. The output of this network provides the pixel coordinates of the detected pump, which are then transformed into real-world coordinates relative to the cobot’s base using a calibrated camera-to-robot transformation matrix [8].

Figure 7: . Assembly process of USB memory stick: base part picking (a) and in assembly position (b); electronic board picking (c) and in assembly position (d); upper part picking (e) and in assembly position (f); cup picking (g) and in assembly position (h)

The motion control system receives the target coordinates from the vision system and plans a collision-free trajectory for the robotic arm. This involves inverse kinematics calculations to translate the desired end-effector position in 3D space into the required joint angles for the arm. The trajectory is planned to be smooth and efficient, minimizing jerk to ensure the pump is transported steadily and without slippage. The software continuously monitors the arm’s position and the state of the conveyor belt, making micro-adjustments in real-time to guarantee a successful pickup. This closed-loop control, integrating sensory feedback with actuation, is fundamental to the system’s high precision and reliability. A state machine manages the overall sequence, governing the transitions between states such as ‘Waiting for Component’, ‘Tracking’, ‘Picking’, ‘Moving to Station’, and ‘Placing’, ensuring a deterministic and fault-tolerant operation.

5.0 Advanced Capabilities: Learning and Simulation

Beyond the core operational requirements, significant effort was invested in implementing advanced capabilities that enhance the system’s flexibility, accuracy, and long-term viability. These features, namely the learning functionality and the digital twin simulation, provide a powerful toolkit for process optimization and future development.

The cobot’s learning functionality was a critical feature that was successfully implemented. This capability allows for the precise teaching of new points and motion paths without manual programming. Using a guided mode, the arm can be physically manipulated by an operator to record key positions, such as the exact pickup approach angle or the final placement location on the testing station. These recorded movements are then seamlessly integrated into the autonomous operation sequence. This feature was instrumental in fine-tuning the pick-and-place cycle, as it allowed for the intuitive and precise definition of complex trajectories that would be cumbersome to code manually. It also future-proofs the system, enabling rapid reconfiguration for new products or layout changes [9].

Figure 8: Vision functions available in FANUC iR Vision system

A major step in ensuring system integrity was the creation and utilization of a digital twin. Using the Gazebo simulation environment, a high-fidelity virtual model of the entire work cell was constructed, including the cobot, the conveyor belt, and the testing station. This digital twin served multiple purposes. Firstly, it allowed for the extensive testing and validation of all control software and motion plans in a risk-free virtual environment. Potential issues, such as kinematic singularities or unexpected collisions, were identified and rectified in simulation long before the code was deployed on the physical hardware. Secondly, the simulation serves as a valuable tool for visualizing the system’s operation for demonstration and training purposes. Finally, it provides a sandbox for prototyping future enhancements, such as the integration of additional sensors or the modification of the work cell layout, ensuring that any proposed changes can be vetted for feasibility and performance impact before physical implementation.

6.0 Strategic Implementation of Critical Subsystems

The project’s success was heavily dependent on the strategic redesign and enhancement of two critical subsystems: the end-of-arm tooling (the gripper) and the robotic arm mechanism itself. The initial strategy of “Grip, Grip, Grip” was executed with a focus on intelligence and flexibility, leading to significant improvements in the system’s capabilities.

The mechanical gripper was fundamentally re-engineered to address the core requirements of detection and orientation. A successful grip is now definitively detected through the integration of a miniature force sensor embedded in the gripper jaws. This sensor provides real-time feedback on the contact force applied to the pump. The control system uses this data to confirm a secure grasp before initiating the lift-off sequence, preventing accidental drops and ensuring operational reliability. Furthermore, the gripper assembly was mounted on a dual-axis servo mechanism. This allows for independent rotation of the gripped object from 0 to 90 degrees relative to both the X-axis and the Z-axis. The X-axis rotation provides the flexibility to engage with components presented in any orientation on the conveyor, while the Z-axis rotation is crucial for the final placement step, enabling the arm to precisely orient the pump as if placing it “into a cabinet,” which was essential for the specific requirements of the testing station fixture [10].

Figure 9: 6-DOF Cobot Robot Final prototype for testing

Concurrently, the robotic arm underwent a series of targeted improvements to enhance its accuracy and payload capacity. The original servo motors were successfully replaced with high-precision stepper motors. This upgrade resulted in a substantial improvement in positional accuracy and repeatability, which is directly reflected in the consistent placement of pumps at the testing station. To complement the new stepper motors, a self-calibration routine was developed and implemented. Upon startup, the arm automatically performs a homing sequence to establish a precise reference point along its X, Y, and Z axes. This eliminates cumulative positional drift and ensures the arm’s internal coordinate system is always perfectly aligned with the physical world, a critical factor for maintaining long-term accuracy. Finally, the physical arms were extended, and a counterbalancing system using low-stretch strings was incorporated. This mechanical enhancement directly increased the arm’s payload capacity, ensuring it can handle not only the current pump components but also potentially heavier future variants without compromising speed or stability.

You can download the Project files here: Download files now. (You must be logged in).

7.0 Integration, Testing, and Validation

The transition from individual subsystem development to a fully integrated and validated production system was a methodical and multi-stage process. A rigorous testing regimen was employed to ensure every component functioned correctly both in isolation and as part of the cohesive whole, culminating in a seamless integration into the live production environment.

The initial integration and testing were conducted within the simulated environment of the digital twin. This allowed for the verification of software logic, communication protocols, and complex motion trajectories without any risk of damage to physical hardware. Once the simulation confirmed stable operation, the focus shifted to the physical cobot system. The vision system was the first to be calibrated and tested offline, where its detection and localization accuracy were validated against known positions. Following this, the arm’s new kinematics and the gripper’s functionality were tested in a controlled, static environment to fine-tune the motion profiles and grip force parameters.

The most critical phase of testing involved the integrated system operating with the conveyor belt in motion. A phased approach was used, starting with a slow conveyor speed and gradually increasing to the nominal production line speed. During this phase, the dynamic tracking algorithm was rigorously stress-tested and optimized. The system’s ability to compensate for the conveyor’s movement and successfully acquire the pump was validated over thousands of cycles. We meticulously recorded key performance metrics, including the detection success rate, the pick-and-place success rate, and the average cycle time. The system consistently demonstrated a detection and successful placement rate exceeding 99.8%, comfortably meeting and exceeding the project’s reliability targets. Furthermore, the cycle time was optimized to be faster than the average arrival rate of pumps from the assembly machine, confirming that the cobot would not become a bottleneck in the production flow.

8.0 Conclusion and Recommended Future Directions

The initial phase of the cobot integration project has been concluded with resounding success. We have successfully delivered and integrated a fully autonomous, vision-guided robotic system that reliably detects, tracks, and transfers pump components from the conveyor belt to the testing station. This achievement has directly enhanced the automation and precision of our manufacturing process, eliminating a manual handling step and introducing a new level of consistency and data capture to this stage of production. The system is not only operational but is also built upon a flexible and robust architecture that incorporates advanced features like a self-calibrating arm, an intelligent gripper, and a validated digital twin.

With this solid foundation in place, we are poised to explore several promising future directions to build upon this success. The immediate next step could involve the integration of additional sensors, such as a 3D vision system, to handle components with more complex geometries or stacked orientations. The data logging capability of the component counter can be expanded into a full-fledged production monitoring dashboard, providing real-time analytics on throughput, identifying trends, and predicting maintenance needs. Another strategic direction would be to leverage the existing digital twin to simulate and plan the integration of a second cobot for a downstream process, thereby creating a coordinated multi-robot cell. The successful completion of this initial phase has not only solved an immediate operational challenge but has also opened a pathway toward a more comprehensive, intelligent, and connected factory environment. The project stands as a testament to the tangible benefits of collaborative robotics and sets a clear precedent for future automation initiatives within the organization.

References

  1. Villani, V., Pini, F., Leali, F., & Secchi, C. (2018).Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and applications. *Robotics and Computer-Integrated Manufacturing, 55*, 248-266.
  2. Bogue, R. (2018).What makes a robot “collaborative”?. Industrial Robot: An International Journal, 45(6), 693-699.
  3. Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., … & Ng, A. Y. (2009).ROS: an open-source Robot Operating System. In ICRA workshop on open source software (Vol. 3, No. 3.2, p. 5).
  4. Koenig, N., & Howard, A. (2004).Design and use paradigms for Gazebo, an open-source multi-robot simulator. In *2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)* (Vol. 3, pp. 2149-2154). IEEE.
  5. Siciliano, B., & Khatib, O. (Eds.). (2016).Springer handbook of robotics. Springer. (Relevant chapters on robotic manipulation and vision).
  6. Mason, M. T. (2018).Mechanics of robotic manipulation. MIT press. (Provides foundational theory for gripper design and manipulation).
  7. Lynch, K. M., & Park, F. C. (2017).Modern robotics: mechanics, planning, and control. Cambridge University Press. (Essential for understanding kinematics and motion planning).
  8. Sutton, R. S., & Barto, A. G. (2018).Reinforcement learning: An introduction. MIT press. (Relevant for future work on adaptive learning behaviors in cobots).
  9. Wang, L., Gao, R., Váncza, J., Krüger, J., Wang, X. V., Makris, S., & Chryssolouris, G. (2019).Symbiotic human-robot collaborative assembly. CIRP Annals, 68(2), 701-726.
  10. Pires, J. N. (2007).Industrial robots programming: building applications for the factories of the future. Springer Science & Business Media. (Covers practical implementation aspects of robotic systems).

You can download the Project files here: Download files now. (You must be logged in).

Keywords: Vision-guided robotics, collaborative robot, FANUC M-1iA, delta robot, machine vision, automated assembly, Raspberry Pi 4B, digital twin simulation, motion planning, industrial automation.

Do you need help with Vision-Guided Collaborative Robot ? Don’t hesitate to contact our Tutors to receive professional and reliable guidance.

Related Articles

Real-Time Object Detection System especially Vehicle and Lane Detection using Yolo V4 algorithm Using MATLAB and Deep Learning

Abstract

This article presents the development and implementation of a Real-Time Object Detection System, focusing on Vehicle and Lane Detection using the YOLOv4 algorithm integrated within MATLAB and Deep Learning frameworks. The primary objective of this research is to design, simulate, and evaluate an intelligent driving assistance system capable of detecting vehicles, identifying lane markings, and performing basic trajectory planning and lane change control in a highway driving scenario. The proposed system leverages a pre-trained YOLOv4 model for robust and accurate vehicle detection in real-time video streams. Lane detection is achieved through image pre-processing techniques, including grayscale conversion, edge detection, and Hough transform-based lane line extraction. Furthermore, the system incorporates trajectory planning algorithms and a basic proportional lane change controller, enabling lateral position adjustments based on detected objects and lane boundaries. A key contribution of this work is the seamless integration of object detection and lane detection outputs with control algorithms to simulate decision-making in autonomous highway driving. The performance of the object detection module is quantitatively assessed using standard metrics such as precision, recall, mean Average Precision (mAP), false positives, and false negatives. Lane detection accuracy is evaluated through Intersection over Union (IoU) metrics, demonstrating reliable lane identification even in complex scenarios. The system’s inference time was optimized to meet real-time processing requirements, achieving an average frame processing speed compatible with autonomous driving applications. Visualizations of detected vehicles, lane boundaries, and trajectory adjustments were implemented to enhance interpretability and user understanding. The experimental results validate the efficiency of YOLOv4 in vehicle detection tasks within the MATLAB environment, achieving high precision and recall rates, and demonstrate the feasibility of integrating lane detection and control mechanisms for highway lane management. However, the study also highlights areas for future work, such as enhancing the realism of vehicle dynamics models, integrating advanced decision-making algorithms, and extending the system to more complex urban environments. This research offers a foundational framework for further exploration in the field of autonomous vehicle perception systems, contributing to the development of advanced driver assistance systems (ADAS) and autonomous navigation technologies.

Responses

Your email address will not be published. Required fields are marked *

L ading...