Abstract: Green shoot thinning in vineyards helps reduce crop load in favor of optimal wine quality. Mechanical green shoot thinning exists, but its cluster removal efficiency varies widely, between 10% and 85%, because of the difficulty of precisely controlling the position of the thinning end-effector along the cordon trajectory. Automatically positioning the thinning end-effector along the cordon trajectory would help remove green shoots precisely and increase the efficiency and performance of mechanical green shoot thinning. However, heavy occlusion of the cordons by shoots and leaves during the thinning season makes it challenging to determine cordon trajectories accurately. Successfully detecting the visible parts of the cordons during the thinning season would help estimate cordon trajectories for automated/robotic operation. In this study, a total of 390 wine grape vines were selected, and color images of these vines were captured from a fixed distance and height over three weeks of the thinning season under real field conditions. Faster R-CNN (Faster Region-based Convolutional Neural Network) models were trained through transfer learning and fine-tuning of pre-trained networks (AlexNet, VGG16, VGG19, and ResNet18) to detect the visible parts of the cordons. Results showed that the Faster R-CNN model trained with the ResNet18 network achieved higher detection accuracy for visible cordon parts than the other tested networks, with faster detection speed. Moreover, detection accuracy on the week 2 dataset was higher than on the week 3 and week 4 datasets because of the higher visibility of the cordons. These results show the potential of the Faster R-CNN model for detecting the visible parts of cordons, which can be used in the future to estimate cordon trajectories for automated green shoot thinning in vineyards.