Individual Identification of Holstein Cows from Top-View RGB and Depth Images Based on Improved PointNet++ and ConvNeXt

Bibliographic Details
Published in: Agriculture, vol. 15, no. 7 (2025), p. 710
Main Author: Zhao, Kaixuan
Other Authors: Wang, Jinjin; Chen, Yinan; Sun, Junrui; Zhang, Ruihong
Published: MDPI AG
Online Access: Citation/Abstract; Full Text + Graphics; Full Text - PDF
Description
Abstract: The identification of individual cows is a prerequisite and foundation for accurate, intelligent farming, but identification methods based on image information are easily affected by the environment and the observation angle. To identify cows more accurately and efficiently, a novel individual recognition method using anchor point detection and body pattern features from top-view depth images of cows was proposed. First, top-view RGB-D images of cows were collected, and the hook and pin bones were coarsely located with an improved PointNet++ neural network. Second, curvature variations in the hook and pin bone regions were analyzed to locate these bones precisely. Based on the spatial relationship between the hook and pin bones, the key region was determined and transformed from a point cloud into a two-dimensional body pattern image. Finally, body pattern image classification based on an improved ConvNeXt network model was performed for individual cow identification. A dataset comprising 7600 top-view images from 40 cows was created and partitioned into training, validation, and test subsets in a 7:2:1 ratio. The results showed that the AP50 value of the point cloud segmentation model was 95.5%, and the cow identification accuracy reached 97.95%. The AP50 of the improved PointNet++ network exceeded that of the original model by 3 percentage points, and the improved ConvNeXt model achieved a 6.11 percentage point increase in classification precision over the original model. The method is robust to the position and angle of the cow in the top-view images.
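The abstract's conversion of the key point-cloud region into a two-dimensional body pattern image could be sketched roughly as follows. This is a minimal illustration assuming a simple orthographic binning of x, y coordinates with depth stored per pixel; the function name, resolution, and binning scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def cloud_to_image(points, resolution=128):
    """Project a top-view point cloud (N, 3) onto a 2D depth image.

    x, y coordinates are binned into a resolution x resolution grid;
    each occupied pixel stores the z (depth) value of a point that
    falls into it. Hypothetical sketch, not the paper's method.
    """
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    # Normalize x, y into pixel indices in [0, resolution - 1].
    px = ((xy - mins) / (maxs - mins + 1e-9) * (resolution - 1)).astype(int)
    img = np.zeros((resolution, resolution), dtype=float)
    img[px[:, 1], px[:, 0]] = points[:, 2]  # last point per pixel wins
    return img

# Example: two points map to opposite corners of the image.
pts = np.array([[0.0, 0.0, 1.0], [1.0, 1.0, 2.0]])
img = cloud_to_image(pts)
```

In practice such an image would then be fed to the ConvNeXt classifier; interpolation of empty pixels and normalization of depth values would be needed for a usable body pattern image.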
ISSN: 2077-0472
DOI: 10.3390/agriculture15070710
Source: Agriculture Science Database