Deep Learning-Based Sow Posture Classifier Using Colour and Depth Images

19 Pages Posted: 10 Aug 2024

Tami Brown-Brandl

University of Nebraska-Lincoln

Rafael Vieira de Sousa

University of São Paulo (USP)

Raj Sharma

University of Nebraska-Lincoln

Luciane Silva Martello

University of São Paulo (USP)

Verônica Madeira Pacheco

University of Nebraska-Lincoln

Gary Rohrer

USDA-ARS

Abstract

Assessing sow posture is essential for understanding the animals' physiological condition and helping farmers improve herd productivity. Deep learning-based techniques have proven effective for image interpretation, offering a better alternative to traditional image processing methods. However, distinguishing transitional postures such as sitting and kneeling is challenging with conventional top-view RGB images alone. This study aimed to develop and compare deep learning-based sow posture classifiers built on different architectures and image types. Using Kinect v2 cameras, RGB and depth images were collected from nine sows housed individually in farrowing crates. A total of 26,362 images were manually labelled by posture: “standing”, “kneeling”, “sitting”, “ventral recumbency” and “lateral recumbency”. Deep learning models were trained to classify sow posture from three image types: colour (RGB), depth (depth images transformed into greyscale), and fused (colour-depth composite) images. Results indicated that the ResNet-18 model achieved the best performance and that including depth information improved every model tested. Depth and fused models achieved higher accuracies than models using only RGB images. The best model used only depth images as input and reached an accuracy of 98.3%, with mean precision and recall of 97.04% and 97.32%, respectively (F1-score = 97.2%). The study demonstrates that depth images improve posture classification. Future research can improve model accuracy and speed by expanding the database, exploring additional fusion methods and computational models, considering different sow breeds, and incorporating more postures. These models can be integrated into computer vision systems to automatically characterise sow behaviour.
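The paper's training code is not reproduced here; the sketch below is a minimal, hypothetical PyTorch/torchvision illustration of the kind of pipeline the abstract describes: a ResNet-18 backbone adapted to the five posture classes and fed greyscale depth images replicated to three channels. The directory layout, transforms, and hyperparameters are illustrative assumptions, not the authors' settings.

```python
# Hypothetical sketch of a five-class sow posture classifier on ResNet-18
# (PyTorch + torchvision). Assumes depth frames have been exported as
# single-channel greyscale images; paths and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

POSTURES = ["standing", "kneeling", "sitting",
            "ventral_recumbency", "lateral_recumbency"]

# Replicate the single greyscale depth channel to three channels so the
# ImageNet-pretrained ResNet-18 stem can be reused without modification.
depth_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_classifier(num_classes: int = len(POSTURES)) -> nn.Module:
    """ResNet-18 backbone with its final layer replaced for the posture classes."""
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

if __name__ == "__main__":
    # Hypothetical layout: one subfolder per posture label under depth_images/train.
    dataset = ImageFolder("depth_images/train", transform=depth_transform)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    model = build_classifier()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

For a fused colour-depth variant, one common choice (again an assumption, not necessarily the authors' method) is to composite the depth channel into the RGB image or widen the first convolution to accept four channels. As a sanity check on the reported metrics, F1 = 2·P·R/(P + R) = 2·97.04·97.32/(97.04 + 97.32) ≈ 97.2%, consistent with the abstract.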

Keywords: computer vision, multisource image, precision livestock farming, sow posture detection.

Suggested Citation

Brown-Brandl, Tami and Sousa, Rafael Vieira de and Sharma, Raj and Martello, Luciane Silva and Pacheco, Verônica Madeira and Rohrer, Gary, Deep Learning-Based Sow Posture Classifier Using Colour and Depth Images. Available at SSRN: https://ssrn.com/abstract=4922113

Tami Brown-Brandl (Contact Author)

University of Nebraska-Lincoln

1400 R Street
Lincoln, NE 68588
United States

