ARTIFICIAL NEURAL NETWORK MODEL APPROACH TO PREDICT BODY WEIGHT IN SOUTHERN ANATOLIAN RED CATTLE
H. Hizli
Ministry of Agriculture and Forestry, Eastern Mediterranean Agriculture Research Institute, Adana, Türkiye
Corresponding Author’s email: haticehizli@gmail.com
Author’s ORCID: Hatice HIZLI: 000000254511397
ABSTRACT
For sustainable animal breeding, body weight and morphological measurements are routinely taken. In this study, a multilayer feedforward neural network model was created using several morphological measures to estimate body weight in Southern Anatolian Red Cattle. Withers height, body length, chest girth, and rump width were defined as inputs and body weight as the single output in the feedforward neural network architecture. Network training was performed using the Levenberg-Marquardt, Scaled Conjugate Gradient, and Bayesian Regularization algorithms. The linear function at the output and the hyperbolic tangent sigmoid function at the input of the hidden layer were both kept constant, and the number of neurons in the hidden layer was varied to search for the optimal geometry for each transfer function. Feedforward neural network optimization was performed using the MSE and R^{2} performance criteria. The performance metrics RMSE, MAE, MAPE%, and VAF% were used to compare the optimized feedforward neural network models and select the best model. The neural network model created with the Bayesian Regularization algorithm was confirmed to be the best model. All morphological measurements as predictors had a strong correlation (r ≥ 0.8) with body weight, and the greatest correlation among the morphological measurements was 0.947, between chest girth and withers height (p < 0.001). As a result, the optimum feedforward neural network model was determined to be the one trained with the Bayesian Regularization backpropagation algorithm. The proposed feedforward neural network model was shown to accurately predict body weight in Southern Anatolian Red Cattle (SAR) using input and output variables within the study's data range.
Keywords: Backpropagation algorithm, Bayesian regularization, feedforward neural network, cattle, Türkiye
INTRODUCTION
For sustainable animal breeding and herd management, measurements of body weight and morphological characteristics are conducted in livestock. Regression algorithms are widely used for forecasting, and today the use of artificial neural networks (ANN) and machine learning technologies is also increasing. ANNs are computer-like information processing systems inspired by the organization of the biological nervous system and coupled by weighted connections. Because ANN models can represent both linear and nonlinear datasets and perform pattern recognition, time series estimation, regression analysis, function approximation, and forecasting, they are employed in a wide range of applications (Abraham, 2005; Haykin, 2009).
The information gathered from the review of the literature on animal husbandry investigations utilizing ANN models is presented below. Partial lactation records and first lactation records of hybrid dairy cattle were used to compare the estimation of 305-day lactation yield with multiple linear regression and ANN techniques (Salehi et al., 1998; Grzesiak et al., 2003). The radial basis function neural network model, the backpropagation neural network model, and multiple linear regression (MLR) analysis were compared for their ability to accurately predict 305-day milk yields from the first lactation data of Karan Fries dairy cows (Sharma et al., 2006). The effects of lactation time, calving age, and service period on lactation yield in Holsteins were compared with ANN and MLR analysis methods (Takma et al., 2012). ANN and MLR approaches were used to compare the impact of morphological measurements on the body weights of hair goats (Akkol et al., 2017). Regression trees, chi-square automatic interaction detection, multilayer perceptrons, and other data mining algorithms were applied to estimate body weight from different morphological measurements (Eyduran et al., 2017). Pig feeding patterns were estimated using a feedforward neural network (FFNN) model (Cross et al., 2018). ANNs were used to assess breeding values for milk traits and for body weight in 6-month-old Kermani sheep (Pour-Hamidi et al., 2017; Ghotbaldini et al., 2019). An ANN model developed using the photogrammetric approach was used to estimate the live weight of cows (Taşdemir and Özkan, 2019). The accuracy of body weight prediction from dromedary camel body measurements at birth and 240 days of age was compared using seven machine learning algorithms (Asadzadeh et al., 2021).
Due to their effectiveness and dependability, ANN models can be widely applied in assessing crucial aspects of livestock research. However, no studies using an ANN model have been reported on Southern Anatolian Red Cattle (SAR). Therefore, in this study, an FFNN model was developed to estimate body weight using a dataset of morphological measurements collected from the domestic SAR cattle breed, and the efficiency and reliability of the developed model were evaluated.
MATERIALS AND METHODS
Animal material: This study used data from 409 female SAR cattle raised at the Eastern Mediterranean Agricultural Research Institute between 1995 and 2020, comprising morphological measurements and body weight. Measurements were made at birth and at three, six, twelve, eighteen, twenty-four, and forty-eight months of age. Withers height (WH), body length (BL), chest girth (CG), rump width (RW), and body weight (BW) were taken from each animal as described by FAO (2012). BW was obtained using a digital scale (kg), and WH, BL, RW, and CG were obtained using a measuring stick (cm) and a tape measure. In the network model, WH, BL, CG, and RW are defined as input neurons, and body weight as the output neuron. The descriptive statistics of the input and output neurons in the ANN model are given in Table 1.
Table 1. The descriptive statistics of input and output neurons in the ANN model

Variables  Unit  ANN parameter  Min.  Max.  Mean  SD   SE
BW         kg    Output         15    700   120   147  7
WH         cm    Input          53    173   88    25   1
BL         cm    Input          16    189   83    31   2
CG         cm    Input          12    212   97    39   2
RW         cm    Input          10    71    23    10   1

SD: Standard deviation; SE: Standard error
The dataset was divided into three sections for the Levenberg-Marquardt (LM) and Scaled Conjugate Gradient (SCG) algorithms: training, testing, and validation, as 70%, 15%, and 15%, respectively. Accordingly, the numbers of cattle in the subdivisions for the LM and SCG algorithms were 287, 61, and 61, respectively. The dataset for the Bayesian Regularization (BR) algorithm was divided into two parts, 80% training and 20% testing, so that 326 and 83 cattle were used for training and testing, respectively.
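The partitions described above can be sketched as follows. This is an illustrative Python sketch only (the study itself used MATLAB R2016b's built-in dataset division); the `split_indices` helper, the seed, and the rounding behaviour are assumptions, so the exact subgroup counts may differ by one or two animals from those reported.

```python
import numpy as np

def split_indices(n, fractions, seed=42):
    """Shuffle indices 0..n-1 and split them into consecutive groups
    sized by `fractions`, e.g. (0.70, 0.15, 0.15) for the LM/SCG
    splits or (0.80, 0.20) for the BR split described in the text."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    sizes = [int(round(f * n)) for f in fractions]
    bounds = np.cumsum(sizes)[:-1]
    return np.split(idx, bounds)

# 409 animals, 70/15/15 for Levenberg-Marquardt and Scaled Conjugate Gradient
train, test, val = split_indices(409, (0.70, 0.15, 0.15))
# 409 animals, 80/20 for Bayesian Regularization
br_train, br_test = split_indices(409, (0.80, 0.20))
```

Because the three subsets come from one shuffled permutation, every animal lands in exactly one subset.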
Method: An ANN is a mathematical model that uses a learning algorithm to explore the complex linear and nonlinear correlations between input and output data (Haykin, 2009). The network architecture, training algorithm, and activation functions must all be specified before creating a basic ANN architecture for any system. The simplest processing component in each ANN design, known as an artificial neuron, imitates the behavior and functions of real neurons in the human brain and allows for the simultaneous storage and processing of massive volumes of data (Dawson and Wilby, 1998; Akıllı and Atıl, 2014; Rachmad et al., 2018; Zador, 2019). The basic building block of an ANN architecture is a collection of synthetic neurons that resemble biological neurons. Different numbers of neurons make up the input and output layers of an ANN, which are coupled to one another via one or more hidden layers. The most well-known ANN architecture is the feedforward neural network (FFNN), which has three layers: an input layer, a hidden layer (or layers), and an output layer (Aladağ et al., 2010; Asteris et al., 2017; Anitha and Chakravarthy, 2018). The FFNN architecture employed in this research article is shown in Figure 1 to demonstrate a basic FFNN topology.
Figure 1. The architecture of the FFNN developed for the prediction of BW
The input values are the information entering the cell from other cells or the external environment in the first layer. These are WH, BL, CG, and RW, coded as x_{1}, x_{2}, x_{3}, and x_{4}. Each input entering a neuron is multiplied by an interconnection weight w_{i} in the sum function of that neuron (Asteris et al., 2017). The weighted sum of the input dataset, net, is expressed by Equation (1):

net = \sum_{i=1}^{n} w_{i}x_{i} + b   (1)

where net is treated with an activation function f. The network can have a threshold (bias) input b applied to a constant input of +1, which increases the net input, or of -1, which decreases it. The output of the network, y, represents the estimated output values (Zhang and You, 2015) and is expressed by Equation (2):

y = f(\sum_{j=1}^{m} v_{j}h_{j} + b)   (2)

where h_{j} is the output of the jth hidden neuron and v_{j} is called the interconnection weight of the nodes from the hidden layer to the output layer. In order to produce accurate prediction results, the training algorithm should be defined while designing an ANN model. The most powerful and common training approach for learning is the backpropagation technique, and many algorithms have been developed for this purpose (Putra and Wanto, 2017; Putro et al., 2022). In this research, the LM, BR, and SCG backpropagation training algorithms are compared. In ANN models, overfitting and memorization problems may occur during training. The BR backpropagation algorithm has no overfitting or memorization problem, as it does not need a validation dataset to validate the network (MacKay, 1992; Saini, 2008; Kayri, 2016). In MATLAB software (2016b), overfitting is checked using a hyperparameter called "maximum validation failures", which specifies the maximum number of consecutive validation failures that the neural network allows while validating the network model (Beal et al., 2010). In the developed FFNN model, the hyperparameter values of learning rate, momentum factor (µ), maximum validation failures, and number of epochs were set to 0.01, 0.001, 1000, and 1000, respectively.
The other important point while developing an ANN model is that the activation functions must be selected (Mhaskar and Micchelli, 1994). The activation functions commonly used in neural networks are the sigmoid and hyperbolic tangent functions; in the proposed ANN model, the hyperbolic tangent sigmoid function was selected for the best performance in hidden layer network training (Beal et al., 2010), as given in Equation (3):

f(x) = 2/(1 + e^{-2x}) - 1   (3)
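Equations (1)-(3) can be illustrated with a minimal forward-pass sketch. The weight values below are random placeholders, not the trained weights of the study, and the helper names (`tansig`, `ffnn_forward`) are assumptions for illustration.

```python
import numpy as np

def tansig(n):
    # Hyperbolic tangent sigmoid, Eq. (3): 2 / (1 + exp(-2n)) - 1
    # (numerically identical to np.tanh(n))
    return 2.0 / (1.0 + np.exp(-2.0 * n)) - 1.0

def ffnn_forward(x, W_hidden, b_hidden, v_out, b_out):
    """One forward pass of the 4-input, single-output FFNN.

    x        : (4,) normalized inputs [WH, BL, CG, RW]
    W_hidden : (n_hidden, 4) input-to-hidden weights, Eq. (1)
    v_out    : (n_hidden,) hidden-to-output weights, Eq. (2)
    """
    net = W_hidden @ x + b_hidden    # weighted sums, Eq. (1)
    h = tansig(net)                  # hidden activations, Eq. (3)
    return float(v_out @ h + b_out)  # linear output neuron, Eq. (2)

# Hypothetical weights for an 8-neuron hidden layer (illustration only)
rng = np.random.default_rng(0)
W, b = rng.normal(size=(8, 4)), rng.normal(size=8)
v, b2 = rng.normal(size=8), 0.0
bw_pred = ffnn_forward(np.array([0.3, 0.4, 0.5, 0.2]), W, b, v, b2)
```

With trained weights, `bw_pred` would be the normalized body weight estimate, which is back-transformed as described below.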
The dataset was normalized so that the network produces successful results, using Equation (4) (Zhang and You, 2015):

x_{norm} = (x - x_{min})/(x_{max} - x_{min})   (4)

where x is the measurement to be normalized, x_{min} is the smallest of the available measurements, and x_{max} is the largest of the available measurements. After the network training was realized, all input-output datasets were back-normalized (Zhang and You, 2015) using Equation (5):

x = x_{norm}(x_{max} - x_{min}) + x_{min}   (5)
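Equations (4) and (5) amount to a min-max transform and its inverse, sketched below; the [0, 1] target range is an assumption, since the extraction did not preserve the paper's exact scaling constants.

```python
import numpy as np

def normalize(x, x_min, x_max):
    # Min-max scaling to [0, 1], Eq. (4)
    return (x - x_min) / (x_max - x_min)

def denormalize(x_norm, x_min, x_max):
    # Inverse transform back to original units, Eq. (5)
    return x_norm * (x_max - x_min) + x_min

# Body weight in this dataset ranges from 15 to 700 kg (Table 1)
bw = np.array([15.0, 120.0, 700.0])
bw_n = normalize(bw, 15.0, 700.0)       # network-scale values
bw_back = denormalize(bw_n, 15.0, 700.0)  # recovered kg values
```

Applying `denormalize` after `normalize` recovers the original measurements exactly, which is why the network's outputs can be reported back in kilograms.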
The performances of the developed FFNN models were compared by the coefficient of determination (R^{2}), mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE%), and variance accounted for (VAF%), represented by Equations (6)-(11), respectively (Karlik and Olgac, 2011; Erzin and Çetin, 2013):

R^{2} = 1 - \sum_{i=1}^{n}(y_{i} - \hat{y}_{i})^{2} / \sum_{i=1}^{n}(y_{i} - \bar{y})^{2}   (6)

MSE = (1/n)\sum_{i=1}^{n}(y_{i} - \hat{y}_{i})^{2}   (7)

RMSE = \sqrt{(1/n)\sum_{i=1}^{n}(y_{i} - \hat{y}_{i})^{2}}   (8)

MAE = (1/n)\sum_{i=1}^{n}|y_{i} - \hat{y}_{i}|   (9)

MAPE% = (100/n)\sum_{i=1}^{n}|(y_{i} - \hat{y}_{i})/y_{i}|   (10)

VAF% = (1 - var(y_{i} - \hat{y}_{i})/var(y_{i})) \times 100   (11)

where y_{i} and \hat{y}_{i} represent the measured and predicted values, respectively, \bar{y} is the mean of the measured values, and var denotes the variance. The correlations between the input variables and the output variable were examined to determine which input variables have the most impact on training performance. According to Chan (2003) and Akoğlu (2018), there is a weak correlation if |r| < 0.2, a moderate correlation if 0.2 < |r| < 0.8, and a strong correlation if |r| ≥ 0.8. The network design and implementation of the study were realized using the MATLAB software (2016b).
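The criteria of Equations (6)-(11) can be computed in a few lines; the sketch below is a straightforward Python transcription (the study computed them in MATLAB), with the `metrics` helper name being an assumption.

```python
import numpy as np

def metrics(y, y_hat):
    """Performance criteria of Eqs. (6)-(11) for measured y, predicted y_hat."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    e = y - y_hat
    mse = np.mean(e ** 2)                                            # Eq. (7)
    return {
        "R2":   1.0 - np.sum(e ** 2) / np.sum((y - y.mean()) ** 2),  # Eq. (6)
        "MSE":  mse,
        "RMSE": float(np.sqrt(mse)),                                 # Eq. (8)
        "MAE":  float(np.mean(np.abs(e))),                           # Eq. (9)
        "MAPE": float(100.0 * np.mean(np.abs(e / y))),               # Eq. (10)
        "VAF":  float((1.0 - np.var(e) / np.var(y)) * 100.0),        # Eq. (11)
    }
```

For a perfect prediction the residuals vanish, so R^{2} = 1, VAF% = 100, and all error measures are zero, which is the limiting case the tables below are compared against.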
RESULTS AND DISCUSSION
In the present study, the ANN topology was designed and developed to estimate BW. Figure 1 shows that the architectural structure of the FFNN consists of the parameters WH, BL, CG, and RW in the input layer, BW in the output layer, and one hidden layer. As seen in Table 2, network training was performed using backpropagation algorithms, namely the LM, BR, and SCG algorithms. In order to search for the optimum geometry with each algorithm, the hyperbolic tangent sigmoid function at the input of the hidden layer and the linear function at the output were kept constant, the number of neurons in the hidden layer was varied from 1 to 13, and the MSE and R^{2} performance criteria were used for feedforward neural network optimization.
Table 2. Comparison of performance parameters of different backpropagation algorithms and network models with different numbers of hidden neurons.

Training   Hidden    Number     Training           Testing            Validation
algorithm  neurons   of epochs  MSE       R^2      MSE       R^2      MSE       R^2
LM         1         45         0.0009    0.964    0.00109   0.962    0.0016    0.978
LM         2         171        0.00091   0.966    0.0011    0.955    0.00148   0.956
LM         3         16         0.00088   0.97     0.00045   0.974    0.00091   0.976
LM         4         21         0.00067   0.976    0.0015    0.925    0.0081    0.974
LM         5         5          0.00088   0.968    0.00106   0.956    0.0013    0.958
LM         6         5          0.00087   0.97     0.00082   0.964    0.00082   0.968
LM         7         9          0.00058   0.976    0.0035    0.904    0.00093   0.97
LM         8         7          0.00086   0.97     0.00081   0.956    0.00105   0.964
LM         9         4          0.00081   0.97     0.0019    0.945    0.00106   0.96
LM         10        5          0.00094   0.968    0.00059   0.98     0.00092   0.955
LM         11        3          0.00088   0.97     0.00049   0.974    0.00078   0.97
LM         12        3          0.00085   0.966    0.0056    0.956    0.0011    0.97
LM         13        9          0.00082   0.97     0.0013    0.955    0.0012    0.956
BR         1         14         0.00103   0.962    0.00093   0.966    -         -
BR         2         27         0.0008    0.972    0.0021    0.953    -         -
BR         3         54         0.00079   0.972    0.00097   0.958    -         -
BR         4         62         0.00065   0.974    0.0017    0.951    -         -
BR         5         144        0.00064   0.976    0.00091   0.927    -         -
BR         6         75         0.00054   0.810    0.00077   0.966    -         -
BR         7         68         0.00067   0.974    0.00092   0.97     -         -
BR         8         165        0.00036   0.984    0.00071   0.974    -         -
BR         9         98         0.00087   0.968    0.0012    0.943    -         -
BR         10        423        0.00083   0.968    0.0017    0.964    -         -
BR         11        62         0.00061   0.978    0.0017    0.937    -         -
BR         12        70         0.00074   0.974    0.0011    0.955    -         -
BR         13        44         0.00076   0.972    0.0012    0.956    -         -
SCG        1         27         0.0011    0.955    0.0015    0.943    0.0012    0.966
SCG        2         41         0.001     0.964    0.0015    0.955    0.00084   0.955
SCG        3         30         0.0011    0.955    0.0019    0.953    0.00081   0.974
SCG        4         38         0.0019    0.955    0.0009    0.974    0.0008    0.964
SCG        5         43         0.0012    0.956    0.0004    0.972    0.0048    0.98
SCG        6         19         0.0011    0.96     0.00072   0.964    0.00089   0.976
SCG        7         37         0.001     0.96     0.001     0.974    0.00062   0.968
SCG        8         33         0.001     0.96     0.0015    0.953    0.0012    0.943
SCG        9         55         0.00088   0.97     0.0015    0.945    0.0012    0.935
SCG        10        90         0.00083   0.97     0.0016    0.927    0.0007    0.974
SCG        11        45         0.0009    0.966    0.0009    0.966    0.0007    0.962
SCG        12        29         0.0007    0.97     0.0027    0.935    0.0009    0.955
SCG        13        27         0.0011    0.922    0.0017    0.867    0.00092   0.925

-: the BR algorithm does not use a validation dataset.
When Table 2 is reviewed by comparing the minimum MSE and maximum R^{2} for the best performance in the training, testing, and validation datasets, it can be seen that the LM and SCG training algorithms do not produce consistent results. For the LM algorithm, the MSE and R^{2} results for the 7th neuron in the training dataset are the best; in the testing dataset, the MSE was smallest for the 3rd neuron and the R^{2} largest for the 10th neuron; and in the validation dataset, the 1st neuron had the highest R^{2} while the MSE was lowest for the 11th neuron. For the SCG algorithm, the MSE and R^{2} results for the 12th neuron in the training dataset are the best; in the testing dataset, the MSE was smallest for the 5th neuron and the R^{2} largest for the 4th and 7th neurons; and in the validation dataset, the 5th neuron had the highest R^{2} while the MSE was lowest for the 7th neuron. On the other hand, with the BR training algorithm, both the training and testing datasets consistently produced the best performance results at the 8th neuron. Compared to the other algorithms, these results are the most effective.
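The search over 1-13 hidden neurons can be illustrated with a toy stand-in. The sketch below trains a one-hidden-layer tanh/linear network by plain gradient descent (not the LM, SCG, or BR algorithms of the study, which are not reproduced here) on hypothetical normalized data, and picks the neuron count with the lowest training MSE; all data, hyperparameters, and helper names are assumptions.

```python
import numpy as np

def train_ffnn(X, y, n_hidden, epochs=500, lr=0.05, seed=0):
    """Train a 1-hidden-layer tanh/linear FFNN by gradient descent
    and return the final training MSE (a simple stand-in for
    MATLAB's trainlm/trainscg/trainbr; only the topology matches)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0, 0.5, (n_hidden, X.shape[1])); b = np.zeros(n_hidden)
    v = rng.normal(0, 0.5, n_hidden); c = 0.0
    for _ in range(epochs):
        H = np.tanh(X @ W.T + b)              # hidden activations
        e = H @ v + c - y                     # output error
        gv = H.T @ e / len(y); gc = e.mean()  # output-layer gradients
        gH = np.outer(e, v) * (1 - H ** 2)    # backpropagated error
        gW = gH.T @ X / len(y); gb = gH.mean(axis=0)
        W -= lr * gW; b -= lr * gb; v -= lr * gv; c -= lr * gc
    return float(np.mean((np.tanh(X @ W.T + b) @ v + c - y) ** 2))

# Grid-search hidden-layer sizes 1..13, as in Table 2 (toy data only)
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (100, 4))        # stand-in normalized WH, BL, CG, RW
y = 0.2 * X.sum(axis=1) + 0.1          # hypothetical normalized target
best = min(range(1, 14), key=lambda h: train_ffnn(X, y, h))
```

In the study the analogous comparison was made on the real SAR data with MSE and R^{2} on training, testing, and (for LM/SCG) validation subsets, which is what Table 2 tabulates.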
Akkol et al. (2017) predicted body weight from body measures in hair goats using the multilayered feedforward backpropagation algorithms LM, BR, and SCG. According to their findings, the algorithm with the best R^{2} and lowest MSE is BR. Khorshidi-Jalali et al. (2019) used multilayered feedforward backpropagation algorithms to estimate body weight from morphological measures in Raini Cashmere goats. They worked on eleven algorithms apart from the BR algorithm and found that the algorithm with the highest R^{2} and the lowest MSE is the LM algorithm. Asadzadeh et al. (2021) used dromedary camel body measurements to evaluate the accuracy of weight prediction using seven machine learning algorithms, including the Bayesian regularization neural network (BRNN), extreme learning (EL), random forest (RF), support vector machine with the linear kernel (LSVM), polynomial kernel (PNLSVM), radial basis kernel (RNLSVM), and linear regression (LR). Although they found BRNN to be the most accurate learning method among the models, they recommended the PNLSVM model based on all of the criteria they analyzed. According to Burden and Winkler (2009), the BR algorithm is more reliable than other standard optimization algorithms because it does not require validation.
Figure 2. Mean squared error (MSE) of the best training performance of the developed FFNN model
The results in Table 2 are consistent with Figure 2, revealing that the BR algorithm performs best with the 8th neuron at the 165th epoch. Figure 2 demonstrates that the best training performance, an MSE of 0.00036, is reached at the 165th epoch when the ANN model with 8 hidden neurons is trained. Akkol et al. (2017) evaluated three alternative backpropagation algorithms for neuron numbers ranging from three to ten and revealed that the BR approach performed best with three neurons; they also stated that the LM and SCG algorithms were most successful with five and ten neurons, respectively.
The residuals of the created ANN model, which was derived using the BR algorithm, are depicted in Figure 3 as a histogram. A residual is the difference between the measured and predicted BW values. The residuals range from -0.1401 to 0.1254 and are very small.
Figure 3. Residuals (differences between measured and predicted BW values) of the developed FFNN model
For the best FFNN model, Figure 3 shows that the residuals, the differences between the measured and predicted BW values, exhibit an approximately normal distribution with mean 0.
Figure 4. Regression plot for the best FFNN model developed
Figure 4 shows the regression graphs of the actual and estimated BW values for the training and test datasets of the developed FFNN model; these results overlap with those in Table 2. The R values for the training and test data are 0.99249 and 0.96342, respectively, corresponding to the R^{2} values reported for the training and test datasets in Table 2. The R values in Figure 4 are very close to 1, and the points are distributed fairly uniformly around the linear least-squares fit line for both the training and test data. As a result, the high performance obtained in network training shows that the FFNN model can estimate BW when the values of WH, BL, CG, and RW are known.
In order to find out what kind of relationship each input variable has with the output variable, correlation coefficients were calculated (Table 3), and the calculated r values were interpreted following the suggestions of Chan (2003) and Akoğlu (2018). The correlations of WH, BL, CG, and RW with BW were 0.930, 0.914, 0.935, and 0.840, respectively; all of them indicate a strong relationship and were significant (p < 0.001). These results show that all input and output variables contribute to the successful training performance. Therefore, the strong relationships between the WH, BL, CG, RW, and BW variables used in the ANN may be an important reason for the successful performance results found in network training.
Table 3. Pearson correlation coefficients for morphological measurements

Pearson Correlation  BW        WH        BL        CG        RW
BW                   1
WH                   0.930**   1
BL                   0.914**   0.946**   1
CG                   0.935**   0.947**   0.924**   1
RW                   0.840**   0.852**   0.841**   0.863**   1

*: Correlation is significant at the 0.05 level (2-tailed); **: Correlation is significant at the 0.01 level (2-tailed).
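The correlation screening behind Table 3 can be sketched as follows; the measurement vectors below are hypothetical stand-ins (the real SAR data are not reproduced here), and the `strength` helper encodes the Chan (2003) / Akoğlu (2018) thresholds used in the text.

```python
import numpy as np

def strength(r):
    # Classification of |r| following Chan (2003) and Akoglu (2018):
    # weak < 0.2, moderate in [0.2, 0.8), strong >= 0.8
    a = abs(r)
    if a < 0.2:
        return "weak"
    if a < 0.8:
        return "moderate"
    return "strong"

# Hypothetical vectors standing in for, e.g., WH and BW measurements
rng = np.random.default_rng(2)
base = rng.uniform(0, 1, 50)
wh = base + rng.normal(0, 0.05, 50)
bw = base + rng.normal(0, 0.05, 50)
r = float(np.corrcoef(wh, bw)[0, 1])  # Pearson correlation coefficient
label = strength(r)
```

Applied to the coefficients of Table 3, every input-output pair (e.g. r = 0.930 for WH-BW) falls in the "strong" band.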
Ünalan and Işık (2007) reported similar correlation results for SAR cattle between birth weight and all other measurements. Koç and Akman (2007) estimated body measurements of Holstein-Friesian bulls at different periods, predicted live weight from body measurements, and reported that using CG alone would be sufficient to predict BW. Turini et al. (2021) reported similar highest and lowest correlations among the traits when predicting live weight from body measurements in Holstein-Friesian cattle.
In Table 4, the best ANN model results, obtained with the best neuron numbers from the three different backpropagation training algorithms, are presented to compare the prediction of body weight from morphological measurements in SAR cattle. RMSE, MAE, MAPE%, and VAF% were used as performance criteria. In the LM and SCG algorithms, the dataset was divided into three subgroups: training, testing, and validation. Since the BR backpropagation does not need a validation set, its dataset was divided into only training and testing subgroups. Consequently, the performance criteria were calculated for the training, testing, and validation sets of the LM and SCG algorithms, and for the training and testing sets of the BR algorithm.
Table 4. Performance criteria of developed FFNN models

Training alg.  Dataset     N    RMSE    MAE     MAPE%  VAF%
LM             Training    287  0.0241  0.0142  5.7    97.930
LM             Testing     61   0.0592  0.0255  10.4   92.428
LM             Validation  61   0.0305  0.0181  7.1    96.631
BR             Training    327  0.0212  0.0134  5.5    98.147
BR             Testing     82   0.0266  0.0155  6.2    97.443
SCG            Training    287  0.027   0.016   6.3    97.453
SCG            Testing     61   0.052   0.023   10.6   95.636
SCG            Validation  61   0.03    0.017   6.5    96.841
Therefore, the model with the lowest RMSE and MAE and the highest VAF% was expected to have the best fit. Lewis (1982) defined MAPE (%) values of less than 10% as "very good", 10% to 20% as "good", 20% to 50% as "acceptable", and more than 50% as "wrong or faulty". Since the MAPE (%) values found by the BR method are less than 10%, it is reasonable to conclude under this classification that the developed model is "very good". The BR method beat the other algorithms in terms of the performance criteria on both the training and test datasets. Thus, the network model created using the BR algorithm was the most accurate predictor of body weight in SAR cattle in this study.
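The Lewis (1982) bands applied above can be written out as a small helper; the `lewis_class` name is an assumption, and the boundary handling between bands follows the wording quoted in the text.

```python
def lewis_class(mape_percent):
    """Classify a MAPE% value following Lewis (1982):
    < 10% very good, 10-20% good, 20-50% acceptable, > 50% wrong or faulty."""
    if mape_percent < 10:
        return "very good"
    if mape_percent <= 20:
        return "good"
    if mape_percent <= 50:
        return "acceptable"
    return "wrong or faulty"

# The BR test-set MAPE of 6.2% from Table 4 falls in the "very good" band
grade = lewis_class(6.2)
```

By the same rule, the LM and SCG test-set MAPE values of 10.4% and 10.6% in Table 4 fall only in the "good" band, which is consistent with BR being preferred.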
Similar outcomes to those of this study have been reported, noting that the BR algorithm produced superior results. Akkol et al. (2017) reported that the Bayesian Regularization algorithm was the best estimation model among three different backpropagation algorithms for estimating body weight in hair goats. Joy et al. (2022) studied infrared thermography (IRT) and machine learning techniques to predict the rectal temperature of sheep subjected to heat stress, and the ANN developed with the BR algorithm showed the highest accuracy and performance for predicting rectal temperature from IRT. Furthermore, significant investigations have compared artificial neural networks with regression models: Favaro et al. (2014) in goat kids, Salawu et al. (2014) in rabbits, Szyndler-Nędza et al. (2016) in pigs, and Khorshidi-Jalali et al. (2019) in Raini Cashmere goats revealed that the findings of ANN were better and more accurate.
Conclusion: Although artificial neural networks have recently become widely used because they allow experimental problems to be modelled accurately in many disciplines, they are still underused in animal husbandry studies. For this purpose, an FFNN model was designed and developed to estimate the effect of morphological measurements of SAR cattle on body weight. The Bayesian Regularization backpropagation algorithm produced the best FFNN model topology, with the smallest MSE, the largest R^{2}, and the most appropriate regression estimation in both the training and test datasets. The best network models developed were compared based on the RMSE, MAE, MAPE%, and VAF% performance criteria to find the best model. Body weight predictions were in good agreement with all morphological measurements for the ANN model. As a result, the proposed FFNN model was shown to be the best model, can be used effectively to predict body weight from all morphological measurements, and shows very good agreement with body weight in terms of robustness and reliability.
Acknowledgements: This study was carried out using the data of the "Southern Anatolian Red Kilis (SAR) Sub-Project" of the "National Project for Conservation of Local Genetic Resources" conducted under the coordination of the General Directorate of Agricultural Research and Policies of the Ministry of Food, Agriculture and Livestock. I would like to thank the General Directorate of Agricultural Research and Policies of the Ministry of Food, Agriculture and Livestock, the Directorate of the Eastern Mediterranean Agricultural Research Institute, and all those who contributed.
Financial Support: This research received no grant from any funding agency/sector.
Ethical Statement: This study does not require approval from the Animal Experiments Local Ethics Committee.
Conflict of Interest: The authors declared that there is no conflict of interest.
REFERENCES
 Abraham, A. (2005) Artificial neural networks: Handbook of measuring system design, John Wiley and Sons, Ltd., ISBN: 0470021438, 908 p.
 Akıllı, A. and H. Atıl (2014) Artificial intelligence technologies in dairy science: fuzzy logic and artificial neural network. J. Anim. Prod., 55(1): 3945.
 Akkol, S., A. Akilli, and İ. Cemal, (2017) Comparison of artificial neural network and multiple linear regression for prediction of live weight in hair goats. Yüzüncü Yil University J. Agri. Sci., 27(1): 2129. DOI: 29133/yyutbd.263968
 Akoğlu, H. (2018) User's guide to correlation coefficients. Turkish J. Emergency Medicine 18(3): 9193. DOI: 1016/j.tjem.2018.08.001
 Aladağ, C.H., E. Egrioglu, and C. Kadilar (2010) Modeling brain wave dataset by using artifıcial neural networks. Hacettepe J. Math. Stat., 39(1): 8188.
 Anitha, P., and T. Chakravarthy (2018) Agricultural crop yield prediction using artificial neural network with feedforward algorithm. Intl., J. Computer Sci. & Engin., 6(11): 178181. DOI: 26438/ijcse/v6i11.178181
 Asadzadeh, N., M. Bitaraf Sani., E. ShamsDavodly, J. ZareHarofte, M. Khojestehkey, S. Abbaasi, and A. ShafieNaderi (2021). Body weight prediction of dromedary camels using the machine learning models. Iranian J. Appl. Anim. Sci., 11(3): 605614. DOI: 1001.1.2251628.2021.11.3.19.5
 Asteris, P.G., P.C. Roussis, and M.G. Douvika (2017) Feedforward neural network prediction of the mechanical properties of sandcrete materials. Sensors 17: 1344 1364 DOI: 3390/s17061344
 Beal, M., M.T. Hagan, and H.B. Demuth (2010) Neural network toolbox™ 6 user’s guide; the math works inc., Natick, MA, USA; 146175.
 Burden, F. and D. Winkler (2009) Bayesian regularization of neural networks. In D.J. Livingstone (Ed.), Artificial neural, networks: Methods and applications (pp. 2342). Totowa, NJ: Humana Press.
 Chan, Y.H. (2003) Biostatistics 104: Correlational analysis, The Singapore Medical Journal 44(12): 614619.
 Cross, A.J., G.A Rohrer, T.M. BrownBrand, J.P. Cassady, and B.N. Keel (2018) Feedforward and generalised regression neural networks in modelling feeding behaviour of pigs in the growfinish phase. Biosystems Engineering 173:124133. DOI: 1016/j.biosystemseng.2018.02.005
 Dawson, C.W. and R. Wilby (1998) An artificial neural network approach to rainfallrunoff modelling. Hydrological Sciences Journal 43(1): 4766. DOI: 1080/02626669809492102
 Erzin, Y. and T. Çetin (2013) The prediction of the critical factor of safety of homogeneous finite slopes using neural networks and multiple regressions. Computers and Geosciences 51:305313. DOI: 1016/j.cageo.2012.09.003
 Eyduran, E., D. Zaborski, A. Waheed, S. Celik, K. Koksal, and W. Grzesiak (2017) Comparison of the predictive capabilities of several dataset mining algorithms and multiple linear regression in the prediction of body weight by means of body measurements in the indigenous beetal goat of Pakistan. Pakistan Journal of Zoology 49(1): 257265. DOI: 17582/journal.pjz/2017.49.1.257.265
 (2012) Phenotypic characterization of animal genetic resources. FAO animal production and health guidelines No.11. Rome, Italy (available at http://www.fao.org/docrep/015/i2686e/i2686e00.htm).
 Favaro, L., E.F. Briefer, and A.G. McElligott (2014) Artificial neural network approach for revealing individuality, group membership and age information in goat kid contact calls. Acta Acustica United with Acustica 100(4): 782789. DOI: 3813/AAA.918758
 Ghotbaldini, H., M. Mohammadabadi, NezamabadiPour, O.I. Babenko, M.V. Bushtruk, and S.V. Tkachenko (2019) Predicting breeding value of body weight at 6month age using artificial neural networks in Kermani sheep breed. Acta Scientiarum. Animal Sciences 41. DOI: 10.4025/actascianimsci.v41i1.45282
 Grzesiak, W., R. Lacroix, J. Wójcik, and P.A. Blaszczyk (2003) Comparison of neural network and multiple regression predictions for 305day lactation yield using partial lactation records. Canadian Journal of Animal Science 83(2): 307310. DOI: 4141/A02002
 Haykin, S.S. (2009). Neural networks and learning machines/Simon Haykin. New York: Prentice Hall,. Copyright © 2009 by Pearson Education, Inc., Upper Saddle River, New Jersey 07458, ISBN13: 9780131471399, ISBN10: 0131471392, 937 p.
 Joy, A., Taheri, S., Dunshea, F.R., Leury, B.J., DiGiacomo, K., OseiAmponsah, R., Brodie, G. and S.S. Chauhan (2022). Noninvasive measure of heat stress in sheep using machine learning techniques and infrared thermography. Small Ruminant Research 207: 106592. DOI: 1016/j.smallrumres.2021.106592
 Karlik, B. and A.V. Olgac (2011). Performance analysis of various activation functions in generalized MLP architectures of neural networks. International Journal of Artificial Intelligence and Expert Systems 1: 111122.
 Kayri, M. (2016). Predictive abilities of Bayesian regularization and LevenbergMarquardt algorithms in artificial neural networks: A comparative empirical study on social dataset. Mathematical and Computational Applications 21(2): 2031. DOI: 3390/mca21020020
Khorshidi-Jalali, M., M. Mohammadabadi, A.E. Koshkooieh, A. Barazandeh, and O. Babenko (2019) Comparison of artificial neural network and regression models for prediction of body weight in Raini Cashmere goat. Iranian Journal of Applied Animal Science.
Koç, A. and N. Akman (2007) Body measurements of Holstein-Friesian bulls at different periods and live weight prediction from body measurements. Journal of Adnan Menderes University Agricultural Faculty 4(1-2): 21-25.
Lewis, C.D. (1982) Industrial and Business Forecasting Methods. Butterworths Publishing, London. 40 p.
MacKay, D.J. (1992) Bayesian interpolation. Neural Computation 4: 415-447. DOI: 10.1162/neco.1992.4.3.415
MATLAB (2016) MATLAB R2016b. The MathWorks, Inc., United States of America. Retrieved March 3, 2022 from https://ch.mathworks.com/
Mhaskar, H.N. and C.A. Micchelli (1994) How to choose an activation function. Retrieved March 10, 2022 from https://papers.nips.cc/paper/874-how-to-choose-an-activation-function.pdf
Pour-Hamidi, S., M.R. Mohammadabadi, M. Asadi Foozi, and H. Nezamabadi-Pour (2017) Prediction of breeding values for the milk production trait in Iranian Holstein cows applying artificial neural networks. Journal of Livestock Science and Technologies 5(2): 53-61. DOI: 10.22103/jlst.2017.10043.1188
Putra, S. and A. Wanto (2017) Analysis of artificial neural network accuracy using backpropagation algorithm in predicting process (forecasting). International Journal of Information System and Technology 1(1): 34-42. DOI: 10.30645/ijistech.v1i1.4
 Putro, S.S., M.A. Syakur, M.S. Rochman, and A. Rachmad (2022) Comparison of backpropagation and ERNN methods in predicting corn production. Communications in Mathematical Biology and Neuroscience 17 p. DOI: 10.28919/cmbn/7082
Rachmad, A.E., M.S. Rochman, D. Kuswanto, I. Santosa, R.K. Hapsari, T. Indriyani, and E. Purwanti (2018) Comparison of the traveling salesman problem analysis using neural network method. In: International Conference on Science and Technology, Atlantis Press, 1057-1061. DOI: 10.2991/icst-18.2018.213
Saini, L.M. (2008) Peak load forecasting using Bayesian regularization, resilient and adaptive backpropagation learning based artificial neural networks. Electric Power Systems Research 78(7): 1302-1310. DOI: 10.1016/j.epsr.2007.11.003
Salawu, E.O., M. Abdulraheem, A. Shoyombo, A. Adepeju, S. Davies, O. Akinsola, and B. Nwagu (2014) Using artificial neural network to predict body weights of rabbits. Open Journal of Animal Sciences 4: 182-186. DOI: 10.4236/ojas.2014.44023
Salehi, F., R. Lacroix, and K.M. Wade (1998) Improving dairy yield predictions through combined record classifiers and specialized artificial neural networks. Computers and Electronics in Agriculture 20(3): 199-213. DOI: 10.1016/S0168-1699(98)00018-0
Sharma, A.K., R.K. Sharma, and H.S. Kasana (2006) Empirical comparisons of feed-forward connectionist and conventional regression models for prediction of first lactation 305-day milk yield in Karan Fries dairy cows. Neural Computing and Applications 15(3): 359-365. DOI: 10.1007/s00521-006-0037-y
Szyndler-Nędza, M., E. Robert, B. Tadeusz, B. Mirosław, and P. Artur (2016) Prediction of carcass meat percentage in young pigs using linear regression models and artificial neural networks. Annals of Animal Science 16(1): 275-286. DOI: 10.1515/aoas-2015-0069
Takma, Ç., H. Atıl, and V. Aksakal (2012) Comparison of multiple linear regression and artificial neural network models' goodness of fit to lactation milk yields. Journal of the Faculty of Veterinary Medicine, Kafkas University 18: 941-944. DOI: 10.9775/kvfd.2012.6764
Taşdemir, Ş. and I.A. Özkan (2019) ANN approach for estimation of cow weight depending on photogrammetric body dimensions. International Journal of Engineering and Geosciences 4(1): 36-44. DOI: 10.26833/ijeg.427531
Turini, L., G. Conte, F. Bonelli, Madrigali, B. Marani, M. Sgorbini, and M. Mele (2021) Designing statistical models for Holstein rearing heifers' weight estimation from birth to 15 months old using body measurements. Animals 11(7): 1846. DOI: 10.3390/ani11071846
Ünalan, A. and A. Işık (2007) A study on determination of environmental effects and phenotypic correlations among some body measurements of South Anatolian Red (SAR) calves. Journal of Agriculture Faculty, Çukurova University 22(2): 1-6.
Zador, A.M. (2019) A critique of pure learning and what artificial neural networks can learn from animal brains. Nature Communications 10: 3770. DOI: 10.1038/s41467-019-11786-6
Zhang, T. and X. You (2015) Improvement of the training and normalization method of artificial neural network in the prediction of indoor environment. Procedia Engineering 121: 1245-1251. DOI: 10.1016/j.proeng.2015.09.152
