TY - JOUR
T1 - Using fitness dependent optimizer for training multi-layer perceptron
AU - Abbas, D.K.
AU - Rashid, T.A.
AU - Abdalla, K.H.
AU - Bacanin, N.
AU - Alsadoon, A.
PY - 2021
Y1 - 2021
N2 - This study presents a novel training algorithm based on the recently proposed Fitness Dependent Optimizer (FDO). The stability of this algorithm has been verified, and its performance in both the exploration and exploitation stages has been confirmed using standard measurements, which motivated us to gauge its performance in training multilayer perceptron neural networks (MLPs). This study combines FDO with an MLP (codenamed FDO-MLP) to optimize weights and biases for predicting student outcomes. This approach can improve the learning system with respect to students' educational backgrounds and increase their achievements. The experimental results are affirmed by comparison with the Back-Propagation algorithm (BP) and several evolutionary models, such as FDO with a cascade MLP (FDO-CMLP), the Grey Wolf Optimizer (GWO) combined with an MLP (GWO-MLP), modified GWO combined with an MLP (MGWO-MLP), GWO with a cascade MLP (GWO-CMLP), and modified GWO with a cascade MLP (MGWO-CMLP). The qualitative and quantitative results show that the proposed approach using FDO as a trainer outperforms the other approaches on the dataset in terms of convergence speed and local optima avoidance. The proposed FDO-MLP approach achieves a classification rate of 0.97.
UR - https://hdl.handle.net/1959.7/uws:66809
UR - https://jit.ndhu.edu.tw/article/view/2628
DO - 10.53106/160792642021122207011
M3 - Article
SN - 1607-9264
VL - 22
SP - 1575
EP - 1585
JO - Journal of Internet Technology
JF - Journal of Internet Technology
IS - 7
ER -