How to cite this paper
Yılmaz, Ö., Altun, A., & Köklü, M. (2022). Optimizing the learning process of multi-layer perceptrons using a hybrid algorithm based on MVO and SA. International Journal of Industrial Engineering Computations, 13(4), 617-640.
References
Abedinia, O., & Amjady, N. (2015). Short-term wind power prediction based on Hybrid Neural Network and chaotic shark smell optimization. International journal of precision engineering and manufacturing-green technology, 2(3), 245-254.
Abusnaina, A. A., Ahmad, S., Jarrar, R., & Mafarja, M. (2018, June). Training neural networks using salp swarm algorithm for pattern classification. In Proceedings of the 2nd international conference on future networks and distributed systems (pp. 1-6).
Alboaneen, D. A., Tianfield, H., & Zhang, Y. (2017, December). Glowworm swarm optimization for training multi-layer perceptrons. In Proceedings of the Fourth IEEE/ACM International Conference on Big Data Computing, Applications and Technologies (pp. 131-138).
Aljarah, I., Faris, H., & Mirjalili, S. (2018a). Optimizing connection weights in neural networks using the whale optimization algorithm. Soft Computing, 22(1), 1-15.
Aljarah, I., Faris, H., Mirjalili, S., & Al-Madi, N. (2018b). Training radial basis function networks using biogeography-based optimizer. Neural Computing and Applications, 29(7), 529-553.
Alweshah, M. (2014). Firefly algorithm with artificial neural network for time series problems. Research Journal of Applied Sciences, Engineering and Technology, 7(19), 3978-3982.
Anderson, J. A. (1995). An introduction to neural networks. MIT press.
Bairathi, D., & Gopalani, D. (2019). Salp swarm algorithm (SSA) for training feed-forward neural networks. In Soft computing for problem solving (pp. 521-534). Springer, Singapore.
Bennett, K. P., & Mangasarian, O. L. (1992). Robust linear programming discrimination of two linearly inseparable sets. Optimization methods and software, 1(1), 23-34.
Brajevic, I., & Tuba, M. (2013). Training feed-forward neural networks using firefly algorithm. In Proceedings of the 12th International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases (AIKED’13) (pp. 156-161).
Czerniak, J., & Zarzycki, H. (2003). Application of rough sets in the presumptive diagnosis of urinary system diseases. In Artificial intelligence and security in computing systems (pp. 41-51). Springer, Boston, MA.
Çınar, İ., Köklü, M., & Taşdemir, Ş. (2020). Classification of raisin grains using machine vision and artificial intelligence methods. Gazi Mühendislik Bilimleri Dergisi (GMBD), 6(3), 200-209.
Dua, D., & Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.
Everett, H., Wheeler, J. A., DeWitt, B. S., Cooper, L. N., Van Vechten, D., & Graham, N. (1973). The many-worlds interpretation of quantum mechanics (B. S. DeWitt & N. Graham, Eds.). Princeton Series in Physics. Princeton, NJ: Princeton University Press.
Faris, H. (2016). EVOLOPY_NN. [Repository]. https://github.com/7ossam81/EvoloPy-NN
Faris, H., Aljarah, I., & Mirjalili, S. (2016a). Training feedforward neural networks using multi-verse optimizer for binary classification problems. Applied Intelligence, 45(2), 322-332.
Faris, H., Aljarah, I., Al-Madi, N., & Mirjalili, S. (2016b). Optimizing the learning process of feedforward neural networks using lightning search algorithm. International Journal on Artificial Intelligence Tools, 25(06), 1650033.
Faris, H., Aljarah, I., Mirjalili, S., Castillo, P. A., & Guervós, J. J. M. (2016c). EvoloPy: An Open-source Nature-inspired Optimization Framework in Python. In IJCCI (ECTA) (pp. 171-177).
Faris, H., Aljarah, I., & Mirjalili, S. (2018). Improved monarch butterfly optimization for unconstrained global search and neural network training. Applied Intelligence, 48(2), 445-464.
Gori, M., & Tesi, A. (1992). On the problem of local minima in backpropagation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(1), 76-86.
Gupta, J. N., & Sexton, R. S. (1999). Comparing backpropagation with a genetic algorithm for neural network training. Omega, 27(6), 679-684.
Hasan, S., Tan, S. Q., Shamsuddin, S. M., & Sallehuddin, R. (2011). Artificial neural network learning enhancement using artificial fish swarm algorithm. In Proceedings of the 3rd International Conference on Computing and Informatics (ICOCI) (pp. 117-122).
Heidari, A. A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M., & Chen, H. (2019). Harris Hawks Optimization: Algorithm and Applications. Future Generation Computer Systems, 97, 849–872.
Heisenberg, W. (1985). Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. In Original Scientific Papers Wissenschaftliche Originalarbeiten (pp. 478-504). Springer, Berlin, Heidelberg.
Holland, J. H. (1992). Genetic algorithms. Scientific American, 267(1), 66-73.
Jia, H., Peng, X., Song, W., Lang, C., Xing, Z., & Sun, K. (2019). Hybrid multiverse optimization algorithm with gravitational search algorithm for multithreshold color image segmentation. IEEE Access, 7, 44903-44927.
Karaboga, D., Akay, B., & Ozturk, C. (2007). Artificial bee colony (ABC) optimization algorithm for training feed-forward neural networks. In International conference on modeling decisions for artificial intelligence (pp. 318-329). Springer, Berlin, Heidelberg.
Kaya, S., & Fığlalı, N. (2018). Çok amaçlı esnek atölye tipi çizelgeleme problemlerinin çözümünde meta sezgisel yöntemlerin kullanımı [Use of metaheuristic methods in solving multi-objective flexible job shop scheduling problems]. Harran Üniversitesi Mühendislik Dergisi, 3(3), 222-233.
Kennedy, J., & Eberhart, R. (1995). Particle Swarm Optimization. In Proceedings of ICNN’95-International Conference on Neural Networks, (pp. 1942– 1948), IEEE.
Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671-680.
Kolay, E., Tunç, T., & Eğrioğlu, E. (2016). Classification with some artificial neural network classifiers trained a modified particle swarm optimization. American Journal of Intelligent Systems, 6(3), 59-65.
Mafarja, M. M., & Mirjalili, S. (2017). Hybrid whale optimization algorithm with simulated annealing for feature selection. Neurocomputing, 260, 302-312.
McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115-133.
Mirjalili, S., & Hashim, S. Z. M. (2010). A new hybrid PSOGSA algorithm for function optimization. In 2010 international conference on computer and information application (pp. 374-377). IEEE.
Mirjalili, S., & Sadiq, A. S. (2011). Magnetic optimization algorithm for training multi layer perceptron. In 2011 IEEE 3rd International Conference on Communication Software and Networks (pp. 42-46). IEEE.
Mirjalili, S., Hashim, S. Z. M., & Sardroudi, H. M. (2012). Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm. Applied Mathematics and Computation, 218(22), 11125-11137.
Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014a). Let a biogeography-based optimizer train your multi-layer perceptron. Information Sciences, 269, 188-209.
Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014b). Grey wolf optimizer. Advances in Engineering Software, 69, 46-61.
Mirjalili, S. (2015). How effective is the Grey Wolf optimizer in training multi-layer perceptrons. Applied Intelligence, 43(1), 150-161.
Mirjalili, S. Z., Saremi, S., & Mirjalili, S. M. (2015). Designing evolutionary feedforward neural networks using social spider optimization algorithm. Neural Computing and Applications, 26(8), 1919-1928.
Mirjalili, S. (2016). SCA: A sine cosine algorithm for solving optimization problems. Knowledge-Based Systems, 96, 120-133.
Mirjalili, S., & Lewis, A. (2016). The whale optimization algorithm. Advances in Engineering Software, 95, 51-67.
Mirjalili, S., Mirjalili, S. M., & Hatamlou, A. (2016). Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Computing and Applications, 27(2), 495-513.
Mirjalili, S., Gandomi, A. H., Mirjalili, S. Z., Saremi, S., Faris, H., & Mirjalili, S. M. (2017). Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems. Advances in Engineering Software, 114, 163-191.
Rao, R. (2016). Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems. International Journal of Industrial Engineering Computations, 7(1), 19-34.
Rumelhart, D. E., Hinton, G. E. & Williams, R. J. (1986). Learning Internal Representations by Error Propagation. In D. E. Rumelhart & J. L. Mcclelland (ed.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations (pp. 318--362) . MIT Press.
Seiffert, U. (2001). Multiple layer perceptron training using genetic algorithms. In ESANN (pp. 159-164).
Sokolova, M., & Lapalme, G. (2009). A systematic analysis of performance measures for classification tasks. Information processing & management, 45(4), 427-437.
Song, R., Zeng, X., & Han, R. (2020). An Improved Multi-Verse Optimizer Algorithm For Multi-Source Allocation Problem. International Journal of Innovative Computing, Information and Control, 16(6), 1845–1862.
Stehman, S. V. (1997). Selecting and interpreting measures of thematic classification accuracy. Remote sensing of Environment, 62(1), 77-89.
Storn, R., & Price, K. (1997). Differential Evolution – A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization, 11(4), 341–359.
Talbi, E. G. (2009). Metaheuristics: from design to implementation (Vol. 74). John Wiley & Sons.
Ting, T. O., Yang, X. S., Cheng, S., & Huang, K. (2015). Hybrid metaheuristic algorithms: past, present, and future. Recent advances in swarm intelligence and evolutionary computation, 71-83.
Tuba, M., Alihodzic, A., & Bacanin, N. (2015). Cuckoo search and bat algorithm applied to training feed-forward neural networks. In Recent advances in swarm intelligence and evolutionary computation (pp. 139-162). Springer, Cham.
Wilcoxon, F. (1992). Individual comparisons by ranking methods. In Breakthroughs in Statistics (pp. 196-202). Springer, New York, NY.
Wolberg, W. H., & Mangasarian, O. L. (1990). Multisurface method of pattern separation for medical diagnosis applied to breast cytology. Proceedings of the National Academy of Sciences, 87(23), 9193-9196.
Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE transactions on evolutionary computation, 1(1), 67-82.
Wu, H., Zhou, Y., Luo, Q., & Basset, M. A. (2016). Training feedforward neural networks using symbiotic organisms search algorithm. Computational intelligence and neuroscience, 2016.
Yamany, W., Fawzy, M., Tharwat, A., & Hassanien, A. E. (2015). Moth-flame optimization for training multi-layer perceptrons. In 2015 11th International computer engineering Conference (ICENCO) (pp. 267-272). IEEE.
Yang, X. S. (2008). Nature-inspired metaheuristic algorithms (pp. 242-246). Beckington, UK: Luniver Press.
Yang, X. S., & Deb, S. (2009). Cuckoo search via Lévy flights. In 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC) (pp. 210-214). IEEE.
Yang, X. S. (2009). Firefly algorithms for multimodal optimization. In International symposium on stochastic algorithms (pp. 169-178). Springer, Berlin, Heidelberg.
Yeh, I. C., Yang, K. J., & Ting, T. M. (2009). Knowledge discovery on RFM model using Bernoulli sequence. Expert Systems with Applications, 36(3), 5866-5871.
Yılmaz, Ö., Altun, A., & Köklü, M. (2022). A new hybrid algorithm based on MVO and SA for function optimization. International Journal of Industrial Engineering Computations, 13(2), 237-254.