Growing Science » Tags cloud » Bayesian optimization

Journals

  • IJIEC (747)
  • MSL (2643)
  • DSL (668)
  • CCL (508)
  • USCM (1092)
  • ESM (413)
  • AC (562)
  • JPM (271)
  • IJDS (912)
  • JFS (96)
  • HE (32)
  • SCI (26)

Keywords

  • Supply chain management (166)
  • Jordan (161)
  • Vietnam (149)
  • Customer satisfaction (120)
  • Performance (113)
  • Supply chain (111)
  • Service quality (98)
  • Competitive advantage (95)
  • Tehran Stock Exchange (94)
  • SMEs (87)
  • optimization (86)
  • Trust (83)
  • TOPSIS (83)
  • Financial performance (83)
  • Sustainability (82)
  • Job satisfaction (80)
  • Factor analysis (78)
  • Social media (78)
  • Artificial intelligence (77)
  • Knowledge Management (77)


Authors

  • Naser Azad (82)
  • Mohammad Reza Iravani (64)
  • Zeplin Jiwa Husada Tarigan (63)
  • Endri Endri (45)
  • Muhammad Alshurideh (42)
  • Hotlan Siagian (39)
  • Jumadil Saputra (36)
  • Dmaithan Almajali (36)
  • Muhammad Turki Alshurideh (35)
  • Barween Al Kurdi (32)
  • Ahmad Makui (32)
  • Basrowi Basrowi (31)
  • Hassan Ghodrati (31)
  • Mohammad Khodaei Valahzaghard (30)
  • Sautma Ronni Basana (29)
  • Shankar Chakraborty (29)
  • Ni Nyoman Kerti Yasa (29)
  • Sulieman Ibraheem Shelash Al-Hawary (28)
  • Prasadja Ricardianto (28)
  • Haitham M. Alzoubi (27)


Countries

  • Iran (2184)
  • Indonesia (1290)
  • India (788)
  • Jordan (786)
  • Vietnam (504)
  • Saudi Arabia (453)
  • Malaysia (441)
  • United Arab Emirates (220)
  • China (206)
  • Thailand (153)
  • United States (111)
  • Turkey (106)
  • Ukraine (104)
  • Egypt (98)
  • Canada (92)
  • Peru (88)
  • Pakistan (85)
  • United Kingdom (80)
  • Morocco (79)
  • Nigeria (78)

1.

Optimizing contextual bandit hyperparameters: A dynamic transfer learning-based framework (Pages 951-964)

Authors: Farshad Seifi, Seyed Taghi Akhavan Niaki

DOI: 10.5267/j.ijiec.2024.6.003

Keywords: Hyperparameter Optimization, Contextual Bandit, Transfer Learning, Bayesian optimization

Abstract:
The stochastic contextual bandit problem, recognized for its effectiveness in navigating the classic exploration-exploitation dilemma through ongoing player-environment interactions, has found broad applications across various industries. This utility largely stems from the algorithms’ ability to accurately forecast reward functions and maintain an optimal balance between exploration and exploitation, contingent upon the precise selection and calibration of hyperparameters. However, the inherently dynamic and real-time nature of bandit environments significantly complicates hyperparameter tuning, rendering traditional offline methods inadequate. While specialized methods have been developed to overcome these challenges, they often face three primary issues: difficulty in adaptively learning hyperparameters in ever-changing environments, inability to simultaneously optimize multiple hyperparameters for complex models, and inefficiencies in data utilization and knowledge transfer from analogous tasks. To tackle these hurdles, this paper introduces an innovative transfer learning-based approach designed to harness past task knowledge for accelerated optimization and to dynamically optimize multiple hyperparameters, making it well-suited for fluctuating environments. The method employs a dual Gaussian meta-model strategy—one for transfer learning and the other for assessing hyperparameters’ performance within the current task—enabling it to leverage insights from previous tasks while quickly adapting to new environmental changes. Furthermore, the framework’s meta-model-centric architecture enables simultaneous optimization of multiple hyperparameters. Experimental evaluations demonstrate that this approach markedly outperforms competing methods in scenarios with perturbations, exhibits superior performance in 70% of stationary cases, and matches performance in the remaining 30%. This performance advantage, coupled with computational efficiency on par with existing alternatives, positions it as a practical solution for optimizing hyperparameters in contextual bandit settings.
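The dual-meta-model idea can be sketched in a few lines: one Gaussian process fitted on hyperparameter evaluations from a previous task, a second fitted on the few evaluations available in the current task, and a combined posterior used to score candidates. This is a minimal illustration, not the paper's algorithm; the toy `reward` function, the equal-weight blend of the two posteriors, and all constants are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def reward(lr, shift=0.0):
    # Toy stand-in for a bandit policy's average reward as a function of a
    # single hyperparameter; the peak sits at 0.3 + shift.
    return -(lr - 0.3 - shift) ** 2

X_src = rng.uniform(0, 1, (20, 1))       # plentiful source-task evaluations
y_src = reward(X_src[:, 0], shift=0.05)  # source task is slightly shifted
X_cur = rng.uniform(0, 1, (5, 1))        # only a few current-task evaluations
y_cur = reward(X_cur[:, 0])

gp_transfer = GaussianProcessRegressor(kernel=RBF(0.2)).fit(X_src, y_src)
gp_current = GaussianProcessRegressor(kernel=RBF(0.2)).fit(X_cur, y_cur)

# Score candidates with an (assumed) equal-weight blend of the two posteriors,
# so source-task knowledge fills in where current-task data is sparse.
candidates = np.linspace(0, 1, 101).reshape(-1, 1)
blend = 0.5 * gp_transfer.predict(candidates) + 0.5 * gp_current.predict(candidates)
best_lr = candidates[np.argmax(blend), 0]
print(f"suggested hyperparameter value: {best_lr:.2f}")
```

In a bandit setting this scoring step would run repeatedly as new reward observations arrive, refitting the current-task model while the transfer model stays fixed.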
Journal: IJIEC | Year: 2024 | Volume: 15 | Issue: 4 | Views: 1467 | Reviews: 0

 
2.

Extending the hypergradient descent technique to reduce the time of optimal solution achieved in hyperparameter optimization algorithms (Pages 501-510)

Authors: Farshad Seifi, Seyed Taghi Akhavan Niaki

DOI: 10.5267/j.ijiec.2023.4.004

Keywords: Hyperparameter optimization, Hypergradient descent, Multi-fidelity optimization, Bayesian optimization, Population-based optimization, Metaheuristic algorithm

Abstract:
Machine learning algorithms have found many applications in different fields. Hyperparameters matter because they control the behavior of training algorithms and have a crucial impact on the performance of machine learning models; future advances in this area depend largely on well-tuned hyperparameters. Nevertheless, the high computational cost of evaluating algorithms on large datasets or complicated models is a significant limitation that makes the tuning process inefficient. Moreover, the growing number of online applications of machine learning has created a need to produce good answers in less time. The present study first presents a novel classification of hyperparameters based on their types, aimed at creating high-quality solutions quickly. Then, based on this classification and using the hypergradient technique, some hyperparameters of deep learning algorithms are adjusted during the training process to reduce the search space and discover their optimal values. The method needs only the parameters of the previous two steps and the gradient of the previous step. Finally, the proposed method is combined with other hyperparameter optimization techniques, and the results are reviewed in two case studies. As confirmed by experimental results, the performance of the algorithms with the proposed method increased by 36.62% and 23.16% (based on the best average accuracy) on the CIFAR-10 and CIFAR-100 datasets, respectively, in the early stages, while the final answers produced with this method are equal to or better than those of the algorithms without it. The method can therefore be combined with hyperparameter optimization algorithms to improve their performance and make them more suitable for online use.
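The core hypergradient update, adapting the learning rate online from the product of the current and previous gradients, can be sketched on a one-dimensional quadratic. This is a generic hypergradient-descent illustration under assumed constants (`beta`, the hyper-learning-rate, and the toy objective), not the paper's deep-learning setup.

```python
# Minimal hypergradient-descent sketch on f(w) = (w - 2)^2.  The learning
# rate is itself updated each step from the product of the current and
# previous gradients, i.e. only quantities from the last two steps are
# needed.  beta and all other constants here are assumptions.
def hypergradient_descent(w=0.0, lr=0.01, beta=0.001, steps=200):
    grad_prev = 0.0
    for _ in range(steps):
        grad = 2.0 * (w - 2.0)         # gradient of (w - 2)^2
        lr += beta * grad * grad_prev  # hypergradient update of the step size
        w -= lr * grad                 # ordinary gradient-descent step
        grad_prev = grad
    return w, lr

w_final, lr_final = hypergradient_descent()
print(f"w converged to {w_final:.4f} with adapted lr {lr_final:.4f}")
```

While successive gradients point the same way, the learning rate grows; once they start to disagree, it shrinks, which is what lets the step size adapt without a separate tuning run.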
Journal: IJIEC | Year: 2023 | Volume: 14 | Issue: 3 | Views: 1725 | Reviews: 0

 
3.

Predictive models based on machine learning to analyze the adoption of digital payments in Latin America and the Caribbean (Pages 411-418)

Authors: Jiang Wagner Mamani Lopez, Antonio Víctor Morales Gonzales, Pedro Pablo Chambi Condori

DOI: 10.5267/j.ijdns.2025.3.001

Keywords: Digital payments, Financial innovation, Data mining, Bayesian optimization, Hyperparameter Tuning

Abstract:
The use of technology in the financial industry has experienced sustained growth in recent years. However, in many emerging economies, a significant proportion of the population still does not utilize digital solutions for financial transactions. Promoting financial inclusion through digital environments is essential for driving social and economic development. This study aims to develop machine learning models to predict the adoption of digital payments in Latin America and the Caribbean using statistical data from the World Bank's Global Findex Database for 2021. The performance of the Random Forest, LightGBM, XGBoost, and CatBoost algorithms was compared, with the optimal hyperparameter combination identified through Bayesian optimization. The results show that LightGBM achieved the highest performance in predicting digital payments, with an F1-score of 90.25% and a more stable balance between precision and recall compared to the other models. These findings highlight the value of machine learning models in the financial sector, as they enable a more accurate identification of users adopting digital solutions, facilitating the design of strategies to strengthen financial inclusion in the region.
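The hyperparameter search the study describes can be sketched as a generic Bayesian-optimization loop: a Gaussian-process surrogate fitted to past trials and an expected-improvement acquisition function choosing the next candidate. The toy objective below stands in for a cross-validated F1-score, and the kernel, grid, and iteration counts are all assumptions; the actual study tuned several hyperparameters of Random Forest, LightGBM, XGBoost, and CatBoost.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Stand-in for a cross-validated score as a function of one
    # hyperparameter; a smooth bump peaking at x = 0.65.
    return np.exp(-(x - 0.65) ** 2 / 0.02)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (3, 1))            # a few random initial trials
y = objective(X[:, 0])
grid = np.linspace(0, 1, 201).reshape(-1, 1)

for _ in range(10):                      # Bayesian-optimization iterations
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6).fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    # Expected improvement over the incumbent best observation.
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next[0]))

best_x = X[np.argmax(y), 0]
print(f"best hyperparameter found: {best_x:.3f}")
```

In the study's setting, `objective` would wrap a cross-validated fit of one of the boosting models, and the loop would run over each model's full hyperparameter space rather than a single axis.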
Journal: IJDS | Year: 2025 | Volume: 9 | Issue: 3 | Views: 269 | Reviews: 0

 

© 2010-2026 GrowingScience.Com