Growing Science » Tags cloud » Q-Learning

1.

A metaheuristic algorithm co-driven by Q-learning and a learning mechanism for the distributed blocking flowshop scheduling problem with preventive maintenance and sequence-dependent setup times (Pages 767-784)

Authors: Congcong Sun, Hongyan Sang, Li Yuan, Jinfeng Gong, Hongmin Zhu

DOI: 10.5267/j.ijiec.2025.3.006

Keywords: Distributed blocking flowshop scheduling problem, Preventive maintenance, Sequence-dependent setup times, Discrete grey wolf optimization algorithm, Q-learning

Abstract:
Motivated by manufacturing processes such as chemical and steel production, this paper studies the distributed blocking flowshop scheduling problem with preventive maintenance and sequence-dependent setup times (DBFSP/PM/SDST). First, the problem is formulated as a mixed-integer linear programming model with the objective of minimizing total flowtime. Second, a Q-learning and learning-mechanism co-driven approach is proposed and integrated into a discrete grey wolf optimization algorithm (DGWO_Q). In the algorithm, Q-learning adjusts the neighborhood search structure based on dynamic feedback from the environment, while learning mechanisms introduced in the search phase guide the grey wolves as they approach the prey, improving the balance between exploration and exploitation. Furthermore, a differential hunting strategy is designed to prevent the algorithm from falling into local optima. Third, a heuristic tailored to the problem's characteristics is proposed to enhance the quality of the initial solution. Finally, DGWO_Q is compared with four efficient conventional algorithms in numerical experiments on 225 instances of different sizes. The results show that DGWO_Q performs well across test cases of various scales, effectively reducing production cycle time and setup times and mitigating the impact of maintenance downtime on production efficiency, thereby providing an efficient intelligent optimization approach for this complex scheduling problem.
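The Q-learning-driven adjustment of the neighborhood search that this abstract describes can be sketched in minimal tabular form. This is an illustrative sketch only, not the paper's implementation: the state encoding, the operator set, the class name `QOperatorSelector`, and the alpha/gamma/epsilon values are all assumptions.

```python
import random

class QOperatorSelector:
    """Tabular Q-learning agent that picks a neighborhood-search operator.

    Hypothetical sketch: states are coarse search phases, actions are
    operator indices, and the reward is the observed solution improvement
    fed back from the metaheuristic's environment.
    """

    def __init__(self, n_states, n_ops, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
        self.q = [[0.0] * n_ops for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.n_ops = n_ops
        self.rng = random.Random(seed)

    def select(self, state):
        # Epsilon-greedy: explore occasionally, otherwise exploit the
        # operator with the highest learned value in this state.
        if self.rng.random() < self.eps:
            return self.rng.randrange(self.n_ops)
        row = self.q[state]
        return row.index(max(row))

    def update(self, state, op, reward, next_state):
        # Standard one-step Q-learning backup.
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][op] += self.alpha * (td_target - self.q[state][op])
```

In a metaheuristic loop, `select` would be called before each local-search step and `update` after observing whether the chosen operator improved the incumbent solution.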

Journal: IJIEC | Year: 2025 | Volume: 16 | Issue: 3 | Views: 677 | Reviews: 0

 
2.

A dynamic incentive mechanism for data sharing in manufacturing industry (Pages 189-208)

Authors: Ruihan Liu, Yang Yu, Min Huang

DOI: 10.5267/j.ijiec.2023.10.004

Keywords: Data sharing, Dynamic incentive mechanism, Evolutionary game, Networked evolutionary game, Q-Learning

Abstract:
Data sharing is a critical component of a blockchain traceability platform, so designing a reasonable incentive mechanism that ensures all enterprises participate in data sharing is vital. Many researchers employ evolutionary game theory to analyze data-sharing problems, but evolutionary game theory typically assumes a uniformly mixed population of enterprises. Enterprises in the manufacturing industry are not uniformly mixed: they tend to form specific connections with one another depending on enterprise size and business volume. A networked evolutionary game is therefore introduced to address this. First, an incentive model for enterprises sharing data is established. Then, a scale-free network is employed to simulate the connections between enterprises. To account for both the individual and group benefits of enterprises in the game, this study designs a strategy update rule for the networked evolutionary game based on the Discrete Particle Swarm Optimization and Variable Neighborhood Descent algorithms. To tackle the challenge of determining reasonable incentive values in networked evolutionary games, the study proposes a dynamic incentive mechanism based on the Q-Learning algorithm. Finally, experiments indicate that this method successfully facilitates the stable involvement of enterprises in data sharing.
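The core idea of a Q-learning-based dynamic incentive mechanism can be sketched as a single-state (bandit-style) learner that tunes the incentive value from participation feedback. Everything here is assumed for illustration, not taken from the paper: the function name `learn_incentive`, the candidate levels, the cost weight, and the environment model `share_response` are hypothetical.

```python
import random

def learn_incentive(levels, share_response, episodes=500, alpha=0.2,
                    eps=0.1, seed=1):
    """Single-state Q-learning sketch of a dynamic incentive mechanism.

    `levels` are candidate incentive values; `share_response(v)` is an
    assumed environment model returning the data-sharing participation
    rate induced by incentive v. The reward trades participation benefit
    against incentive cost.
    """
    rng = random.Random(seed)
    q = [0.0] * len(levels)
    for _ in range(episodes):
        # Epsilon-greedy choice of incentive level.
        if rng.random() < eps:
            a = rng.randrange(len(levels))
        else:
            a = q.index(max(q))
        rate = share_response(levels[a])
        reward = rate - 0.5 * levels[a]   # participation minus cost
        q[a] += alpha * (reward - q[a])   # bandit-style Q backup
    return levels[q.index(max(q))]
```

In the networked setting of the paper, the feedback would come from the share of enterprises choosing the "share" strategy on the scale-free network rather than from a closed-form response curve.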

Journal: IJIEC | Year: 2024 | Volume: 15 | Issue: 1 | Views: 1362 | Reviews: 0

 
3.

A convolutional deep reinforcement learning architecture for an emerging stock market analysis (Pages 313-326)

Authors: Anita Hadizadeh, Mohammad Jafar Tarokh, Majid Mirzaee Ghazani

DOI: 10.5267/j.dsl.2025.1.006

Keywords: Deep reinforcement learning, DDQN, Convolutional neural network, Stock Market Prediction, Q-learning, Overfitting Prevention

Abstract:
In the complex and dynamic stock market landscape, investors seek to optimize returns while minimizing risks associated with price volatility. Various innovative approaches have been proposed to achieve high profits by considering historical trends and social factors. Despite advancements, accurately predicting market dynamics remains a persistent challenge. This study introduces a novel deep reinforcement learning (DRL) architecture to forecast stock market returns effectively. Unlike traditional approaches requiring manual feature engineering, the proposed model leverages convolutional neural networks (CNNs) to directly process daily stock prices and financial indicators. The model addresses overfitting and data scarcity issues during training by replacing conventional Q-tables with convolutional layers. The optimization process minimizes the sum of squared errors, enhancing prediction accuracy. Experimental evaluations demonstrate the model's robustness, achieving a 67% improvement in directional accuracy over the buy-and-hold strategy across short-term and long-term horizons. These findings underscore the model's adaptability and effectiveness in navigating complex market environments, offering a significant advancement in financial forecasting.
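The DDQN component named in this abstract's keywords rests on one key idea: the online network chooses the next action while the target network evaluates it, which reduces the overestimation bias of a plain max backup. A minimal sketch of that target computation (the function name and the toy Q-value vectors are assumptions, and real DDQN would use CNN outputs rather than lists):

```python
def ddqn_target(reward, gamma, q_online_next, q_target_next):
    """Double-DQN target sketch.

    q_online_next / q_target_next: per-action Q-value estimates for the
    next state from the online and target networks, respectively. The
    online estimate selects the action; the target estimate values it.
    """
    a_star = q_online_next.index(max(q_online_next))      # selection
    return reward + gamma * q_target_next[a_star]          # evaluation
```

A plain DQN target would instead take `max(q_target_next)` directly, coupling selection and evaluation in one network and tending to overestimate action values.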

Journal: DSL | Year: 2025 | Volume: 14 | Issue: 2 | Views: 2083 | Reviews: 0

 

© 2010-2026 GrowingScience.Com