OPTIMIZATION OF FLEET UTILIZATION AND WAITING TIME IN SUPPLY CHAIN AGENT-BASED SIMULATION USING REINFORCEMENT LEARNING
DOI: https://doi.org/10.21776/

Keywords: Agent-based Model (ABM), Reinforcement Learning, Supply Chain, Fleet Utilization

Abstract
The inventory-to-transport transition is a critical operation that requires high efficiency in manufacturing. This study models the inventory transition of manufacturing plants in a supply chain network. The objective is to find the configuration with the lowest fleet utilization and the shortest waiting time. The configuration was searched using a reinforcement-learning-assisted agent-based model (ABM) simulation. The ABM with fleet speed control performed best, achieving an average waiting time of 5.84 hours at the lowest fleet utilization, surpassing the other models. Lower fleet utilization and waiting time provide rest periods for the drivers. Therefore, applying speed control during transport improves the human factors of the supply chain operation.
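The reinforcement-learning-assisted configuration described above can be illustrated with a minimal sketch. This is not the paper's implementation: the discrete speed levels, the toy waiting-time model, and all function names below are hypothetical assumptions, and a single-state (bandit-style) tabular Q-learning update stands in for the full RL-ABM loop.

```python
import random

# Hypothetical discrete fleet speed levels (km/h) the agent can choose.
SPEEDS = [40, 60, 80]

def waiting_time(speed_kmh, rng):
    """Toy environment: faster transport shortens the trip; a random
    congestion delay at the dock is added on top. Purely illustrative."""
    base = 480.0 / speed_kmh            # travel component (hours)
    congestion = rng.uniform(0.0, 2.0)  # stochastic dock delay (hours)
    return base + congestion

def train(episodes=2000, alpha=0.1, epsilon=0.2, seed=42):
    """Single-state Q-learning: learn the expected reward of each speed."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in SPEEDS}        # one Q-value per action
    for _ in range(episodes):
        if rng.random() < epsilon:      # epsilon-greedy exploration
            action = rng.choice(SPEEDS)
        else:                           # exploit the best-known speed
            action = max(q, key=q.get)
        reward = -waiting_time(action, rng)   # minimize waiting time
        q[action] += alpha * (reward - q[action])  # incremental update
    return q

if __name__ == "__main__":
    q_table = train()
    print("learned Q-values:", q_table)
    print("selected speed:", max(q_table, key=q_table.get))
```

In this toy setup the agent converges on the speed with the lowest expected waiting time; the paper's model instead embeds the learning loop inside an agent-based supply chain simulation, where speed also interacts with fleet utilization.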
License
Copyright (c) 2024 JEMIS (Journal of Engineering & Management in Industrial System)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.