Curriculum Learning in Job Shop Scheduling using Reinforcement Learning

dc.identifier.uri http://dx.doi.org/10.15488/13422
dc.identifier.uri https://www.repo.uni-hannover.de/handle/123456789/13532
dc.contributor.author Waubert de Puiseau, Constantin eng
dc.contributor.author Tercan, Hasan eng
dc.contributor.author Meisen, Tobias eng
dc.contributor.editor Herberger, David
dc.contributor.editor Hübner, Marco
dc.contributor.editor Stich, Volker
dc.date.accessioned 2023-04-20T09:10:20Z
dc.date.available 2023-04-20T09:10:20Z
dc.date.issued 2023
dc.identifier.citation Waubert de Puiseau, C.; Tercan, H.; Meisen, T.: Curriculum Learning in Job Shop Scheduling using Reinforcement Learning. In: Herberger, D.; Hübner, M.; Stich, V. (Eds.): Proceedings of the Conference on Production Systems and Logistics: CPSL 2023 - 1. Hannover : publish-Ing., 2023, S. 34-43. DOI: https://doi.org/10.15488/13422 eng
dc.description.abstract Solving job shop scheduling problems (JSSPs) with a fixed strategy, such as a priority dispatching rule, may yield satisfactory results for several problem instances but, nevertheless, insufficient results for others. From this single-strategy perspective, finding a near-optimal solution to a specific JSSP varies in difficulty even if the machine setup remains the same. A promising and intensively researched recent method to deal with this difficulty variability is Deep Reinforcement Learning (DRL), which dynamically adjusts an agent's planning strategy in response to difficult instances not only during training, but also when applied to new situations. In this paper, we further improve DRL as an underlying method by actively incorporating the variability of difficulty within the same problem size into the design of the learning process. We base our approach on a state-of-the-art methodology that solves JSSPs by means of DRL and graph neural network embeddings. Our work supplements the training routine of the agent with a curriculum learning strategy that ranks the problem instances shown during training by a new metric of problem instance difficulty. Our results show that certain curricula lead to significantly better performances of the DRL solutions. Agents trained on these curricula beat the top performance of those trained on randomly distributed training data, reaching 3.2% shorter average makespans. eng
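The curriculum learning strategy described in the abstract — ranking training instances by a difficulty metric and presenting them in order, rather than randomly — can be sketched as follows. This is a minimal illustration, not the authors' code: `estimate_difficulty` is a hypothetical stand-in for the paper's difficulty metric, and the training step is a placeholder.

```python
# Minimal sketch of a curriculum over JSSP training instances.
# NOTE: `estimate_difficulty` is a hypothetical proxy metric, not the
# metric proposed in the paper; the training call is a placeholder.
import random

def make_instance(n_jobs=3, n_machines=3, seed=0):
    """Generate a random JSSP instance: each job is a list of
    (machine, duration) operations."""
    rng = random.Random(seed)
    return [[(m, rng.randint(1, 10)) for m in range(n_machines)]
            for _ in range(n_jobs)]

def estimate_difficulty(instance):
    """Hypothetical difficulty proxy: spread of operation durations."""
    durations = [d for job in instance for (_, d) in job]
    return max(durations) - min(durations)

instances = [make_instance(seed=s) for s in range(10)]

# Curriculum: sort so the agent sees easy instances first, hard ones later,
# instead of drawing instances in random order.
curriculum = sorted(instances, key=estimate_difficulty)

for inst in curriculum:
    pass  # train the DRL agent on `inst` here
```

The key design choice is only the ordering: the same training data is used as in the random baseline, but its presentation order follows the difficulty ranking.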
dc.language.iso eng eng
dc.publisher Hannover : publish-Ing.
dc.relation.ispartof Proceedings of the Conference on Production Systems and Logistics: CPSL 2023 - 1
dc.relation.ispartof 10.15488/13418
dc.rights CC BY 3.0 DE eng
dc.rights.uri http://creativecommons.org/licenses/by/3.0/de/ eng
dc.subject Konferenzschrift ger
dc.subject Job Shop Scheduling eng
dc.subject Reinforcement Learning eng
dc.subject Curriculum Learning eng
dc.subject Agent Based Systems eng
dc.subject Artificial Intelligence eng
dc.subject.ddc 620 | Ingenieurwissenschaften und Maschinenbau eng
dc.title Curriculum Learning in Job Shop Scheduling using Reinforcement Learning eng
dc.type BookPart eng
dc.type Text eng
dc.relation.essn 2701-6277
dc.bibliographicCitation.firstPage 34 eng
dc.bibliographicCitation.lastPage 43 eng
dc.description.version publishedVersion eng
tib.accessRights frei zugänglich eng

