Controlling a Vacuum Suction Cup Cluster using Simulation-Trained Reinforcement Learning Agents


dc.identifier.uri https://www.repo.uni-hannover.de/handle/123456789/12216
dc.identifier.uri https://doi.org/10.15488/12118
dc.contributor.author Winkler, Georg
dc.contributor.editor Herberger, David
dc.contributor.editor Hübner, Marco
dc.date.accessioned 2022-06-02T11:44:45Z
dc.date.issued 2022
dc.identifier.citation Winkler, G.: Controlling a Vacuum Suction Cup Cluster using Simulation-Trained Reinforcement Learning Agents. In: Herberger, D.; Hübner, M. (Eds.): Proceedings of the Conference on Production Systems and Logistics: CPSL 2022. Hannover : publish-Ing., 2022, S. 349-358. DOI: https://doi.org/10.15488/12118
dc.description.abstract The use of compressed air in industrial processes often suffers from a poor cost-benefit ratio and a negative environmental footprint due to common distribution inefficiencies. Compressed air-based systems are expensive to install and incur high running costs, caused by costly maintenance requirements and low energy efficiency resulting from leakage. Nevertheless, compressed air-based systems are indispensable for various industrial processes, for example handling parts with Class A surface requirements such as outer skin sheets in automobile production. Most of these outer skin parts are handled exclusively by vacuum-based grippers to minimize any visible effect on the finished car. Meeting customer expectations while simultaneously reducing the running costs of these decisive systems requires innovative strategies for using the precious resource of compressed air as efficiently as possible. This work presents a sim2real reinforcement learning approach to efficiently hold a workpiece attached to a vacuum suction cup cluster. Beyond pure energy saving, reinforcement learning allows such agents to be trained without collecting extensive data beforehand. Furthermore, the sim2real approach makes it easy to examine numerous agents in parallel by training them in a simulation of the testing rig rather than on the testing rig itself. The ability to train many agents quickly also makes it possible to focus on the robustness and simplicity of the found agents instead of merely searching for strategies that work, making the training of an intelligent system scalable and effective. The resulting agents reduce the energy required to hold the workpiece attached by more than 15% compared to a reference strategy without machine learning and by more than 99% compared to a conventional strategy. eng
dc.language.iso eng
dc.publisher Hannover : publish-Ing.
dc.relation.ispartof Proceedings of the Conference on Production Systems and Logistics: CPSL 2022
dc.relation.ispartof https://doi.org/10.15488/12314
dc.rights CC BY 3.0 DE
dc.rights.uri https://creativecommons.org/licenses/by/3.0/de/
dc.subject Reinforcement Learning eng
dc.subject Sim2Real eng
dc.subject Energy Efficiency eng
dc.subject Automotive eng
dc.subject Gripper eng
dc.subject Suction Cups eng
dc.subject Compressed Air eng
dc.subject vacuum-based handling eng
dc.subject car body shop eng
dc.subject Body-in-white eng
dc.subject Konferenzschrift ger
dc.subject.ddc 620 | Ingenieurwissenschaften und Maschinenbau
dc.title Controlling a Vacuum Suction Cup Cluster using Simulation-Trained Reinforcement Learning Agents eng
dc.type BookPart
dc.type Text
dc.relation.essn 2701-6277
dc.bibliographicCitation.firstPage 349
dc.bibliographicCitation.lastPage 358
dc.description.version publishedVersion
tib.accessRights frei zugänglich
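
The abstract above describes a sim2real reinforcement learning workflow: agents are trained in a simulation of the testing rig and only afterwards transferred to the physical suction cup cluster. The following is a minimal, hypothetical sketch of such a training setup, assuming a Gymnasium-style environment and Stable-Baselines3 PPO; the environment name SuctionCupEnv, its observation, action, and reward definitions, and all dynamics constants are illustrative placeholders, not the simulation used in the paper.

```python
# Illustrative sim2real-style RL training sketch (placeholder dynamics, not the paper's model).
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class SuctionCupEnv(gym.Env):
    """Hypothetical simulation of a vacuum suction cup cluster.

    Observation: normalized vacuum level per cup.
    Action: valve opening per cup in [0, 1] (proxy for compressed-air consumption).
    Reward: penalize air use while every cup stays above a holding threshold.
    """

    def __init__(self, n_cups: int = 4, hold_threshold: float = 0.5):
        super().__init__()
        self.n_cups = n_cups
        self.hold_threshold = hold_threshold
        self.observation_space = spaces.Box(0.0, 1.0, shape=(n_cups,), dtype=np.float32)
        self.action_space = spaces.Box(0.0, 1.0, shape=(n_cups,), dtype=np.float32)
        self.pressure = np.zeros(n_cups, dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        # Start with the workpiece already attached at a random vacuum level.
        self.pressure = self.np_random.uniform(0.6, 0.9, size=self.n_cups).astype(np.float32)
        return self.pressure.copy(), {}

    def step(self, action):
        action = np.clip(action, 0.0, 1.0).astype(np.float32)
        leakage = 0.05  # placeholder leakage per time step
        self.pressure = np.clip(self.pressure - leakage + 0.1 * action, 0.0, 1.0)
        air_cost = float(action.sum())                        # energy proxy: total valve opening
        dropped = bool((self.pressure < self.hold_threshold).any())
        reward = -air_cost - (10.0 if dropped else 0.0)       # hold the workpiece as cheaply as possible
        return self.pressure.copy(), reward, dropped, False, {}


if __name__ == "__main__":
    env = SuctionCupEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=10_000)  # training happens entirely in simulation
```

In a sim2real setting, the policy trained this way would subsequently be evaluated on the physical testing rig; robustness of the kind emphasized in the abstract could be encouraged by varying the placeholder leakage and dynamics parameters across training episodes.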

