Please use this identifier to cite or link to this item: https://hdl.handle.net/11147/14549
Full metadata record
DC Field | Value | Language
dc.contributor.author | Glass, A. | -
dc.contributor.author | Noennig, J. R. | -
dc.contributor.author | Bek, B. | -
dc.contributor.author | Glass, R. | -
dc.contributor.author | Menges, E. K. | -
dc.contributor.author | Okhrin, I. | -
dc.contributor.author | Jäkel, R. | -
dc.date.accessioned | 2024-06-19T14:28:51Z | -
dc.date.available | 2024-06-19T14:28:51Z | -
dc.date.issued | 2023 | -
dc.identifier.isbn | 979-840070906-7 | -
dc.identifier.uri | https://doi.org/10.1145/3638209.3638213 | -
dc.identifier.uri | https://hdl.handle.net/11147/14549 | -
dc.description.abstract | Data-driven design for cities improves the quality of citizens' everyday life and optimizes the use of resources. A newer aspect is artificial intelligence, from which Smart Cities could greatly benefit. A central problem for urban designers is the unavailability of data for making relevant decisions. Agent-based simulations offer a view of the dynamic properties of the urban system, generating data in the process. However, the simulation must remain simple enough to stay computationally tractable. The research question of this paper is: How can we make agents behave more realistically in order to analyze citizens' mobility behavior? To address this problem, we first created a simulated virtual environment in which agents can move freely in a small part of a city, the harbor area of Hamburg, Germany. We assumed that happiness is a crucial motivating factor for the movement of citizens. A survey of 130 citizens provided the weights that govern the simulated environment and the assignment of happiness scores to places. As an AI method, we then used Reinforcement Learning as the general model and Q-learning as the algorithm; a baseline was generated by randomly traversing the model environment. We are in the process of enhancing Reinforcement Learning with a Deep Q-Network to make the actors learn. Early experiments show a significant improvement over the tabular Q-learning approach. This paper contributes to the literature on urban planning and data-driven architectural design. The main contribution is replacing the inefficient search for a global maximum of the happiness function with an efficient local solution. This has implications for further research on the generation of synthetic data through simulations. © 2023 ACM. | en_US
dc.language.iso | en | en_US
dc.publisher | Association for Computing Machinery | en_US
dc.relation.ispartof | ACM International Conference Proceeding Series -- 6th International Conference on Computational Intelligence and Intelligent Systems, CIIS 2023 -- 25 November 2023 through 27 November 2023 -- Tokyo -- 197807 | en_US
dc.rights | info:eu-repo/semantics/closedAccess | en_US
dc.subject | agent-based modeling | en_US
dc.subject | artificial intelligence | en_US
dc.subject | city simulations | en_US
dc.subject | smart cities | en_US
dc.subject | synthetic data | en_US
dc.subject | urban design | en_US
dc.title | Innovative Urban Design Simulation: Utilizing Agent-Based Modelling Through Reinforcement Learning | en_US
dc.type | Conference Object | en_US
dc.department | Izmir Institute of Technology | en_US
dc.identifier.startpage | 20 | en_US
dc.identifier.endpage | 25 | en_US
dc.identifier.scopus | 2-s2.0-85187554957 | -
dc.relation.publicationcategory | Konferans Öğesi - Uluslararası - Kurum Öğretim Elemanı (Conference Item - International - Institutional Faculty Member) | en_US
dc.identifier.doi | 10.1145/3638209.3638213 | -
dc.authorscopusid | 57993372100 | -
dc.authorscopusid | 35734628100 | -
dc.authorscopusid | 58399185700 | -
dc.authorscopusid | 58934439300 | -
dc.authorscopusid | 58934343100 | -
dc.authorscopusid | 55398979600 | -
dc.authorscopusid | 59021779600 | -
item.grantfulltext | none | -
item.languageiso639-1 | en | -
item.openairetype | Conference Object | -
item.cerifentitytype | Publications | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.fulltext | No Fulltext | -
Appears in Collections: Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
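The abstract describes a tabular Q-learning baseline in which agents move through an urban environment and are rewarded by survey-derived happiness scores of places. A minimal sketch of that idea is shown below; the grid size, happiness values, reward placement, and hyperparameters are illustrative assumptions for demonstration, not values taken from the paper.

```python
import random

# Illustrative tabular Q-learning sketch: an agent moves on a small grid
# "city" and receives the (assumed) happiness score of each cell it enters
# as its reward. All numbers here are hypothetical.
random.seed(0)

SIZE = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

# Hypothetical happiness score per cell (reward for entering it).
happiness = {(r, c): 0.0 for r in range(SIZE) for c in range(SIZE)}
happiness[(4, 4)] = 1.0   # assumed "happiest" place, e.g. a waterfront
happiness[(2, 2)] = 0.5   # assumed moderately pleasant place

alpha, gamma, epsilon = 0.1, 0.9, 0.2  # assumed hyperparameters
Q = {((r, c), a): 0.0
     for r in range(SIZE) for c in range(SIZE)
     for a in range(len(ACTIONS))}

def step(state, action):
    """Move one cell (clamped at the grid edges) and collect happiness."""
    dr, dc = ACTIONS[action]
    r = min(max(state[0] + dr, 0), SIZE - 1)
    c = min(max(state[1] + dc, 0), SIZE - 1)
    nxt = (r, c)
    return nxt, happiness[nxt]

for episode in range(2000):
    state = (0, 0)
    for _ in range(50):
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = random.randrange(len(ACTIONS))
        else:
            action = max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # Standard Q-learning update toward the bootstrapped target.
        best_next = max(Q[(nxt, a)] for a in range(len(ACTIONS)))
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = nxt

# The greedy policy from the start cell should now head toward the
# high-happiness region of the grid.
best = max(range(len(ACTIONS)), key=lambda a: Q[((0, 0), a)])
print(ACTIONS[best])
```

The paper reports replacing this kind of random/tabular baseline with a Deep Q-Network, which swaps the explicit `Q` table for a neural network approximator; the tabular form above only shows the underlying update rule.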
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.