Please use this identifier to cite or link to this item:
https://hdl.handle.net/11147/14154
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Nalcakan, Y. | - |
dc.contributor.author | Bastanlar, Y. | - |
dc.date.accessioned | 2024-01-06T07:21:35Z | - |
dc.date.available | 2024-01-06T07:21:35Z | - |
dc.date.issued | 2023 | - |
dc.identifier.isbn | 9798350306590 | - |
dc.identifier.uri | https://doi.org/10.1109/ASYU58738.2023.10296634 | - |
dc.identifier.uri | https://hdl.handle.net/11147/14154 | - |
dc.description | 2023 Innovations in Intelligent Systems and Applications Conference, ASYU 2023 -- 11 October 2023 through 13 October 2023 -- 194153 | en_US |
dc.description.abstract | Prediction of the lane-changing maneuvers of surrounding vehicles is important for autonomous vehicles to understand the scene properly. This research proposes a vision-based technique that requires only a single in-car RGB camera. The surrounding vehicles' maneuvers are classified as right/left lane change or no lane change, conforming to most lane change detection studies in the literature. The usual practice in previous studies is to feed individual video frames into a CNN to extract features and then classify the sequence of features with an LSTM. In contrast, our study exploits the power of ensembling the prediction results of two methods. The first uses a small feature vector containing the image coordinates of the target vehicle and classifies it with an LSTM. The second works on a simplified scene-representation video (only the target vehicle and the ego lane are highlighted) and is based on a self-supervised contrastive video representation learning scheme. Since maneuver labeling is not required in the self-supervised learning step, a relatively large dataset can be used. After the self-supervised training, the model is fine-tuned on a labeled dataset. Our experimental study on a well-known lane change detection dataset shows that both methods achieve state-of-the-art results on their own, and ensembling them increases classification accuracy even further. © 2023 IEEE. | en_US |
dc.description.sponsorship | Türkiye Bilimsel ve Teknolojik Araştırma Kurumu, TÜBİTAK: 118C079 | en_US |
dc.language.iso | en | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US |
dc.relation.ispartof | 2023 Innovations in Intelligent Systems and Applications Conference, ASYU 2023 | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | en_US |
dc.subject | autonomous vehicle | en_US |
dc.subject | contrastive representation learning | en_US |
dc.subject | driver assistance systems | en_US |
dc.subject | lane change detection | en_US |
dc.subject | Automobile drivers | en_US |
dc.subject | Change detection | en_US |
dc.subject | Classification (of information) | en_US |
dc.subject | Large dataset | en_US |
dc.subject | Long short-term memory | en_US |
dc.subject | Autonomous Vehicles | en_US |
dc.subject | Contrastive representation learning | en_US |
dc.subject | Driver-assistance systems | en_US |
dc.subject | Image-based | en_US |
dc.subject | Lane change | en_US |
dc.subject | Lane change detection | en_US |
dc.subject | Lane changing maneuver | en_US |
dc.subject | Learning models | en_US |
dc.subject | Target vehicles | en_US |
dc.subject | Autonomous vehicles | en_US |
dc.title | Lane Change Detection with an Ensemble of Image-based and Video-based Deep Learning Models | en_US |
dc.type | Conference Object | en_US |
dc.institutionauthor | … | - |
dc.department | İzmir Institute of Technology | en_US |
dc.identifier.scopus | 2-s2.0-85178269588 | en_US |
dc.relation.publicationcategory | Konferans Öğesi - Uluslararası - Kurum Öğretim Elemanı / Conference Item - International - Institutional Faculty Member | en_US |
dc.identifier.doi | 10.1109/ASYU58738.2023.10296634 | - |
dc.authorscopusid | 57205611298 | - |
dc.authorscopusid | 15833922000 | - |
dc.identifier.wosquality | N/A | - |
dc.identifier.scopusquality | N/A | - |
item.fulltext | No Fulltext | - |
item.grantfulltext | none | - |
item.languageiso639-1 | en | - |
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | - |
item.cerifentitytype | Publications | - |
item.openairetype | Conference Object | - |
crisitem.author.dept | 03.04. Department of Computer Engineering | - |
Appears in Collections: | Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection |
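The abstract above describes two classifiers whose prediction results are ensembled: an LSTM over the target vehicle's image coordinates and a self-supervised contrastive video model. As a minimal illustrative sketch only (not the authors' released code), the PyTorch snippet below assumes a three-class output (left/right/no lane change, ordering assumed), a simple coordinate LSTM, and a stubbed score vector standing in for the video model; the ensemble here averages the two softmax distributions.

```python
# Illustrative sketch of the ensembling idea from the abstract -- not the paper's code.
# All names, dimensions, and the class ordering are assumptions for illustration.
import torch
import torch.nn as nn

NUM_CLASSES = 3  # assumed: left lane change, right lane change, no lane change


class CoordinateLSTM(nn.Module):
    """Classifies a sequence of (x, y) target-vehicle image coordinates."""

    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, NUM_CLASSES)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (batch, time, 2) -> class logits of shape (batch, NUM_CLASSES)
        _, (h_n, _) = self.lstm(coords)
        return self.head(h_n[-1])


def ensemble_predict(coord_logits: torch.Tensor, video_logits: torch.Tensor) -> torch.Tensor:
    """Average the two models' softmax scores and return the most likely maneuver."""
    probs = (torch.softmax(coord_logits, dim=-1) + torch.softmax(video_logits, dim=-1)) / 2
    return probs.argmax(dim=-1)


if __name__ == "__main__":
    model = CoordinateLSTM()
    coords = torch.randn(1, 30, 2)              # 30 frames of (x, y) positions
    video_logits = torch.randn(1, NUM_CLASSES)  # stand-in for the contrastive video model's output
    print(ensemble_predict(model(coords), video_logits))
```

Averaging softmax scores is one common fusion rule; the abstract does not specify the exact ensembling scheme used in the paper.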
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.