Please use this identifier to cite or link to this item: https://hdl.handle.net/11147/11404
Full metadata record
DC Field | Value | Language
dc.contributor.author | Sezerer, Erhan | -
dc.contributor.author | Tekir, Selma | -
dc.date.accessioned | 2021-11-06T09:48:29Z | -
dc.date.available | 2021-11-06T09:48:29Z | -
dc.date.issued | 2021 | -
dc.identifier.issn | 2076-3417 | -
dc.identifier.uri | https://doi.org/10.3390/app11178241 | -
dc.identifier.uri | https://hdl.handle.net/11147/11404 | -
dc.description.abstract | Over the last few years, there has been an increase in studies that incorporate experiential (visual) information by building multi-modal language models and representations. Several studies have shown that language acquisition in humans starts with learning concrete concepts through images and continues with learning abstract ideas through text. In this work, curriculum learning is used to teach the model concrete/abstract concepts through images and their corresponding captions, to accomplish multi-modal language modeling/representation. We use the BERT and ResNet-152 models on each modality and combine them using attentive pooling to perform pre-training on the newly constructed dataset, which is collected from Wikimedia Commons based on concrete/abstract words. To demonstrate the performance of the proposed model, downstream tasks and ablation studies are performed. The contribution of this work is two-fold: a new dataset is constructed from Wikimedia Commons based on concrete/abstract words, and a new multi-modal pre-training approach based on curriculum learning is proposed. The results show that the proposed multi-modal pre-training approach contributes to the success of the model. | en_US
dc.language.isoenen_US
dc.publisherMDPIen_US
dc.relation.ispartofApplied Sciencesen_US
dc.rightsinfo:eu-repo/semantics/openAccessen_US
dc.subjectMulti-modal dataseten_US
dc.subjectWikimedia Commonsen_US
dc.subjectMulti-modal language modelen_US
dc.subjectConcretenessen_US
dc.subjectCurriculum learningen_US
dc.titleIncorporating Concreteness in Multi-Modal Language Models With Curriculum Learningen_US
dc.typeArticleen_US
dc.authorid0000-0002-0488-9682-
dc.departmentİzmir Institute of Technology. Computer Engineeringen_US
dc.identifier.volume11en_US
dc.identifier.issue17en_US
dc.identifier.wosWOS:000695573500001en_US
dc.identifier.scopus2-s2.0-85114487960en_US
dc.relation.publicationcategory | Article - International Peer-Reviewed Journal - Institutional Faculty Member | en_US
dc.identifier.doi | 10.3390/app11178241 | -
dc.identifier.wosquality | Q2 | -
dc.identifier.scopusquality | Q3 | -
item.fulltext | With Fulltext | -
item.openairetype | Article | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.grantfulltext | open | -
item.languageiso639-1 | en | -
item.cerifentitytype | Publications | -
crisitem.author.dept | 03.04. Department of Computer Engineering | -
crisitem.author.dept | 03.04. Department of Computer Engineering | -
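The abstract describes combining BERT text features and ResNet-152 image features with attentive pooling. The record does not give the exact formulation, so the sketch below follows a common attentive-pooling scheme: an alignment matrix between token and region features, max-pooled into attention weights for each modality. The parameter matrix `U`, the function name, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_pooling(text_feats, img_feats, U):
    """Pool two feature sequences into one vector per modality.

    text_feats: (n, d) token embeddings (e.g. from a BERT encoder)
    img_feats:  (m, d) image-region features (e.g. from ResNet-152)
    U:          (d, d) learned bilinear alignment matrix (hypothetical)
    """
    # Alignment scores between every token and every image region.
    G = np.tanh(text_feats @ U @ img_feats.T)          # (n, m)
    # Each token's best alignment -> attention over tokens;
    # each region's best alignment -> attention over regions.
    attn_text = softmax(G.max(axis=1))                 # (n,)
    attn_img = softmax(G.max(axis=0))                  # (m,)
    # Attention-weighted sums give one pooled vector per modality.
    pooled_text = attn_text @ text_feats               # (d,)
    pooled_img = attn_img @ img_feats                  # (d,)
    return pooled_text, pooled_img
```

The pooled text and image vectors can then be concatenated or summed to form the joint representation used for pre-training objectives; that fusion step is likewise an assumption here.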
Appears in Collections:Computer Engineering / Bilgisayar Mühendisliği
Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
Files in This Item:
File | Size | Format
applsci-11-08241.pdf | 1 MB | Adobe PDF
Scopus Citations: 1 (checked on Dec 20, 2024)
Web of Science Citations: 1 (checked on Dec 21, 2024)
Page view(s): 642 (checked on Dec 23, 2024)
Download(s): 142 (checked on Dec 23, 2024)
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.