Please use this identifier to cite or link to this item:
https://hdl.handle.net/11147/15426
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Oğul, İ.Ü. | - |
dc.contributor.author | Soygazi, F. | - |
dc.contributor.author | Bostanoğlu, B.E. | - |
dc.date.accessioned | 2025-03-25T22:55:22Z | - |
dc.date.available | 2025-03-25T22:55:22Z | - |
dc.date.issued | 2025 | - |
dc.identifier.issn | 2376-5992 | - |
dc.identifier.uri | https://doi.org/10.7717/PEERJ-CS.2662 | - |
dc.identifier.uri | https://hdl.handle.net/11147/15426 | - |
dc.description.abstract | Natural language inference (NLI) is a subfield of natural language processing (NLP) that aims to identify the contextual relationship between premise and hypothesis sentences. While high-resource languages like English benefit from robust and rich NLI datasets, creating similar datasets for low-resource languages is challenging due to the cost and complexity of manual annotation. Although translation of existing datasets offers a practical solution, direct translation of domain-specific datasets presents unique challenges, particularly in handling abbreviations, metric conversions, and cultural alignment. This study introduces a pipeline for translating a medical NLI dataset into Turkish, a low-resource language. Our approach fine-tunes the Llama-3.1 model on selected samples from the Medical Abbreviation dataset (MeDAL) to extract and resolve medical abbreviations. The NLI pairs are then refined with the extracted abbreviations and subjected to metric correction, and the processed sentences are translated using Facebook’s No Language Left Behind (NLLB) translation model. To ensure quality, we conducted comprehensive evaluations using both machine learning models and medical expert review. Our results show that BERTurk achieved 75.17% accuracy on the TurkMedNLI test data and 76.30% on the normalized test set, while BioBERTurk demonstrated comparable performance with 75.59% accuracy on the test data and 72.29% on the normalized dataset. Medical experts further validated the translations through manual assessment of sampled sentences. This work demonstrates the effectiveness of large language models in adapting domain-specific datasets for low-resource languages, establishing a foundation for future research in multilingual biomedical NLP. Copyright 2025 Oğul et al. Distributed under Creative Commons CC-BY 4.0 | en_US |
dc.language.iso | en | en_US |
dc.publisher | PeerJ Inc. | en_US |
dc.relation.ispartof | PeerJ Computer Science | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | BERT | en_US |
dc.subject | Language Translation | en_US |
dc.subject | Llama | en_US |
dc.subject | LLM | en_US |
dc.subject | MedNLI | en_US |
dc.subject | Natural Language Inference | en_US |
dc.subject | Natural Language Processing | en_US |
dc.subject | NLLB | en_US |
dc.title | TurkMedNLI: a Turkish Medical Natural Language Inference Dataset Through Large Language Model-Based Translation | en_US |
dc.type | Article | en_US |
dc.department | İzmir Institute of Technology | en_US |
dc.identifier.volume | 11 | en_US |
dc.identifier.scopus | 2-s2.0-85219134639 | - |
dc.relation.publicationcategory | Article - International Peer-Reviewed Journal - Institutional Faculty Member | en_US |
dc.identifier.doi | 10.7717/PEERJ-CS.2662 | - |
dc.authorscopusid | 57195222455 | - |
dc.authorscopusid | 57220960947 | - |
dc.authorscopusid | 24478565000 | - |
dc.identifier.wosquality | Q2 | - |
dc.identifier.scopusquality | Q1 | - |
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | - |
item.languageiso639-1 | en | - |
item.openairetype | Article | - |
item.grantfulltext | none | - |
item.fulltext | No Fulltext | - |
item.cerifentitytype | Publications | - |
Appears in Collections: | Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection |
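The abstract describes a preprocessing stage that expands medical abbreviations and applies metric correction before translation. A minimal sketch of that idea is below; the abbreviation map and the Fahrenheit-to-Celsius rule are illustrative assumptions only (the paper derives abbreviation resolutions with a fine-tuned Llama-3.1 model over MeDAL samples, not a static dictionary).

```python
import re

# Hypothetical abbreviation map; stands in for the model-derived
# resolutions described in the abstract.
ABBREVIATIONS = {
    "BP": "blood pressure",
    "HR": "heart rate",
    "pt": "patient",
}

def expand_abbreviations(sentence: str) -> str:
    """Replace known abbreviations with their long forms (whole words only)."""
    for abbr, full in ABBREVIATIONS.items():
        sentence = re.sub(rf"\b{re.escape(abbr)}\b", full, sentence)
    return sentence

def convert_fahrenheit(sentence: str) -> str:
    """Convert temperatures like '98.6 F' to Celsius, as one example of
    the metric-correction step mentioned in the abstract."""
    def repl(m: re.Match) -> str:
        celsius = (float(m.group(1)) - 32) * 5 / 9
        return f"{celsius:.1f} °C"
    return re.sub(r"(\d+(?:\.\d+)?)\s*°?F\b", repl, sentence)

premise = "pt presented with BP 120/80 and a temperature of 98.6 F."
print(convert_fahrenheit(expand_abbreviations(premise)))
# → patient presented with blood pressure 120/80 and a temperature of 37.0 °C.
```

Normalizing sentences this way before feeding them to the NLLB model avoids the translator mishandling domain abbreviations and imperial units.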