UPM Institutional Repository

Enhanced adverse drug event extraction using prefix-based multi-prompt tuning in transformer models


Citation

Modi, Salisu and Kasmiran, Khairul Azhar and Mohd Sharef, Nurfadhlina and Sharum, Mohd Yunus (2024) Enhanced adverse drug event extraction using prefix-based multi-prompt tuning in transformer models. International Journal on Informatics Visualization, 8 (3-2). pp. 1713-1719. ISSN 2549-9904; eISSN: 2549-9904

Abstract

Extracting mentions of adverse drug events and the relationships between them is crucial for effective pharmacovigilance and drug safety surveillance. Transformer-based models have recently improved this task substantially through fine-tuning. However, traditional fine-tuning of transformer models, especially those with many parameters, is resource-intensive and memory-inefficient, and often leaves a gap between the pre-training objective and downstream task-specific objectives. Soft prompting is a lightweight alternative that updates only a trainable prompt to guide task-specific adaptation and has shown performance comparable to traditional fine-tuning of large language models on simple tasks. Its effectiveness on complex tasks such as token-level sequence labelling, which requires multiple predictions for a single input sequence, remains underexplored, particularly in multi-task settings. Moreover, a single holistic prompt in a multi-task setting can be biased toward particular subtasks, and some prompt tokens can even degrade model predictions. This study proposes a prefix-based multi-prompt soft tuning method with attention-driven prompt token selection for tuning transformer models on multi-task dual sequence labelling for concept and relation extraction. We experimented with BERT and SciBERT using frozen and unfrozen parameter strategies. Our approach achieved state-of-the-art performance on the n2c2 2018 and TAC 2017 adverse drug event extraction datasets, with multi-prompt tuning of unfrozen models surpassing traditional fine-tuning, and it outperforms GatorTron, the largest clinical natural language processing model, on the n2c2 2018 dataset. These results highlight the potential of soft prompts for efficiently adapting large language models to complex downstream NLP tasks.
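To make the prefix-based multi-prompt idea concrete, the sketch below shows one plausible way per-task soft prompts could be prepended to a frozen BERT encoder for dual sequence labelling. It is an illustrative approximation, not the authors' implementation: the model name, prompt length, and label counts are assumptions, and the attention-driven prompt token selection described in the abstract is omitted.

# Minimal sketch (assumptions noted above): two trainable prompt matrices, one per
# subtask (e.g. concept labelling and relation labelling), are prepended to the frozen
# BERT input embeddings; only the prompts and small task heads are updated.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiPromptTagger(nn.Module):
    def __init__(self, model_name="bert-base-uncased",
                 num_prompt_tokens=20, num_tasks=2, num_labels=(9, 5)):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Freeze the backbone: only prompts and heads receive gradients.
        for p in self.encoder.parameters():
            p.requires_grad = False
        self.num_prompt_tokens = num_prompt_tokens
        # One soft prompt per subtask (multi-prompt) instead of a single holistic prompt.
        self.prompts = nn.ParameterList(
            [nn.Parameter(torch.randn(num_prompt_tokens, hidden) * 0.02)
             for _ in range(num_tasks)])
        self.heads = nn.ModuleList([nn.Linear(hidden, n) for n in num_labels])

    def forward(self, input_ids, attention_mask, task_id):
        batch = input_ids.size(0)
        # Look up token embeddings from the frozen encoder's embedding layer.
        tok_emb = self.encoder.get_input_embeddings()(input_ids)
        prompt = self.prompts[task_id].unsqueeze(0).expand(batch, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)  # prefix the soft prompt
        prompt_mask = torch.ones(batch, self.num_prompt_tokens,
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        extended_mask = torch.cat([prompt_mask, attention_mask], dim=1)
        out = self.encoder(inputs_embeds=inputs_embeds,
                           attention_mask=extended_mask).last_hidden_state
        # Drop prompt positions; classify only the original tokens (sequence labelling).
        token_states = out[:, self.num_prompt_tokens:, :]
        return self.heads[task_id](token_states)  # [batch, seq_len, num_labels]

# Example usage with hypothetical inputs; task_id selects the subtask-specific prompt.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiPromptTagger()
enc = tokenizer(["The patient developed a rash after taking penicillin."],
                return_tensors="pt", padding=True)
logits = model(enc["input_ids"], enc["attention_mask"], task_id=0)

In an "unfrozen" variant such as the abstract mentions, the backbone freezing loop would simply be removed so the encoder parameters are updated alongside the prompts.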


Download File

Text
117021.pdf - Published Version
Restricted to Repository staff only

Download (3MB)
Official URL or Download Paper: https://joiv.org/index.php/joiv/article/view/3454

Additional Metadata

Item Type: Article
Divisions: Faculty of Computer Science and Information Technology
DOI Number: https://doi.org/10.62527/joiv.8.3-2.3454
Publisher: Politeknik Negeri Padang
Keywords: Adverse drug event; Fine-tuning; Multi-prompt; Multi-task; Soft prompt tuning
Depositing User: Ms. Zaimah Saiful Yazan
Date Deposited: 07 Jul 2025 08:07
Last Modified: 07 Jul 2025 08:07
Altmetrics: http://www.altmetric.com/details.php?domain=psasir.upm.edu.my&doi=10.62527/joiv.8.3-2.3454
URI: http://psasir.upm.edu.my/id/eprint/117021
