UPM Institutional Repository

Transformer-based explainable framework for Human Activity Recognition in IoT-enabled smart homes


Citation

Vijayananth, Vishalini and Perumal, Thinagaran and Marlisah, Erzam (2026) Transformer-based explainable framework for Human Activity Recognition in IoT-enabled smart homes. IEEE Access, 14. pp. 21055-21071. ISSN 2169-3536

Abstract

The growing integration of Internet of Things (IoT) devices in smart homes has enabled applications in comfort, energy efficiency, healthcare, and security. A key enabler of these applications is Human Activity Recognition (HAR), which allows smart systems to understand residents' activities and deliver personalised services. While single-resident HAR has been well studied, multi-resident scenarios are more complex due to collaborative and parallel activities, requiring effective modelling of temporal dependencies. Transformer-based models have shown promising results in this context, as self-attention can capture long-range temporal dependencies while allowing parallel computation. However, standard attention uses both past and future information, causing information leakage in real-time HAR. It also treats all time steps equally, ignoring the fact that event correlations in smart homes naturally decay over time and therefore call for a locality bias. Since these systems involve real residents, explainability is also crucial to ensure trust and transparency in activity classification. This paper proposes a Powerformer-based multi-output explainable framework incorporating a Weighted Causal Multi-Head Attention (WCMHA) mechanism with power-law decay to emphasise recent events while maintaining causal order, improving accuracy. A Multi-Output Temporal Local Interpretable Model-agnostic Explanations (MO-TLIME) technique is proposed, which identifies the key sensors and time segments influencing each resident's activity. Evaluated on publicly available multi-resident datasets, the proposed model outperforms existing deep learning baselines in both accuracy and efficiency. Statistical validation and ablation studies confirm its effectiveness, while explainability analyses demonstrate that the model focuses on contextually relevant sensor data and temporal segments.
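The abstract describes causal attention with a power-law decay bias: future time steps are masked, and attention to past events is down-weighted as a power of the time lag. The paper's exact WCMHA formulation is not given here, so the sketch below is only an illustration of that general idea for a single head, assuming an additive log-lag bias (equivalent to multiplying attention weights by (lag + 1)^(-alpha)); the function name, the decay form, and the parameter `alpha` are assumptions, not the authors' implementation.

```python
import numpy as np

def causal_powerlaw_attention(Q, K, V, alpha=1.0):
    """Single-head causal attention with a power-law decay bias (sketch).

    A query at time t may attend only to keys at times s <= t (causal
    mask), and each score is biased by -alpha * log(t - s + 1), which
    multiplies the softmax weight by (t - s + 1) ** -alpha so that
    recent events dominate older ones.
    """
    T, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)            # (T, T) scaled dot-product scores
    t = np.arange(T)
    lag = t[:, None] - t[None, :]            # lag[t, s] = t - s
    causal = lag >= 0                        # past and present only
    bias = np.where(causal, -alpha * np.log1p(np.maximum(lag, 0)), -np.inf)
    scores = scores + bias                   # -inf masks future positions
    # Row-wise softmax (numerically stabilised)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V, weights
```

With uniform queries and keys, the weights in each row decay with the lag, so the most recent sensor events receive the largest share of attention while causal ordering is preserved.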


Download File

123445.pdf - Published Version
Available under License Creative Commons Attribution.

File size: 2MB
Official URL: https://ieeexplore.ieee.org/document/11372702/

Additional Metadata

Item Type: Article
Subjects: Computer Science; Engineering; Information Science
Divisions: Faculty of Computer Science and Information Technology
DOI Number: https://doi.org/10.1109/ACCESS.2026.3661415
Publisher: Institute of Electrical and Electronics Engineers Inc.
Keywords: Explainable AI (XAI); Internet of Things (IoT); Local interpretable model-agnostic explanations (LIME); Multi-resident human activity recognition; Smart homes; Transformer
Depositing User: MS. HADIZAH NORDIN
Date Deposited: 10 Mar 2026 02:33
Last Modified: 10 Mar 2026 02:33
Altmetrics: http://www.altmetric.com/details.php?domain=psasir.upm.edu.my&doi=10.1109/ACCESS.2026.3661415
URI: http://psasir.upm.edu.my/id/eprint/123445
