A federated explainable AI framework for smart agriculture: enhancing transparency, efficiency, and sustainability

Hassam Ahmed Tahir, Walaa Alayed, Waqar Ul Hassan

    Research output: Contribution to journal, Article, peer-review


    Abstract

    This paper presents a comprehensive framework for integrating Explainable Artificial Intelligence (XAI) into smart agriculture to address challenges of transparency, interpretability, and trust in AI-driven decision-making. Leveraging techniques such as Shapley Additive Explanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Gradient-weighted Class Activation Mapping (Grad-CAM), the framework provides actionable insights for predictive maintenance, crop health monitoring, and resource optimization. A hybrid methodology combines IoT-based data acquisition, Federated Learning (FL), and multimodal feature analysis to ensure scalability and privacy preservation. Additionally, the study introduces a multi-context agricultural dataset and a novel interpretability-accuracy metric to evaluate XAI models' adaptability across diverse agricultural settings. Experimental results demonstrate that the proposed framework achieves a strong balance between accuracy and interpretability, together with resource efficiency and robust decision-making in precision agriculture. This approach fosters sustainable practices while addressing ethical and practical challenges in democratizing AI in agriculture.
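    To illustrate the Federated Learning aggregation step the abstract refers to, the following is a minimal FedAvg-style sketch; the function name, client data, and weighting scheme are illustrative assumptions, not details taken from the paper:

    ```python
    import numpy as np

    def fedavg(client_weights, client_sizes):
        """Weighted average of client model parameters (FedAvg-style).

        client_weights: list of 1-D numpy arrays, one parameter vector per client.
        client_sizes: number of local training samples per client, used as
        aggregation weights so data-rich clients contribute proportionally more.
        """
        sizes = np.asarray(client_sizes, dtype=float)
        coeffs = sizes / sizes.sum()                 # per-client mixing weights
        stacked = np.stack(client_weights)           # shape: (n_clients, n_params)
        return (coeffs[:, None] * stacked).sum(axis=0)

    # Three hypothetical farm-edge clients that trained locally, then share
    # only their parameters (not raw sensor data) with the aggregator.
    clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
    sizes = [10, 10, 20]
    global_w = fedavg(clients, sizes)                # -> array([3.5, 4.5])
    ```

    In a full pipeline this aggregated model would then be redistributed to clients for the next training round, with XAI methods such as SHAP applied to the global model to explain its predictions.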

    Original language: English
    Pages (from-to): 97567-97584
    Number of pages: 18
    Journal: IEEE Access
    Volume: 13
    DOIs
    Publication status: Published - 2025

    Keywords

    • AI
    • drone
    • federated learning
    • IoT
    • ML
    • smart agriculture
    • XAI

