Abstract
This paper presents a comprehensive framework for integrating Explainable Artificial Intelligence (XAI) into smart agriculture to address the challenges of transparency, interpretability, and trust associated with AI-driven decision-making. Leveraging techniques such as Shapley Additive Explanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Gradient-weighted Class Activation Mapping (Grad-CAM), the framework provides actionable insights into predictive maintenance, crop health monitoring, and resource optimization. A hybrid methodology combines IoT-based data acquisition, Federated Learning (FL), and multimodal feature analysis to ensure scalability and privacy preservation. The study also introduces a multi-context agricultural dataset and a novel interpretability-accuracy metric to evaluate the adaptability of XAI models across diverse agricultural settings. Experimental results show that the proposed framework achieves a strong balance between accuracy and interpretability while delivering resource efficiency and robust decision-making in precision agriculture. The approach fosters sustainable practices while addressing the ethical and practical challenges of democratizing AI in agriculture.
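For context, the sketch below shows how SHAP attributions of the kind the abstract describes might be computed for a tabular crop-health model. It is a minimal illustration, not the paper's implementation: the sensor feature names, the synthetic data, and the random-forest model are all assumptions made for the example.

```python
# Minimal sketch (not the paper's code): SHAP feature attributions for a
# hypothetical tabular crop-health regressor. Feature names, data, and
# model choice are illustrative assumptions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
features = ["soil_moisture", "leaf_temp", "ndvi", "humidity"]  # assumed sensor inputs
X = rng.random((500, len(features)))
# Toy target: a crop-stress index that rises with leaf temperature
# and falls with soil moisture, plus noise.
y = 0.7 * X[:, 1] - 0.5 * X[:, 0] + 0.1 * rng.standard_normal(500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Rank features by mean absolute attribution across the dataset.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(features, importance), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {score:.3f}")
```

A tree ensemble is used here only because TreeExplainer yields exact Shapley values cheaply; the same attribution pattern applies to the IoT-sensor models the abstract mentions.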
Original language | English
---|---
Pages (from-to) | 97567-97584
Number of pages | 18
Journal | IEEE Access
Volume | 13
Publication status | Published - 2025 |
Keywords
- AI
- drone
- federated learning
- IoT
- ML
- smart agriculture
- XAI