Abstract
Real-time resource management and task distribution have become increasingly challenging as latency-critical applications spread across dispersed edge–cloud infrastructures. These settings require intelligent, adaptive mechanisms that run effectively on resource-constrained edge devices and respond quickly to dynamic workload changes. In this work, we present a lightweight, scalable learning-based system for autonomous resource allocation across the edge–cloud continuum. Two models are introduced: TinyDT, a compact offline decision tree trained on state-action data extracted from an adaptive baseline, and TinyXCS, an online rule-based classifier system that adapts to runtime conditions. Both models are designed to run on resource-constrained edge devices with minimal memory overhead and inference latency. Our evaluation shows that TinyXCS and TinyDT outperform existing online and offline baselines in terms of throughput and latency, providing a reliable, power-efficient solution for next-generation edge intelligence.
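To make the TinyDT idea concrete, the following is a minimal sketch of the kind of policy such a model yields: a shallow, hand-rolled decision tree that maps an observed device state to a placement action. The state features (`cpu_load`, `queue_len`, `rtt_ms`), thresholds, and action names are hypothetical illustrations, not details taken from the paper; the point is that a small fixed tree has a tiny memory footprint and constant-time inference, which is what makes it suitable for constrained edge devices.

```python
# Illustrative sketch only: a depth-2 decision tree mapping a device state to a
# placement action, in the spirit of TinyDT. All feature names and thresholds
# below are hypothetical, not taken from the paper.
from dataclasses import dataclass


@dataclass
class State:
    cpu_load: float   # fraction of local CPU in use, 0.0-1.0
    queue_len: int    # tasks waiting on the device
    rtt_ms: float     # round-trip time to the cloud tier, milliseconds


def tiny_dt_policy(s: State) -> str:
    """Shallow tree: cheap to store and to evaluate on a constrained device."""
    if s.cpu_load < 0.7:                      # device has headroom: keep it local
        return "local"
    if s.rtt_ms < 30.0 and s.queue_len > 4:   # cloud is close and queue is long
        return "cloud"
    return "local"                            # default: avoid costly offloading


print(tiny_dt_policy(State(cpu_load=0.4, queue_len=2, rtt_ms=50.0)))  # local
print(tiny_dt_policy(State(cpu_load=0.9, queue_len=8, rtt_ms=10.0)))  # cloud
```

In practice such a tree would be learned offline from logged state-action pairs (e.g. with a depth-limited tree learner) and then compiled into a few comparisons like the ones above.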
| Original language | English |
|---|---|
| Article number | 381 |
| Journal | Cluster Computing |
| Volume | 28 |
| Issue number | 6 |
| DOIs | |
| Publication status | Published - Oct 2025 |
| Externally published | Yes |
Bibliographical note
Publisher Copyright: © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
Keywords
- Cloud computing
- Edge computing
- Internet of Things (IoT)
- Tiny models
- Workload distribution