Bandwidth modeling in large distributed systems for big data applications

Abstract

The emergence of Big Data applications poses new challenges in data management, such as the processing and movement of massive datasets. Volunteer computing has proven itself as a distributed paradigm that can fully support Big Data generation. This paradigm uses a large number of heterogeneous and unreliable Internet-connected hosts to provide Peta-scale computing power for scientific projects. As data sizes grow and more devices can potentially join a volunteer computing project, host bandwidth can become a major hindrance to analyzing the data these projects generate, especially when the analysis runs concurrently with data generation using either in-situ or in-transit processing. In this paper, we propose a bandwidth model for volunteer computing projects based on real trace data from the Docking@Home project, covering more than 280,000 hosts over a 5-year period. We validate the proposed statistical model using model-based and simulation-based techniques. Our modeling provides valuable insights into the concurrent integration of data generation with in-situ and in-transit analysis in the volunteer computing paradigm.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 15th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT 2014), 9-11 December 2014, Hong Kong |
| Publisher | IEEE |
| Pages | 21-27 |
| Number of pages | 7 |
| ISBN (Print) | 9781479983346 |
| DOIs | |
| Publication status | Published - 2014 |
| Event | PDCAT (Conference) - Duration: 9 Dec 2014 → … |