Random Slicing: Efficient and Scalable Data Placement for Large-scale Storage Systems
Appeared in ACM Transactions on Storage 10(3).
Abstract
The ever-growing amount of data requires highly scalable storage solutions. The most flexible approach is to use storage pools that can be expanded and scaled down by adding or removing storage devices. To make this approach usable, it is necessary to provide a solution for locating data items in such a dynamic environment. This article presents and evaluates Random Slicing, a data placement strategy that incorporates lessons learned from table-based, rule-based, and pseudo-randomized hashing strategies to provide a simple and efficient scheme that scales to exascale data. Random Slicing keeps a small table with information about previous storage system insert and remove operations, drastically reducing the required amount of randomness while delivering a perfect load distribution.
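The core idea behind the table the abstract mentions can be sketched as follows. This is an illustrative approximation, not the authors' implementation: the class and method names are invented, and it omits the paper's optimization that minimizes the number of interval fragments when devices are added. It partitions the unit interval [0, 1) into slices owned by devices, places a key by hashing it into [0, 1), and, when a device is added, cuts an equal fraction off every existing slice so each device ends up with a 1/n share.

```python
import bisect
import hashlib

class RandomSlicing:
    """Sketch of interval-based placement: [0, 1) is split into slices,
    each owned by a storage device; a key is placed by hashing it into
    [0, 1) and looking up the slice that contains the hash value."""

    def __init__(self):
        self.starts = []   # left endpoint of each slice, sorted ascending
        self.owners = []   # device id owning the slice starting there

    def add_device(self, dev_id):
        """Give the new device its fair 1/n share by cutting an equal
        fraction off the end of every existing slice.  (The paper's
        strategy additionally coalesces fragments to keep the table
        small; this sketch skips that step.)"""
        n = len(set(self.owners)) + 1
        if not self.starts:
            self.starts, self.owners = [0.0], [dev_id]
            return
        shrink = 1.0 / n  # total capacity the new device must receive
        new_starts, new_owners = [], []
        bounds = self.starts + [1.0]
        for i, owner in enumerate(self.owners):
            left, right = bounds[i], bounds[i + 1]
            keep = (right - left) * (1.0 - shrink)  # old owner keeps (n-1)/n
            new_starts.append(left)
            new_owners.append(owner)
            new_starts.append(left + keep)          # remainder goes to dev_id
            new_owners.append(dev_id)
        self.starts, self.owners = new_starts, new_owners

    def locate(self, key):
        """Hash the key into [0, 1) and return the owning device."""
        h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
        x = h / 2**64
        return self.owners[bisect.bisect_right(self.starts, x) - 1]
```

Because the boundaries are stored explicitly, a lookup is a single binary search over the table, and the hash function never changes as devices come and go, which is how the strategy avoids the large amount of randomness that pure pseudo-random placement needs.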
Publication date:
July 2014
Authors:
Alberto Miranda
Sascha Effert
Yangwook Kang
Ethan L. Miller
Ivan Popov
Andre Brinkmann
Tom Friedetzky
Toni Cortes
Projects:
Ultra-Large Scale Storage
Available media
Full paper text: PDF
Bibtex entry
@article{miranda-tos14,
  author  = {Alberto Miranda and Sascha Effert and Yangwook Kang and Ethan L. Miller and Ivan Popov and Andre Brinkmann and Tom Friedetzky and Toni Cortes},
  title   = {Random Slicing: Efficient and Scalable Data Placement for Large-scale Storage Systems},
  journal = {ACM Transactions on Storage},
  volume  = {10},
  number  = {3},
  month   = jul,
  year    = {2014},
}