Hardware prefetchers for emerging parallel applications
Date Issued
22-10-2012
Author(s)
Panda, Biswabandan
Indian Institute of Technology, Madras
Abstract
Hardware prefetching has been studied in the past for multiprogrammed workloads as well as GPUs. Efficient hardware prefetchers such as stream-based or GHB-based ones work well for multiprogrammed workloads because different programs are mapped to different cores and run independently. Parallel applications, however, pose a different set of challenges. Multiple threads of a parallel application share data with each other, which introduces coherence issues. Also, local prefetchers do not capture the irregular spread of misses across threads. In this paper, we propose a hardware prefetching framework for the L1 D-Cache that targets parallel applications. We show how to make efficient prefetch requests to the L2 cache by studying and classifying the patterns of L1 misses across all threads. Our preliminary results show an average improvement of 7% in execution time on the PARSEC benchmark suite. Copyright © 2012 by the Association for Computing Machinery, Inc. (ACM).
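The core idea summarized in the abstract, observing each thread's L1 miss stream, classifying its pattern, and issuing prefetch requests to the L2 cache when the pattern is regular, can be illustrated with a small sketch. The code below is a hypothetical, software-level illustration only: the structure, its field names, and the simple constant-stride classification are assumptions for exposition, not the paper's actual mechanism.

```cpp
// Hypothetical sketch (not from the paper): track recent L1 D-Cache miss
// addresses per thread, classify the pattern, and generate L2 prefetch candidates.
#include <cstdint>
#include <deque>
#include <vector>

enum class Pattern { Unknown, ConstantStride, Irregular };

struct ThreadMissHistory {
    std::deque<uint64_t> misses;          // recent L1 miss addresses for one thread
    static constexpr size_t kDepth = 4;   // history depth (assumed value)

    void record(uint64_t addr) {
        misses.push_back(addr);
        if (misses.size() > kDepth) misses.pop_front();
    }

    // Classify the recent miss stream: a repeated delta means a constant stride.
    Pattern classify(int64_t &stride) const {
        if (misses.size() < 3) return Pattern::Unknown;
        stride = static_cast<int64_t>(misses[1]) - static_cast<int64_t>(misses[0]);
        for (size_t i = 2; i < misses.size(); ++i)
            if (static_cast<int64_t>(misses[i]) -
                static_cast<int64_t>(misses[i - 1]) != stride)
                return Pattern::Irregular;
        return Pattern::ConstantStride;
    }

    // Produce prefetch addresses to send to the L2 cache for regular patterns.
    std::vector<uint64_t> prefetchCandidates(size_t degree) const {
        int64_t stride = 0;
        std::vector<uint64_t> out;
        if (classify(stride) != Pattern::ConstantStride || stride == 0) return out;
        uint64_t next = misses.back();
        for (size_t i = 0; i < degree; ++i) {
            next = static_cast<uint64_t>(static_cast<int64_t>(next) + stride);
            out.push_back(next);
        }
        return out;
    }
};
```

In this simplified view, each thread keeps its own history so that the irregular spread of misses across threads does not pollute a single global pattern detector; only threads with a recognizable stride trigger prefetch requests to L2.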
Subjects