Show simple item record

Thread Scheduling Mechanisms for Multiple-Context Parallel Processors

dc.date.accessioned  2004-10-20T20:27:52Z
dc.date.accessioned  2018-11-24T10:22:55Z
dc.date.available    2004-10-20T20:27:52Z
dc.date.available    2018-11-24T10:22:55Z
dc.date.issued       1995-06-01  en_US
dc.identifier.uri    http://hdl.handle.net/1721.1/7063
dc.identifier.uri    http://repository.aust.edu.ng/xmlui/handle/1721.1/7063
dc.description.abstract  Scheduling tasks to efficiently use the available processor resources is crucial to minimizing the runtime of applications on shared-memory parallel processors. One factor that contributes to poor processor utilization is the idle time caused by long-latency operations, such as remote memory references or processor synchronization operations. One way of tolerating this latency is to use a processor with multiple hardware contexts that can rapidly switch to executing another thread of computation whenever a long-latency operation occurs, thus increasing processor utilization by overlapping computation with communication. Although multiple contexts are effective for tolerating latency, this effectiveness can be limited by memory and network bandwidth, by cache interference effects among the multiple contexts, and by critical tasks sharing processor resources with less critical tasks. This thesis presents techniques that increase the effectiveness of multiple contexts by intelligently scheduling threads to make more efficient use of processor pipeline, bandwidth, and cache resources. It proposes thread prioritization as a fundamental mechanism for directing the thread schedule on a multiple-context processor. A priority is assigned to each thread, either statically or dynamically, and is used by the thread scheduler to decide which threads to load in the contexts and which context to switch to on a context switch. We develop a multiple-context model that integrates both cache and network effects and show how thread prioritization can maintain high processor utilization while limiting increases in critical-path runtime caused by multithreading. The model also shows that, to be effective in bandwidth-limited applications, thread prioritization must be extended to prioritize memory requests.
We show how simple hardware can prioritize the running of threads in the multiple contexts and the issuing of requests to both the local memory and the network. Simulation experiments show how thread prioritization is used in a variety of applications. Thread prioritization can improve the performance of synchronization primitives by minimizing the number of processor cycles wasted in spinning and devoting more cycles to critical threads. It can be used in combination with other techniques to improve cache performance and minimize cache interference between different working sets in the cache. For applications that are critical-path limited, thread prioritization can improve performance by allowing processor resources to be devoted preferentially to critical threads. These experimental results show that thread prioritization is a mechanism that can be used to implement a wide range of scheduling policies.  en_US
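The scheduling policy the abstract describes, where each thread carries a priority that governs both which threads are loaded into the hardware contexts and which loaded context runs after a long-latency stall, can be sketched in software. The following is a minimal illustrative sketch, not the thesis's hardware design: all class and method names are invented here, and it assumes lower numeric priority means more critical.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Thread:
    priority: int                      # lower value = more critical
    name: str = field(compare=False)   # name does not affect ordering

class MultiContextScheduler:
    """Illustrative model of prioritized scheduling on a processor with a
    fixed number of hardware contexts (names here are hypothetical)."""

    def __init__(self, num_contexts):
        self.num_contexts = num_contexts
        self.ready = []      # min-heap of runnable threads not yet loaded
        self.contexts = []   # threads currently loaded in hardware contexts

    def submit(self, thread):
        heapq.heappush(self.ready, thread)
        self._load_contexts()

    def _load_contexts(self):
        # Fill empty contexts with the most critical ready threads.
        while len(self.contexts) < self.num_contexts and self.ready:
            self.contexts.append(heapq.heappop(self.ready))

    def on_long_latency_stall(self, stalled):
        # On a context switch, resume the most critical loaded thread
        # other than the one that just stalled.
        candidates = [t for t in self.contexts if t is not stalled]
        return min(candidates, default=None)
```

For example, with two contexts and three submitted threads, the two most critical threads occupy the contexts; when the most critical one stalls on a remote memory reference, the scheduler switches to the next most critical loaded thread rather than an arbitrary one, which is how prioritization keeps critical-path threads from waiting behind less critical work.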
dc.format.extent  3195889 bytes
dc.format.extent  3161096 bytes
dc.language.iso   en_US
dc.title          Thread Scheduling Mechanisms for Multiple-Context Parallel Processors  en_US


Files in this item

Files          Size     Format                  View
AITR-1545.pdf  3.161Mb  application/pdf         View/Open
AITR-1545.ps   3.195Mb  application/postscript  View/Open
