MxTasks: a novel processing model to support data processing on modern hardware
Date
2023
Abstract
The hardware landscape has changed rapidly in recent years. Modern hardware in today's servers is characterized by many CPU cores, multiple sockets, and vast amounts of main memory structured in NUMA hierarchies.
To benefit from these highly parallel systems, software has to adapt and actively engage with the newly available features.
However, the processing models forming the foundation for many performance-oriented applications have remained essentially unchanged.
Threads, which serve as the central processing abstraction, can be considered a "black box" that allows hardly any transparency between the application and the system underneath.
On the one hand, applications possess knowledge that could assist the system in optimizing the execution, such as the accessed data objects and their access patterns.
On the other hand, the limited opportunities for information exchange force operating systems to make assumptions about the applications' intentions in order to optimize their execution, e.g., for local data access.
Applications, in turn, implement their own optimizations tailored to specific situations, such as sophisticated synchronization mechanisms and hardware-conscious data structures.
This work presents MxTasking, a task-based runtime environment that assists the design of data structures and applications for contemporary hardware.
MxTasking rethinks the interfaces between performance-oriented applications and the execution substrate, streamlining the information exchange between both layers.
By breaking patterns of processing models designed with past generations of hardware in mind, MxTasking creates novel opportunities to manage resources in a hardware- and application-conscious way.
Accordingly, we question the granularity of "conventional" threads and show that fine-granular MxTasks are a viable abstraction unit for characterizing and optimizing the execution in a general way.
Using various demonstrators in the context of database management systems, we illustrate the practical benefits and explore how challenges like memory access latencies and error-prone synchronization of concurrent accesses can be addressed straightforwardly and effectively.
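To make the idea of annotated, fine-granular tasks more concrete, the following C++ sketch shows one possible shape of such an interface. It is an illustration only and not the actual MxTasking API: the names Task, Runtime, AccessMode, and spawn are hypothetical, and the deliberately simple single-threaded executor only hints at how a runtime could exploit the annotations, e.g., by prefetching the announced data object before a task runs, while a multi-worker scheduler could additionally use the access mode to order conflicting writers without explicit locks in application code.

// Hypothetical sketch of a task-based runtime with data-object annotations.
// Not the MxTasking API; requires GCC or Clang for __builtin_prefetch.
#include <cstddef>
#include <cstdint>
#include <deque>
#include <functional>
#include <iostream>
#include <vector>

enum class AccessMode { Read, Write };

struct Task {
    const void* object;           // annotated data object the task will touch
    std::size_t size;             // size of the object, used for prefetching
    AccessMode mode;              // read or write access (hint for scheduling)
    std::function<void()> body;   // the actual work
};

class Runtime {
public:
    void spawn(Task task) { queue_.push_back(std::move(task)); }

    // Single-threaded executor: the annotation is used to prefetch the object
    // before the task runs. A multi-worker variant could also route tasks that
    // write the same object to one worker, serializing them without latches.
    void run() {
        while (!queue_.empty()) {
            Task task = std::move(queue_.front());
            queue_.pop_front();
            prefetch(task.object, task.size);
            task.body();
        }
    }

private:
    static void prefetch(const void* object, std::size_t size) {
        const auto* bytes = static_cast<const char*>(object);
        for (std::size_t offset = 0; offset < size; offset += 64) {
            __builtin_prefetch(bytes + offset);  // one cache line at a time
        }
    }

    std::deque<Task> queue_;
};

int main() {
    std::vector<std::int64_t> counter(1, 0);
    Runtime runtime;

    // A writer task annotated with the object it modifies.
    runtime.spawn({counter.data(), counter.size() * sizeof(std::int64_t),
                   AccessMode::Write, [&counter] { counter[0] += 42; }});

    // A reader task on the same object; based on the annotations alone, the
    // runtime could order it after the conflicting writer.
    runtime.spawn({counter.data(), counter.size() * sizeof(std::int64_t),
                   AccessMode::Read,
                   [&counter] { std::cout << counter[0] << '\n'; }});

    runtime.run();
    return 0;
}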