Posts by Collection

portfolio

SimNet: Accurate and High-Performance Computer Architecture Simulation using Machine Learning


While discrete-event simulators are essential tools for architecture research, design, and development, their practicality is limited by an extremely long time-to-solution for realistic applications under investigation. This work describes a concerted effort in which machine learning (ML) is used to accelerate discrete-event simulation. First, an ML-based instruction latency prediction framework that accounts for both static instruction properties and dynamic processor states is constructed. Then, a GPU-accelerated parallel simulator is implemented based on the proposed instruction latency predictor, and its simulation accuracy and throughput are validated and evaluated against a state-of-the-art simulator. Leveraging modern GPUs, the ML-based simulator significantly outperforms traditional simulators.
Paper Presentation video
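
As a rough illustration of the framing above, the sketch below casts instruction latency prediction as supervised regression: the inputs combine static instruction properties with dynamic processor state, and the output is a latency in cycles. The feature names and the linear model are illustrative placeholders only, not SimNet's actual model architecture or feature set.

```python
# Minimal sketch (not SimNet's actual model): instruction latency prediction
# framed as supervised regression over static + dynamic features.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical features per dynamic instruction, e.g.:
#   [opcode id, num source operands, is_load, is_branch, ROB occupancy, pending cache misses]
X = rng.random((1000, 6))
# Observed latency (cycles), as would be collected from a reference simulator (synthetic here).
y = 1.0 + 3.0 * X[:, 2] + 5.0 * X[:, 5] + 0.1 * rng.standard_normal(1000)

# Fit a simple linear predictor with least squares; the real framework uses a
# learned (e.g. neural) model, but the input/output contract is the same idea.
Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # add bias column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def predict_latency(features):
    """Predict latency (cycles) for one instruction from its feature vector."""
    return float(np.append(features, 1.0) @ w)

print(predict_latency(X[0]))
```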

Scalable Triangle Counting with GPUs


Triangle counting is a graph algorithm that calculates the number of triangles involving each vertex in a graph. It is popular in social network analysis, where it is used to detect communities and measure their cohesiveness. In this project, we propose and implement an H-INDEX-based approach for triangle counting that avoids the preprocessing step of sorting the neighbor list. For a better memory access pattern, we further introduce an interleaved format for hash bucket storage. Taken together, H-INDEX achieves a computing rate beyond 100 billion TEPS on some graphs. This work was awarded Graph Challenge Champion at IEEE HPEC 2019.
Slides Source code
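
The sketch below illustrates only the core idea of hash-based neighbor-list intersection, which is what lets triangle counting skip sorting neighbor lists; the GPU-specific H-INDEX hashing, bucket layout, and interleaved storage are not reproduced here, and the helper names are hypothetical.

```python
# Minimal CPU sketch of hash-based neighbor-list intersection for per-vertex
# triangle counting; not the GPU H-INDEX implementation.
from collections import defaultdict

def per_vertex_triangles(edges):
    """Count, for each vertex, the triangles it participates in.

    Assumes each undirected edge appears exactly once in `edges`.
    """
    adj = defaultdict(set)
    for u, v in edges:
        if u != v:
            adj[u].add(v)
            adj[v].add(u)

    counts = defaultdict(int)
    for u, v in edges:
        if u == v:
            continue
        # Hash-based intersection: no sorted neighbor lists required.
        for w in adj[u] & adj[v]:
            counts[w] += 1  # w closes a triangle with edge (u, v)
    # Each triangle {a, b, c} credits each of its vertices exactly once,
    # via the edge opposite that vertex.
    return counts

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(dict(per_vertex_triangles(edges)))  # vertices 0, 1, 2 each in one triangle
```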

A Framework for Graph Sampling and Random Walk on GPUs


Many applications need to learn, mine, analyze, and visualize large-scale graphs. These graphs are often too large to be handled efficiently by conventional graph processing technologies. Recent literature suggests that graph sampling and random walk can be an efficient solution. In this paper, we propose, to the best of our knowledge, the first GPU-based framework for graph sampling and random walk. First, our framework provides a generic API that allows users to implement a wide range of sampling and random walk algorithms with ease. Second, offloading this framework onto the GPU, we introduce warp-centric parallel selection and two novel optimizations for collision migration. Third, to support graphs that exceed GPU memory capacity, we introduce efficient data transfer optimizations for out-of-memory and multi-GPU sampling, such as workload-aware scheduling and batched multi-instance sampling. Taken together, our framework consistently outperforms state-of-the-art projects while supporting a wide range of sampling and random walk algorithms.
Paper Source code Presentation video
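
To illustrate the generic-API idea in the abstract above, the hypothetical sketch below lets the caller supply a per-edge bias function while the framework performs the next-vertex selection of a random walk. The function names are placeholders and do not reflect the framework's real interface or its warp-centric GPU implementation.

```python
# Minimal CPU sketch of a user-programmable biased random walk; the real
# framework performs the selection step in parallel on the GPU.
import random

def random_walk(adj, start, length, bias=lambda cur, nxt: 1.0, seed=0):
    """Biased random walk: `bias(cur, nxt)` gives the unnormalized transition weight."""
    rng = random.Random(seed)
    walk = [start]
    cur = start
    for _ in range(length):
        nbrs = adj.get(cur, [])
        if not nbrs:
            break
        weights = [bias(cur, n) for n in nbrs]
        cur = rng.choices(nbrs, weights=weights, k=1)[0]
        walk.append(cur)
    return walk

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
# Unbiased walk vs. a walk biased toward higher-numbered neighbors.
print(random_walk(adj, 0, 5))
print(random_walk(adj, 0, 5, bias=lambda cur, nxt: nxt + 1))
```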

publications

talks

teaching