AISys Lab. ECE, Seoul National University

Welcome to AISys!

Accelerated Intelligent Systems Lab (AISys) is affiliated with ECE, Seoul National University. We conduct research on system and architectural issues for accelerating various applications such as deep learning, compression algorithms and graph processing.

Hiring

AISys Lab is currently looking for talented students (graduate students, undergraduate interns). Please contact leejinho at snu dot ac dot kr if you are interested.

We are recruiting new MS/PhD students and undergraduate interns year-round. If you are interested, please contact leejinho at snu dot ac dot kr.

News

2024

Jun. 2024 Our paper GraNNDis: Fast Distributed Graph Neural Network Training Framework for Multi-Server Clusters has been accepted to PACT 2024. Congratulations!
May. 2024 Our paper titled DataFreeShield: Defending Adversarial Attacks without Training Data has been accepted to ICML 2024. Congratulations to the authors!
Mar. 2024 Our paper titled PID-Comm: A Fast and Flexible Collective Communication Framework for Commodity Processing-in-DIMMs has been accepted to ISCA 2024. Congratulations to the authors, and see you in Buenos Aires!
Mar. 2024 Received the Best Paper Award Honorable Mention at HPCA 2024. Congratulations to the authors of "Smart-Infinity"!
Feb. 2024 Our paper titled PeerAiD: Improving Adversarial Distillation from a Specialized Peer Tutor has been accepted to CVPR 2024. Congratulations to the authors!
Feb. 2024 Our paper A Case for In-Memory Random Scatter-Gather for Fast Graph Processing has been accepted to IEEE CAL. Congratulations!

2023

Nov. 2023 We got a paper accepted to PPoPP 2024: AGAThA: Fast and Efficient GPU Acceleration of Guided Sequence Alignment for Long Read Mapping. Congratulations to the authors, and see you in Edinburgh!
Nov. 2023 Our paper Pipette: Automatic Fine-grained Large Language Model Training Configurator for Real-World Clusters has been accepted at DATE 2024. Congratulations!
Oct. 2023 We got a paper accepted in HPCA 2024: Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System. Congratulations to authors!
Sep. 2023 Jinho Lee joined ECE, Seoul National University as an assistant professor.

Research Topics

We conduct research on system and architectural issues for accelerating various applications such as deep learning, compression algorithms, and graph processing, especially on FPGAs and GPUs. Some of the ongoing research topics are listed below. However, you're free to bring your own exciting topic.

AI Accelerators

Without a doubt, the most popular accelerator for AI today is the GPU. However, the world is heading toward the next step: AI-specific accelerators. There is still much room for improvement in accelerator design, for example by optimizing dataflow, exploiting sparse network structures, or applying processing-in-memory techniques.
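As one concrete illustration of the sparsity direction, the sketch below (plain NumPy, illustrative only, not any particular accelerator design) stores a pruned weight matrix in CSR form so that only non-zero weights are multiplied, which is the arithmetic-skipping idea behind many sparse accelerators. The helper names dense_to_csr and csr_matvec and the 90% pruning rate are made up for the example.

```python
# Illustrative only: exploiting weight sparsity by skipping zero entries,
# one of the ideas behind sparse AI accelerators. Helper names are hypothetical.
import numpy as np

def dense_to_csr(w):
    """Convert a dense weight matrix to CSR (values, column indices, row pointers)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in w:
        for j, v in enumerate(row):
            if v != 0.0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))
    return np.array(values), np.array(col_idx, dtype=int), np.array(row_ptr)

def csr_matvec(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product: only the stored non-zero weights are touched."""
    y = np.zeros(len(row_ptr) - 1)
    for i in range(len(y)):
        lo, hi = row_ptr[i], row_ptr[i + 1]
        y[i] = values[lo:hi] @ x[col_idx[lo:hi]]
    return y

# Toy check: a ~90%-pruned layer needs roughly 10% of the multiply-accumulates.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)) * (rng.random((64, 64)) > 0.9)
x = rng.standard_normal(64)
vals, cols, ptrs = dense_to_csr(w)
assert np.allclose(csr_matvec(vals, cols, ptrs, x), w @ x)
```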

Distributed Deep Learning

To utilize multiple devices (e.g., GPUs) for high-speed DNN training, it is common to employ distributed learning. There are still many ways to improve current distributed learning methods: devising new communication algorithms, smartly pipelining the jobs, or changing the way devices synchronize.
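The minimal sketch below (single-process NumPy, hypothetical helper names local_gradient and allreduce_mean) mimics the communication pattern of synchronous data-parallel training: each worker computes a gradient on its own data shard, the gradients are averaged by an all-reduce, and every worker applies the identical update. Real systems run the workers on separate GPUs or nodes and overlap the all-reduce with backpropagation.

```python
# Minimal sketch of synchronous data-parallel training, simulated in one process.
# Only the communication pattern is shown; real systems use NCCL/MPI across GPUs.
import numpy as np

def local_gradient(w, x, y):
    """Least-squares gradient on one worker's data shard: d/dw mean((xw - y)^2)."""
    return 2.0 * x.T @ (x @ w - y) / len(y)

def allreduce_mean(grads):
    """Stand-in for an all-reduce: every worker ends up with the mean gradient."""
    return sum(grads) / len(grads)

rng = np.random.default_rng(0)
x = rng.standard_normal((1024, 16))
true_w = rng.standard_normal(16)
y = x @ true_w

num_workers, lr = 4, 0.1
shards = list(zip(np.array_split(x, num_workers), np.array_split(y, num_workers)))
w = np.zeros(16)

for step in range(200):
    grads = [local_gradient(w, xs, ys) for xs, ys in shards]  # parallel in practice
    w -= lr * allreduce_mean(grads)                           # identical update on every worker

print("error:", np.linalg.norm(w - true_w))
```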

Data-Free NN Compression

Many model compression techniques have been proposed to reduce the computational burden inherent to DNNs. Most of them rely on the original training data to compensate for the accuracy loss. However, the original data is often inaccessible due to privacy or copyright issues. Our research therefore focuses on compressing neural networks without the original dataset.
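As a rough illustration (PyTorch, toy models, not any specific published method), the sketch below distills a small student from a teacher using synthetic inputs in place of the unavailable training set. In practice the teacher is a pretrained model and the synthetic inputs are generated far more carefully, e.g., by matching the teacher's internal statistics or by training a generator.

```python
# Rough sketch of data-free knowledge distillation with toy models.
# The teacher here is randomly initialized for brevity; in practice it is a
# pretrained model whose training data is inaccessible.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10)).eval()
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))  # much smaller
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 4.0

for step in range(500):
    x = torch.randn(64, 32)                      # synthetic inputs instead of real data
    with torch.no_grad():
        t_logits = teacher(x)                    # teacher's soft targets
    s_logits = student(x)
    loss = F.kl_div(                             # distillation loss on softened logits
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
```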