AISys Lab. ECE, Seoul National University

Welcome to AISys!

The Accelerated Intelligent Systems Lab (AISys) is affiliated with the Department of Electrical and Computer Engineering (ECE) at Seoul National University. We conduct research on system and architectural issues in accelerating applications such as deep learning, compression algorithms, and graph processing.

Hiring

AISys Lab recruits talented students (new graduate students and undergraduate interns) on a rolling basis. Please contact leejinho at snu dot ac dot kr if you are interested.

News

2025

Apr. 2025 Our paper G^3SA: A GPU-Accelerated Gold Standard Genetics Library for End-to-End Sequence Alignment has been accepted to ICS 2025. Congratulations to the authors!
Jan. 2025 Welcome to our lab, Dain Kwon and Jun Sung!

2024

Dec. 2024 Our paper MimiQ: Low-Bit Data-Free Quantization of Vision Transformers with Encouraging Inter-Head Attention Similarity has been accepted to AAAI 2025. Congratulations to the authors!
Nov. 2024 Our paper Piccolo: Large-Scale Graph Processing with Fine-Grained In-Memory Scatter-Gather has been accepted to HPCA 2025. Congratulations!
Aug. 2024 Sukjin Kim joined the lab. Welcome aboard!
Jun. 2024 Our paper GraNNDis: Fast Distributed Graph Neural Network Training Framework for Multi-Server Clusters has been accepted to PACT 2024. Congratulations!
May 2024 Our paper DataFreeShield: Defending Adversarial Attacks without Training Data has been accepted to ICML 2024. Congratulations to the authors!
Mar. 2024 Our paper PID-Comm: A Fast and Flexible Collective Communication Framework for Commodity Processing-in-DIMMs has been accepted to ISCA 2024. Congratulations to the authors, and see you in Buenos Aires!
Mar. 2024 Our paper "Smart-Infinity" received a Best Paper Award Honorable Mention at HPCA 2024. Congratulations to the authors!
Mar. 2024 A warm welcome to Changmin Shin, Hunseong Lim, Si Ung Noh, and Sunjong Park!

Research Topics

We conduct research on system and architectural issues in accelerating applications such as deep learning, compression algorithms, and graph processing, especially on FPGAs and GPUs. Some of our ongoing research topics are listed below; however, you're free to bring your own exciting topic.

AI Accelerators

Without a doubt, the most popular accelerator for AI today is the GPU. However, the field is heading toward the next step: AI-specific accelerators. There is still much room for improvement in accelerator design, for example by optimizing dataflow, exploiting sparse network structures, or applying processing-in-memory techniques; the sketch below illustrates the sparsity idea.
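To make the sparsity point concrete, here is a minimal NumPy sketch (an illustrative toy, not lab code) of the compressed sparse row (CSR) format that many sparse accelerators build on: by storing only the nonzero weights, a matrix-vector product skips every multiply-accumulate by zero.

```python
import numpy as np

def dense_to_csr(w):
    """Convert a dense weight matrix to CSR (values, column indices, row pointers)."""
    vals, cols, rowptr = [], [], [0]
    for row in w:
        nz = np.nonzero(row)[0]          # positions of nonzero weights in this row
        vals.extend(row[nz])
        cols.extend(nz)
        rowptr.append(len(vals))         # where the next row's nonzeros begin
    return np.array(vals), np.array(cols), np.array(rowptr)

def csr_matvec(vals, cols, rowptr, x):
    """y = W @ x, touching only the nonzero weights."""
    y = np.zeros(len(rowptr) - 1)
    for i in range(len(y)):
        lo, hi = rowptr[i], rowptr[i + 1]
        y[i] = vals[lo:hi] @ x[cols[lo:hi]]
    return y

# A 90%-sparse layer: only ~10% of the multiply-accumulates are performed.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)) * (rng.random((256, 256)) < 0.1)
x = rng.standard_normal(256)
vals, cols, rowptr = dense_to_csr(w)
assert np.allclose(csr_matvec(vals, cols, rowptr, x), w @ x)
```

A hardware accelerator applies the same principle with dedicated index-matching and gather units, so the skipped zeros save real cycles and energy rather than just instructions.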

Distributed Deep Learning

To utilize multiple devices (e.g., GPUs) for high-speed DNN training, it is common to employ distributed learning. There are still many ways to improve current distributed learning methods: devising new communication algorithms, smartly pipelining jobs, or changing the way devices synchronize, as the sketch below illustrates.
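As a minimal illustration of what a "communication algorithm" means here, the following NumPy sketch (a toy simulation, not lab code) implements the ring all-reduce pattern that underlies most data-parallel gradient synchronization: every device ends up with the sum of all gradients while exchanging only 1/p of the vector per step.

```python
import numpy as np

def ring_allreduce(grads):
    """Simulate ring all-reduce over a list of per-device gradient vectors."""
    p = len(grads)
    # Each device splits its local gradient into p chunks.
    chunks = [np.array_split(g.astype(float), p) for g in grads]

    # Phase 1, reduce-scatter: after p-1 steps, device i holds the fully
    # reduced (summed) chunk (i + 1) % p.
    for step in range(p - 1):
        sends = [(i, (i - step) % p, chunks[i][(i - step) % p].copy())
                 for i in range(p)]          # snapshot: all sends happen "in parallel"
        for i, c, data in sends:
            chunks[(i + 1) % p][c] += data   # neighbor accumulates the received chunk

    # Phase 2, all-gather: circulate the reduced chunks around the ring so
    # every device ends up with the complete reduced vector.
    for step in range(p - 1):
        sends = [(i, (i + 1 - step) % p, chunks[i][(i + 1 - step) % p].copy())
                 for i in range(p)]
        for i, c, data in sends:
            chunks[(i + 1) % p][c] = data    # neighbor overwrites with the reduced chunk

    return [np.concatenate(chunks[i]) for i in range(p)]

# Four simulated devices, each holding a local gradient of length 12.
rng = np.random.default_rng(0)
grads = [rng.standard_normal(12) for _ in range(4)]
assert all(np.allclose(r, sum(grads)) for r in ring_allreduce(grads))
```

Research questions start exactly here: the ring needs 2(p-1) steps, so one can trade latency for bandwidth with tree- or hierarchy-based schedules, or overlap these steps with backpropagation instead of running them afterwards.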

Data-Free NN Compression

Many model compression techniques have been proposed to reduce the heavy computation inherent in DNNs. Most of them rely on the original training data to recover the accuracy lost during compression. In practice, however, the original data is often inaccessible due to privacy or copyright issues. Our research therefore focuses on compressing neural networks without access to the original dataset.
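One common data-free trick is to synthesize calibration inputs from the statistics a trained model already stores. The PyTorch sketch below (a simplified toy on a made-up two-layer network, not the lab's method) optimizes random noise so that its per-layer batch statistics match the BatchNorm running statistics, then uses the synthetic batch to pick a quantization range.

```python
import torch
import torch.nn as nn

# Toy "pretrained" network; its BatchNorm running statistics implicitly
# encode information about the (inaccessible) training data.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)   # only the synthetic inputs are optimized

# Forward hooks collect a statistics-matching loss at every BN layer.
stats = []
def make_hook(bn):
    def hook(_, inputs, __):
        x = inputs[0]
        mean = x.mean(dim=(0, 2, 3))
        var = x.var(dim=(0, 2, 3), unbiased=False)
        stats.append(((mean - bn.running_mean) ** 2).mean()
                     + ((var - bn.running_var) ** 2).mean())
    return hook

for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.register_forward_hook(make_hook(m))

# Optimize random noise into a "synthetic" calibration batch.
x = torch.randn(8, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.1)
for _ in range(100):
    stats.clear()
    opt.zero_grad()
    model(x)
    torch.stack(stats).sum().backward()
    opt.step()

# The synthetic batch can now calibrate compression, e.g. a simple
# min/max range for uniform 8-bit activation quantization.
with torch.no_grad():
    act = model(x)
    scale = (act.max() - act.min()) / 255.0
```

Real data-free methods add many refinements on top of this skeleton, such as class-conditional synthesis, distillation losses against the full-precision model, or (as in our MimiQ work) architecture-specific signals for models like vision transformers.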