Publications

AITRICS' innovative research leads advances in medical artificial intelligence.

121. Representational Continuity for Unsupervised Continual Learning (ICLR 2022)
Divyam Madaan, Jaehong Yoon, Yuanchun Li, Yunxin Liu, Sung Ju Hwang
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously acquired knowledge. However, recent CL advanc...

120. Set Based Stochastic Subsampling (ICML 2022)
Bruno Andreis, Seanie Lee, A. Tuan Nguyen, Juho Lee, Eunho Yang, Sung Ju Hwang
Deep models are designed to operate on huge volumes of high dimensional data such as images. In order to reduce the volume of data these models must process,...

119. Sequential Reptile: Inter-Task Gradient Alignment for Multilingual Learning (ICLR 2022)
Seanie Lee, Hae Beom Lee, Juho Lee, Sung Ju Hwang
Multilingual models jointly pretrained on multiple languages have achieved remarkable performance on various multilingual downstream tasks. Mor...

118. Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations (ICML 2022)
Jaehyeong Jo, Seul Lee, Sung Ju Hwang
Generating graph-structured data requires learning the underlying distribution of graphs. Yet, this is a challenging problem, and the pre...
117. Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing (AAAI 2022)
Joonhyung Park, June Yong Yang, Jinwoo Shin, Sung Ju Hwang, Eunho Yang
The Mixup scheme suggests mixing a pair of samples to create an augmented training sample and has gained considerab...

116. Rethinking the Representational Continuity: Towards Unsupervised Continual Learning (ICLR 2022)
Divyam Madaan, Jaehong Yoon, Yuanchun Li, Yunxin Liu, Sung Ju Hwang
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously acquired knowledge. ...

115. Rethinking the Entropy of Instance in Adversarial Training (SaTML 2022)
Minseon Kim, Jihoon Tack, Jinwoo Shin, Sung Ju Hwang
Adversarial training, which minimizes the loss of adversarially-perturbed training examples, has been extensively studied as a solution to improv...

114. Real-Time Seizure Detection using EEG: A Comprehensive Comparison of Recent Approaches under a Realistic Setting (CHIL 2022)
Kwanhyung Lee, Hyewon Jeong, Seyun Kim, Donghwa Yang, Hoon-Chul Kang, Edward Choi

113. Online Hyperparameter Meta-Learning with Hypergradient Distillation (ICLR 2022)
Hae Beom Lee, Hayeon Lee, Jaewoong Shin, Eunho Yang, Timothy Hospedales, Sung Ju Hwang
Many gradient-based meta-learning methods assume a set of parameters that do not participate in inner-optim...
112. Online Coreset Selection for Rehearsal-based Continual Learning (ICLR 2022)
Jaehong Yoon, Divyam Madaan, Eunho Yang, Sung Ju Hwang
A dataset is a shred of crucial evidence to describe a task. However, each data point in the dataset does not have the same potential, as some of...