Publications
AITRICS' innovative research takes the lead in advancements in medical artificial intelligence.

123. Skill-based Meta-Reinforcement Learning (ICLR 2022)
Taewook Nam, Shao-Hua Sun, Karl Pertsch, Sung Ju Hwang, Joseph J Lim
While deep reinforcement learning methods have shown impressive results in robot learning, their sample inefficiency makes the learning of complex, long-...

122. Set-based Meta-Interpolation for Few-Task Meta-Learning (NeurIPS 2022)
Seanie Lee, Bruno Andreis, Kenji Kawaguchi, Juho Lee, Sung Ju Hwang
Meta-learning approaches enable machine learning systems to adapt to new tasks given few examples by leveraging knowledge from related tasks. ...

121. Representational Continuity for Unsupervised Continual Learning (ICLR 2022)
Divyam Madaan, Jaehong Yoon, Yuanchun Li, Yunxin Liu, Sung Ju Hwang
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously acquired knowledge. However, recent CL advanc...

120. Set Based Stochastic Subsampling (ICML 2022)
Bruno Andreis, Seanie Lee, A. Tuan Nguyen, Juho Lee, Eunho Yang, Sung Ju Hwang
Deep models are designed to operate on huge volumes of high dimensional data such as images. In order to reduce the volume of data these models must process, ...

119. Sequential Reptile: Inter-Task Gradient Alignment for Multilingual Learning (ICLR 2022)
Seanie Lee, Hae Beom Lee, Juho Lee, Sung Ju Hwang
Multilingual models jointly pretrained on multiple languages have achieved remarkable performance on various multilingual downstream tasks. Mor...

118. Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations (ICML 2022)
Jaehyeong Jo, Seul Lee, Sung Ju Hwang
Generating graph-structured data requires learning the underlying distribution of graphs. Yet, this is a challenging problem, and the pre...

117. Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing (AAAI 2022)
Joonhyung Park, June Yong Yang, Jinwoo Shin, Sung Ju Hwang, Eunho Yang
The Mixup scheme suggests mixing a pair of samples to create an augmented training sample and has gained considerab...

116. Rethinking the Representational Continuity: Towards Unsupervised Continual Learning (ICLR 2022)
Divyam Madaan, Jaehong Yoon, Yuanchun Li, Yunxin Liu, Sung Ju Hwang
Continual learning (CL) aims to learn a sequence of tasks without forgetting the previously acquired knowledge. ...
115. Rethinking the Entropy of Instance in Adversarial Training (SaTML 2022)
Minseon Kim, Jihoon Tack, Jinwoo Shin, Sung Ju Hwang
Adversarial training, which minimizes the loss of adversarially-perturbed training examples, has been extensively studied as a solution to improv...

114. Real-Time Seizure Detection using EEG: A Comprehensive Comparison of Recent Approaches under a Realistic Setting (CHIL 2022)
Kwanhyung Lee, Hyewon Jeong, Seyun Kim, Donghwa Yang, Hoon-Chul Kang, Edward Choi