Publications

AITRICS' innovative research leads advancements in medical artificial intelligence.

81. Learning to Perturb Word Embeddings for Out-of-distribution QA (ACL 2021)
    Seanie Lee, Minki Kang, Juho Lee and Sung Ju Hwang
    QA models based on pretrained language models have achieved remarkable performance on various benchmark datasets. However, QA models do not generalize well ...

80. Learning to Generate Noise for Multi-Attack Robustness (ICML 2021)
    Divyam Madaan, Jinwoo Shin and Sung Ju Hwang
    Adversarial learning has emerged as one of the successful techniques to circumvent the susceptibility of existing methods against adversarial perturbations. However, the ...

79. Learning How Long to Wait: Adaptively-Constrained Monotonic Multihead Attention for Streaming ASR (IEEE 2021)
    Jaeyun Song, Hajin Shim and Eunho Yang
    Monotonic Multihead Attention, which allows multiple heads to learn their own alignments per head, shows great performance on simultane ...

78. Large-Scale Meta-Learning with Continual Trajectory Shifting (ICML 2021)
    JaeWoong Shin, Hae Beom Lee, Boqing Gong and Sung Ju Hwang
    Meta-learning of shared initialization parameters has shown to be highly effective in solving few-shot learning tasks. However, extending the framewo ...

77. Hardware-Adaptive Efficient Latency Predictor for NAS via Meta-Learning (NeurIPS 2021)
    Hayeon Lee, Sewoong Lee, Song Chong and Sung Ju Hwang
    For deployment, neural architecture search should be hardware-aware, in order to satisfy the device-specific constraints (e.g., memory usage ...

76. GTA: Graph Truncated Attention for Retrosynthesis (AAAI 2021)
    Seung-Woo Seo, You Young Song, June Yong Yang, Seohui Bae, Hankook Lee, Jinwoo Shin, Sung Ju Hwang, and Eunho Yang
    Retrosynthesis is the task of predicting reactant molecules from a given product molecule and is, import ...

75. Hit and Lead Discovery with Explorative RL and Fragment-based Molecule Generation (NeurIPS 2021)
    Soojung Yang, Doyeong Hwang, Seul Lee, Seongok Ryu and Sung Ju Hwang
    Recently, utilizing reinforcement learning (RL) to generate molecules with desired properties has been highlighted ...

74. FedMix: Approximation of Mixup under Mean Augmented Federated Learning (ICLR 2021)
    Tehrim Yoon, Sumin Shin, Sung Ju Hwang and Eunho Yang
    Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device, thus preserving priv ...
73. Federated Semi-Supervised Learning with Inter-Client Consistency & Disjoint Learning (ICLR 2021; formerly ICML 2020 Workshop - Best Student Paper Award)
    Wonyong Jeong, Jaehong Yoon, Eunho Yang and Sung Ju Hwang
    While existing federated learning approaches mostl ...

72. Federated Continual Learning with Weighted Inter-client Transfer (ICML 2021)
    Jaehong Yoon, Wonyong Jeong, Giwoong Lee, Eunho Yang and Sung Ju Hwang
    There has been a surge of interest in continual learning and federated learning, both of which are important in deep neural n ...