AITRICS
Technology
With the world’s most reliable, trustworthy,
and interpretable state-of-the-art technology,
we can change the way AI is developed
and utilized.


Transfer Learning
Improving generalization performance and data efficiency by transferring knowledge acquired by a trained model to a new problem
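As a minimal sketch of the idea (the extractor weights and dataset below are hypothetical stand-ins, not an actual pretrained model): the backbone's transferred weights are frozen, and only a small new head is trained on the target task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained feature extractor: in practice W_pretrained
# would come from a model trained on a large source dataset.
W_pretrained = rng.normal(size=(4, 8))

def extract_features(x):
    # Frozen backbone: the transferred weights are reused, not updated.
    return np.tanh(x @ W_pretrained)

# Small target-task dataset (hypothetical): label is the sign of feature 0.
X = rng.normal(size=(64, 4))
y = (X[:, 0] > 0).astype(float)
F = extract_features(X)

# Only the new task head is trained: logistic regression on frozen features.
w_head, b_head = np.zeros(8), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(F @ w_head + b_head)))
    w_head -= 0.5 * F.T @ (p - y) / len(X)
    b_head -= 0.5 * (p - y).mean()

accuracy = (((F @ w_head + b_head) > 0) == (y > 0.5)).mean()
```

Because the backbone is reused, the target task needs far fewer labeled samples than training from scratch would.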
Meta-Learning
Enabling AI models to quickly adapt to new tasks by training a model to generalize to various tasks
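One simple way to sketch this is a Reptile-style meta-update (an assumption for illustration; the task family and step sizes here are hypothetical): the initialization is repeatedly moved toward weights adapted on sampled tasks, so a few gradient steps suffice on a new task.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Hypothetical task family: 1-D linear regression y = a * x,
    # with the slope a drawn per task.
    a = rng.uniform(0.5, 1.5)
    x = rng.uniform(-1, 1, size=20)
    return x, a * x

def inner_sgd(w, x, y, steps=5, lr=0.1):
    # Task-specific adaptation: a few gradient steps on squared error.
    for _ in range(steps):
        w -= lr * 2 * np.mean((w * x - y) * x)
    return w

# Reptile-style meta-training: move the shared initialization toward
# the weights adapted on each sampled task.
w_meta = 0.0
for _ in range(200):
    x, y = sample_task()
    w_adapted = inner_sgd(w_meta, x, y)
    w_meta += 0.5 * (w_adapted - w_meta)
# w_meta ends up near the center of the task distribution,
# so adaptation to any new task in the family is fast.
```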
Data Augmentation & Perturbation
Improving generalization performance by increasing, or effectively expanding, the number of training samples through data augmentation and perturbation
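A minimal sketch (the batch and transform choices are hypothetical): label-preserving transforms such as flips plus small random perturbations turn each sample into several training variants.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny "image" batch (hypothetical): 4 samples of shape 8x8.
batch = rng.normal(size=(4, 8, 8))

def augment(x, rng):
    # Horizontal flip plus a small Gaussian perturbation: each pass over
    # the data yields new, label-preserving variants of the same samples.
    out = x[:, :, ::-1] if rng.random() < 0.5 else x
    return out + rng.normal(scale=0.05, size=out.shape)

augmented = np.concatenate([batch] + [augment(batch, rng) for _ in range(3)])
# The training set effectively grows from 4 to 16 samples.
```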
Neural Network Compression
Effectively reducing the memory and computation cost of neural network models through network weight reduction, bit compression, and knowledge distillation
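Two of these ingredients can be sketched in a few lines (the weight matrix and the 90% pruning ratio are illustrative assumptions): magnitude-based weight pruning followed by 8-bit quantization of the surviving weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))  # hypothetical trained layer weights

# Magnitude pruning: zero out the 90% of weights with smallest |value|.
threshold = np.quantile(np.abs(W), 0.9)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

# 8-bit linear quantization of the surviving weights
# (bit compression: 32-bit floats become 8-bit integers plus one scale).
scale = np.abs(W_pruned).max() / 127.0
W_q = np.round(W_pruned / scale).astype(np.int8)

sparsity = (W_pruned == 0).mean()
```

Sparse storage of `W_q` plus the single float `scale` is what yields the memory reduction; knowledge distillation would additionally train a smaller student network on the outputs of this larger one.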
Related Papers
ICML 2020
Self-supervised Label Augmentation via Input Transformations
ICML 2020
Cost-effective Interactive Attention Learning with Neural Attention Processes
ICML 2020
Adversarial Neural Pruning with Latent Vulnerability Suppression
ICLR 2019
Learning to Propagate Labels: Transductive Propagation Network for Few-shot Learning
arXiv 2018
Adaptive Network Sparsification via Dependent Variational Beta-Bernoulli Dropout


Interpretable ML
Providing an interpretable basis for prediction results at both the sample and model levels
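One common sample-level form of this is occlusion-style attribution (the toy linear model and baseline below are assumptions for illustration): each feature's contribution is the change in the prediction when that feature is replaced by a baseline value.

```python
import numpy as np

# A trained model stands in here as a simple linear scorer
# (hypothetical weights; any black-box predict function works the same way).
w = np.array([2.0, -1.0, 0.0, 0.5])
predict = lambda x: float(x @ w)

x = np.array([1.0, 1.0, 1.0, 1.0])      # sample to explain
baseline = np.zeros_like(x)              # reference input

# Occlusion attribution: score drop when each feature is occluded.
attributions = []
for i in range(len(x)):
    x_occluded = x.copy()
    x_occluded[i] = baseline[i]
    attributions.append(predict(x) - predict(x_occluded))
```

For this linear scorer the attributions recover the weights times the inputs, which is exactly the per-feature basis of the prediction.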
Uncertainty Modeling / Quantification
Improving prediction reliability by modeling and quantifying the uncertainty in the model's knowledge and predictions
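A widely used sketch of this is Monte Carlo dropout (the untrained stand-in weights below are hypothetical): dropout stays active at prediction time, and the spread of repeated stochastic forward passes quantifies predictive uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in network weights (hypothetical, untrained).
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=(32, 1))

def mc_forward(x, rng, p_drop=0.5):
    h = np.maximum(x @ W1, 0.0)
    # Dropout stays active at prediction time: each pass samples a new mask.
    mask = rng.random(h.shape) > p_drop
    return (h * mask / (1 - p_drop)) @ W2

x = np.array([[0.3]])
samples = np.array([mc_forward(x, rng)[0, 0] for _ in range(200)])
mean, std = samples.mean(), samples.std()
# mean is the prediction; std quantifies the model's uncertainty at x.
```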
Adversarially-Robust ML
Learning models that remain robust against adversarial attacks designed to make prediction results inconsistent
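The canonical attack such training defends against is the fast gradient sign method (FGSM); a minimal sketch on a toy linear classifier (weights, input, and epsilon are illustrative assumptions):

```python
import numpy as np

# Linear classifier (hypothetical trained weights) and one input.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, -0.1, 0.4])
y = 1.0  # true label

def loss_grad_x(x, y, w):
    # Gradient of the logistic loss with respect to the input.
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    return (p - y) * w

# FGSM: perturb the input along the sign of the loss gradient.
eps = 0.1
x_adv = x + eps * np.sign(loss_grad_x(x, y, w))
# The perturbed score x_adv @ w is pushed toward the wrong class;
# adversarial training would include x_adv in the training set.
```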
Privacy-Preserving ML
Learning securely while protecting privacy-sensitive data
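One common mechanism for this is a differentially private gradient step in the style of DP-SGD (the gradients, clip norm, and noise multiplier below are hypothetical): clip each per-example gradient, then add calibrated Gaussian noise to the average.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example gradients for a batch of 32 samples.
per_example_grads = rng.normal(size=(32, 10))

# Clip each example's gradient so no single sample dominates.
clip_norm = 1.0
norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
clipped = per_example_grads / np.maximum(norms / clip_norm, 1.0)

# Add Gaussian noise calibrated to the clip norm; sigma sets the
# privacy/utility trade-off.
sigma = 1.0
noisy_grad = clipped.mean(axis=0) + rng.normal(
    scale=sigma * clip_norm / len(per_example_grads), size=10)
```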
Related Papers
ICML 2020 Workshop on Uncertainty and Robustness in Deep Learning
A benchmark study on reliable molecular supervised learning via Bayesian learning
ACS 2020
Comprehensive Study on Molecular Supervised Learning with Graph Neural Networks
Critical Care 2019
A Deep Learning Model for Real-time Mortality Prediction in Critically ill Children
NeurIPS 2018
Uncertainty-Aware Attention for Reliable Interpretation and Prediction


Meta-Learning
Training an artificial intelligence model to adapt quickly to new tasks by learning to generalize to a variety of tasks
Neural Architecture Search
Automatically searching for an optimal network architecture while accounting for the constraints of the data
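The simplest baseline for such a search is random sampling over an architecture space (the space, parameter budget, and proxy score here are hypothetical stand-ins; a real search would train and validate each candidate):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical search space: network depth and width per layer.
space = {"depth": [1, 2, 3], "width": [8, 16, 32]}

def proxy_score(depth, width, budget=1000):
    # Stand-in for train-and-validate: prefer architectures whose
    # parameter count fits a constraint (e.g. a memory budget).
    params = depth * width * width
    return -abs(params - budget)

# Random search over sampled architectures.
candidates = [(rng.choice(space["depth"]), rng.choice(space["width"]))
              for _ in range(20)]
best = max(candidates, key=lambda c: proxy_score(*c))
```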
Bayesian Optimization
Automatically searching for the hyperparameters of a black-box model through Bayesian inference
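A minimal sketch of the loop (the objective, kernel length-scale, and grid are illustrative assumptions): a Gaussian-process surrogate models the black-box function, and an expected-improvement criterion picks the next hyperparameter to evaluate.

```python
import numpy as np
from math import erf, sqrt, pi

def rbf(a, b, ls=0.2):
    # RBF kernel between two 1-D point sets.
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Gaussian-process posterior mean and variance at test points Xs.
    K_inv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
    Ks = rbf(X, Xs)
    mu = Ks.T @ K_inv @ y
    var = 1.0 - np.sum(Ks * (K_inv @ Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    # EI for maximization: expected gain over the best observed value.
    s = np.sqrt(var)
    z = (mu - best) / s
    cdf = 0.5 * (1 + np.vectorize(erf)(z / sqrt(2)))
    pdf = np.exp(-0.5 * z ** 2) / sqrt(2 * pi)
    return (mu - best) * cdf + s * pdf

# Hypothetical black-box objective over a hyperparameter in [0, 1],
# maximized at 0.7.
f = lambda x: -(x - 0.7) ** 2

X = np.array([0.1, 0.5, 0.9])
y = f(X)
grid = np.linspace(0, 1, 101)
for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, var, y.max()))]
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

best_x = X[np.argmax(y)]
```

Each iteration spends one expensive evaluation where the surrogate predicts the largest expected improvement, which is what makes the search sample-efficient compared with grid or random search.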
Related Papers
ICML 2019
Learning What and Where to Transfer
NeurIPS 2017 Workshop on Bayesian Optimization
Learning to Transfer Initializations for Bayesian Hyperparameter Optimization