Ming Yin

Welcome to my homepage!

Jeff & Judy Henley Hall

Santa Barbara, CA 93106

ming_yin at ucsb dot edu

:smiley: I am a PhD student in the Department of Computer Science and the Department of Statistics and Applied Probability at UC Santa Barbara, pursuing PhDs in both departments. I work in the Statistical Machine Learning group, advised by Dr. Yu-Xiang Wang and co-advised by Dr. J.S. Rao. Prior to UCSB, I received my B.S. in Applied Math from the University of Science and Technology of China. I have also spent time at Princeton University and Amazon AWS AI Research.

I am interested in the broad area of machine learning, including reinforcement learning, large-scale optimization, and statistics. My current research primarily focuses on building statistical foundations for offline reinforcement learning. I enjoy understanding the theoretical underpinnings of algorithms that are of practical importance. Recently, I have started thinking about how to appropriately apply deep models to make RL practical.

The primary goal of this webpage is to share my thoughts and summarize the ideas behind my research and some topics I am interested in. Please check out the Blog page! For the list of papers, see the Publications page.

In my spare time, I enjoy playing sports (mostly basketball and soccer), traveling, and going on hikes with friends and family. I also like listening to classical music and playing the piano (just a little bit).

News

Nov 21, 2022 Instance-dependent Offline RL paper accepted to AAAI-23! Congrats to my JHU coauthors :sparkles:
Sep 16, 2022 Had a wonderful summer at AWS AI with Rasool Fakoor (and also Alex Smola) :smile:
Aug 29, 2022 Happy to be part of the PC for the 3rd (Launchpad) Offline RL Workshop at NeurIPS 2022.
Aug 16, 2022 Happy to be a reviewer for ICLR-2023, AAAI-23 and AISTATS-23.
May 15, 2022 Low-switching RL paper accepted to ICML-2022 and Offline SSP paper accepted to UAI-2022!
Mar 31, 2022 Found this thought-provoking blog post by John Schulman! Great attitude!!
Mar 30, 2022 Happy to be a reviewer for NeurIPS-2022.
Feb 13, 2022 Happy to review for the new venue Transactions on Machine Learning Research (TMLR)!
Jan 21, 2022 Optimal Offline Linear Representation RL paper accepted to ICLR-2022! :sparkles: :sparkles:
Dec 19, 2021 Happy to be a reviewer for ICML-2022.
Sep 29, 2021 Three papers accepted to NeurIPS 2021!
Aug 20, 2021 I am participating in the 2nd Offline DeepRL Workshop (NeurIPS 2021) as part of the PC.
Jul 12, 2021 Happy to be a reviewer for ICLR-2022, AISTATS-2022.
Jun 25, 2021 There is a new blog post explaining the Optimal Uniform OPE paper!
May 13, 2021 I am participating in the RL Theory Workshop (ICML 2021) as part of the PC.
Apr 5, 2021 Happy to be a reviewer for NeurIPS-2021.
Jan 28, 2021 Oral acceptance of Uniform OPE paper to AISTATS-2021! :sparkles: :smile:

Selected publications

  1. Preprint
    Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient
    Yin, Ming, Wang, Mengdi, and Wang, Yu-Xiang
arXiv preprint arXiv:2210.00750, 2022
  2. ICLR
    Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism
    Yin, Ming, Duan, Yaqi, Wang, Mengdi, and Wang, Yu-Xiang
    International Conference on Learning Representations, 2022
  3. UAI Spotlight
    Offline Stochastic Shortest Path: Learning, Evaluation and Towards Optimality
    Yin, Ming*, Chen, Wenjing*, Wang, Mengdi, and Wang, Yu-Xiang
    Uncertainty in Artificial Intelligence, 2022
  4. NeurIPS
    Towards Instance-optimal Offline Reinforcement Learning with Pessimism
    Yin, Ming, and Wang, Yu-Xiang
    Advances in Neural Information Processing Systems, 2021
  5. NeurIPS
    Optimal Uniform OPE and Model-based Offline Reinforcement Learning in Time-Homogeneous, Reward-Free and Task-Agnostic Settings
    Yin, Ming, and Wang, Yu-Xiang
    Advances in Neural Information Processing Systems (Short version at ICML RL Theory Workshop), 2021
  6. NeurIPS
    Near-Optimal Offline Reinforcement Learning via Double Variance Reduction
    Yin, Ming, Bai, Yu, and Wang, Yu-Xiang
    Advances in Neural Information Processing Systems (Short version at ICML RL Theory Workshop), 2021
  7. AISTATS Oral Presentation
    Near-optimal Provable Uniform Convergence in Offline Policy Evaluation for Reinforcement Learning
    Yin, Ming, Bai, Yu, and Wang, Yu-Xiang
International Conference on Artificial Intelligence and Statistics (Short version at NeurIPS 2020 Offline RL Workshop), 2021
  8. AISTATS
    Asymptotically Efficient Off-policy Evaluation for Tabular Reinforcement Learning
    Yin, Ming, and Wang, Yu-Xiang
    International Conference on Artificial Intelligence and Statistics, 2020