I'm a first-year Ph.D. student at École polytechnique fédérale de Lausanne (EPFL), working on the Theory of Deep Learning, Statistical Physics of Computation, and Optimization under the supervision of Prof. Lenka Zdeborová and Prof. Florent Krzakala. I am also part of the ELLIS Ph.D. program, with Prof. Lenka Zdeborová at EPFL as my main supervisor and Prof. Bernhard Schölkopf at MPI-IS Tübingen as my secondary (host) supervisor.
During the summer of 2022, I was an intern under Prof. Simon Lacoste-Julien and Prof. Yoshua Bengio at the Montreal Institute for Learning Algorithms, where I worked on the design and analysis of Optimization, Sampling, and Trajectory Inference algorithms. Previously, I obtained a dual degree (BTech-MTech) in Computer Science, working under the supervision of Prof. Martin Jaggi at the MLO lab, EPFL, and Prof. Piyush Rai at IIT Kanpur, India.
During my time at IIT Kanpur and EPFL, I've had the fortune of working under an amazing set of supervisors, mentors, and collaborators (in no particular order): Martin Jaggi and multiple members of the MLO lab, Piyush Rai, Abhishek Kumar, Vinay Namboodiri, Arnout Devos (Ph.D. student under Prof. Matthias Grossglauser), and Arthur Jacot (Ph.D. student under Prof. Clement Hongler).
I also maintain an interest in Theoretical Physics, Theoretical Computer Science, and Causality.
BTech-MTech (Dual Degree) in Computer Science and Engineering, 2017
Indian Institute of Technology, Kanpur
Yatin Dandi, Aniket Das, Soumye Singhal, Vinay P. Namboodiri, Piyush Rai
2020 Winter Conference on Applications of Computer Vision (WACV '20)
Yatin Dandi, Homanga Bharadhwaj, Abhishek Kumar, Piyush Rai
AAAI Conference on Artificial Intelligence (AAAI-21), NeurIPS 2020 Workshop: Self-Supervised Learning - Theory and Practice
Work done under Prof. Piyush Rai, IIT Kanpur, and Abhishek Kumar, Google Research.
Arnout Devos*, Yatin Dandi*
Pre-registration Workshop, NeurIPS 2020. Full paper published in the Proceedings of Machine Learning Research (PMLR).
Work done under Prof. Matthias Grossglauser at the INDY lab, EPFL.
Yatin Dandi*, Luis Barba*, Martin Jaggi
AAAI Conference on Artificial Intelligence (AAAI-22), FL-ICML 2021: International Workshop on Federated Learning for User Privacy and Data Confidentiality in Conjunction with ICML 2021.
Work done under Prof. Martin Jaggi at the MLO lab, EPFL.
Avinandan Bose*, Aniket Das*, Yatin Dandi, Piyush Rai
The Symbiosis of Deep Learning and Differential Equations: DLDE Workshop, NeurIPS 2021.
Work done under Prof. Piyush Rai at IIT Kanpur, India.
Yatin Dandi, Arthur Jacot
Work done under Prof. Clement Hongler at the CSFT lab, EPFL.
Yatin Dandi, Anastasia Koloskova, Martin Jaggi, Sebastian U. Stich
The Symbiosis of Deep Learning and Differential Equations: DLDE Workshop, NeurIPS 2021.
Work done under Prof. Martin Jaggi at the MLO lab, EPFL.
Head over to my CV for details.
Research project under Prof. Matthias Grossglauser. Studied the theory of causal inference and surveyed recent work on causal inference and out-of-distribution generalization for several classes of machine learning models. Analyzed the relationship between out-of-distribution generalization, meta-learning, and causal inference, and introduced a new paradigm of "out-of-task-distribution generalization".
Research project under Tatjana Chavdarova. Analyzed the continuous-time limit of stochastic variants of first-order algorithms for differentiable games using the theory of Stochastic Differential Equations (SDEs), and derived first-order approximations for second-order algorithms such as SGA (Symplectic Gradient Adjustment) for n-player games.
Research project under Prof. Piyush Rai. Studied various approaches for learning disentangled representations of sequential data, such as new adversarial loss terms, factorized hierarchical priors, and exploiting the probabilistic model and architecture of an LSTM-based autoencoder to promote disentanglement. Implemented a Variational Autoencoder model for disentangling time-invariant content from dynamics in sequential data (Mandt et al.) using PyTorch, and experimented with modifications to the probabilistic model.
Implemented Deep Q-Learning and Policy Gradient methods for Atari games using PyTorch and OpenAI Gym, along with various classical RL methods using NumPy, such as Dynamic Programming (policy and value iteration), Monte Carlo (epsilon-greedy and off-policy), TD learning (Q-Learning and SARSA), and Q-Learning with function approximation. Presently studying state-of-the-art variants of actor-critic methods.
Project under the Programming Club, IIT Kanpur. Studied various encoder-decoder architectures for image captioning and implemented the model described in Show, Attend and Tell (Xu et al., 2015) using TensorFlow. Used the MS COCO dataset for training and evaluation.
Research project under Prof. Nisheeth Srivastava. The aim is to study the effect of varying multiple physical parameters in animated situations, to determine the cues underlying the inference of emotion and social situations without language.