With Yosheb Getachew, Yujia Jin, Aaron Sidford, and Kevin Tian (2023). "Collection of new upper and lower sample complexity bounds for solving average-reward MDPs." SODA 2023: 4667-4767.
CV (last updated 01-2022): PDF. Contact.
With Rong Ge, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli. [pdf]
Sequential Matrix Completion. 9-21.
Contact: dwoodruf (at) cs (dot) cmu (dot) edu or dpwoodru (at) gmail (dot) com. CV (updated July 2021) [pdf] [talk] [poster].
Teaching: Winter 2020, teaching assistant for EE364a: Convex Optimization I, taught by John Duchi; Fall 2018 and Fall 2019, teaching assistant for CS265/CME309: Randomized Algorithms and Probabilistic Analysis, taught by Greg Valiant.
Conference Publications
2023: The Complexity of Infinite-Horizon General-Sum Stochastic Games. With Yujia Jin, Vidya Muthukumar, Aaron Sidford. To appear in Innovations in Theoretical Computer Science (ITCS 2023) (arXiv).
2022: Optimal and Adaptive Monteiro-Svaiter Acceleration. With Yair Carmon, Aaron Sidford.
Aaron Sidford joins Stanford's Management Science & Engineering department, launching the new winter class CS 269G / MS&E 313: "Almost Linear Time Graph Algorithms." Overview: This class will introduce the theoretical foundations of discrete mathematics and algorithms.
Symposium on Foundations of Computer Science (FOCS), 2020.
Efficiently Solving MDPs with Stochastic Mirror Descent.
475 Via Ortega
I develop new iterative methods and dynamic algorithms that complement each other, resulting in improved optimization algorithms.
Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford; 18(223):1-42, 2018.
2013. [pdf] Fourier Transformation at a Representation, Annie Marsden.
The Complexity of Infinite-Horizon General-Sum Stochastic Games, Yujia Jin, Vidya Muthukumar, Aaron Sidford, Innovations in Theoretical Computer Science (ITCS 2023).
Yair Carmon, Danielle Hausler, Arun Jambulapati, and Yujia Jin, Advances in Neural Information Processing Systems (NeurIPS 2022).
Moses Charikar, Zhihao Jiang, and Kirankumar Shiragur, Advances in Neural Information Processing Systems (NeurIPS 2022).
Further publication venues include: Symposium on Foundations of Computer Science (FOCS 2022); International Conference on Machine Learning (ICML 2022); Conference on Learning Theory (COLT 2022); International Colloquium on Automata, Languages and Programming (ICALP 2022); Symposium on Theory of Computing (STOC 2022); Symposium on Discrete Algorithms (SODA 2022); NeurIPS 2021; COLT 2021; ICML 2021; STOC 2021; SODA 2021; Innovations in Theoretical Computer Science (ITCS 2021); NeurIPS 2020; FOCS 2020; International Conference on Artificial Intelligence and Statistics (AISTATS 2020); ICML 2020; COLT 2020; STOC 2020; International Conference on Algorithmic Learning Theory (ALT 2020); SODA 2020; NeurIPS 2019; FOCS 2019; COLT 2019; STOC 2019; SODA 2019; NeurIPS 2018; FOCS 2018; COLT 2018; SODA 2018; ITCS 2018; FOCS 2017; ICML 2017; STOC 2017; FOCS 2016; STOC 2016; COLT 2016; and ICML 2016.
Yu Gao, Yang P. Liu, Richard Peng, Faster Divergence Maximization for Faster Maximum Flow, FOCS 2020 [pdf] [poster].
"Faster algorithms for separable minimax, finite-sum and separable finite-sum minimax."
In Symposium on Theory of Computing (STOC 2020) (arXiv).
Constant Girth Approximation for Directed Graphs in Subquadratic Time, With Shiri Chechik, Yang P. Liu, and Omer Rotem.
Leverage Score Sampling for Faster Accelerated Regression and ERM, With Naman Agarwal, Sham Kakade, Rahul Kidambi, Yin Tat Lee, and Praneeth Netrapalli. In International Conference on Algorithmic Learning Theory (ALT 2020) (arXiv).
Near-optimal Approximate Discrete and Continuous Submodular Function Minimization. In Symposium on Discrete Algorithms (SODA 2020) (arXiv).
Fast and Space Efficient Spectral Sparsification in Dynamic Streams, With Michael Kapralov, Aida Mousavifar, Cameron Musco, Christopher Musco, Navid Nouri, and Jakab Tardos. In Conference on Neural Information Processing Systems (NeurIPS 2019).
Complexity of Highly Parallel Non-Smooth Convex Optimization, With Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, and Yuanzhi Li.
Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG.
A Direct Õ(1/ε) Iteration Parallel Algorithm for Optimal Transport. In Conference on Neural Information Processing Systems (NeurIPS 2019) (arXiv).
A General Framework for Efficient Symmetric Property Estimation, With Moses Charikar and Kirankumar Shiragur.
Parallel Reachability in Almost Linear Work and Square Root Depth. In Symposium on Foundations of Computer Science (FOCS 2019) (arXiv).
With Deeparnab Chakrabarty, Yin Tat Lee, Sahil Singla, and Sam Chiu-wai Wong.
Deterministic Approximation of Random Walks in Small Space, With Jack Murtagh, Omer Reingold, and Salil P. Vadhan. In International Workshop on Randomization and Computation (RANDOM 2019).
A Rank-1 Sketch for Matrix Multiplicative Weights, With Yair Carmon, John C. Duchi, and Kevin Tian. In Conference on Learning Theory (COLT 2019) (arXiv).
Near-optimal method for highly smooth convex optimization.
Efficient profile maximum likelihood for universal symmetric property estimation. In Symposium on Theory of Computing (STOC 2019) (arXiv).
Memory-sample tradeoffs for linear regression with small error.
Perron-Frobenius Theory in Nearly Linear Time: Positive Eigenvectors, M-matrices, Graph Kernels, and Other Applications, With AmirMahdi Ahmadinejad, Arun Jambulapati, and Amin Saberi. In Symposium on Discrete Algorithms (SODA 2019) (arXiv).
Exploiting Numerical Sparsity for Efficient Learning: Faster Eigenvector Computation and Regression. In Conference on Neural Information Processing Systems (NeurIPS 2018) (arXiv).
Near-Optimal Time and Sample Complexities for Solving Discounted Markov Decision Process with a Generative Model, With Mengdi Wang, Xian Wu, Lin F. Yang, and Yinyu Ye.
Coordinate Methods for Accelerating Regression and Faster Approximate Maximum Flow. In Symposium on Foundations of Computer Science (FOCS 2018).
Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations, With Michael B. Cohen, Jonathan A. Kelner, Rasmus Kyng, John Peebles, Richard Peng, and Anup B. Rao. In Symposium on Foundations of Computer Science (FOCS 2018) (arXiv).
Efficient Convex Optimization with Membership Oracles. In Conference on Learning Theory (COLT 2018) (arXiv).
Accelerating Stochastic Gradient Descent for Least Squares Regression, With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli.
Approximating Cycles in Directed Graphs: Fast Algorithms for Girth and Roundtrip Spanners.
This is the academic homepage of Yang Liu (I publish under Yang P. Liu). My research focuses on the design of efficient algorithms based on graph theory, convex optimization, and high-dimensional geometry (CV).
With Aaron Sidford.
I am a fifth-year Ph.D. student in Computer Science at Stanford University, co-advised by Gregory Valiant and John Duchi.
Faculty Spotlight: Aaron Sidford.
Summer 2022: I am currently a research scientist intern at DeepMind in London.
With Yair Carmon, Aaron Sidford, and Kevin Tian. July 8, 2022.
[last name]@stanford.edu where [last name]=sidford.
[pdf] with Aaron Sidford.
Aleksander Mądry; Generalized preconditioning and network flow problems.
Outdated CV [as of Dec '19].
Students: I am very lucky to advise the following Ph.D. students: Siddartha Devic (co-advised with Aleksandra Korolova).
"Geometric median in nearly linear time." In Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2016), Cambridge, MA, USA, June 18-21, 2016, pp.
Neural Information Processing Systems (NeurIPS, Spotlight), 2019. Variance Reduction for Matrix Games. 2013.
They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission.
which is why I created a
Emphasis will be on providing mathematical tools for combinatorial optimization, i.e.
We establish lower bounds on the complexity of finding \(\epsilon\)-stationary points of smooth, non-convex high-dimensional functions using first-order methods.
My long-term goal is to bring robots into human-centered domains such as homes and hospitals.
In each setting we provide faster exact and approximate algorithms.
SHUFE, where I was fortunate
Anup B. Rao.
Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, Sushant Sachdeva. Online Edge Coloring via Tree Recurrences and Correlation Decay, STOC 2022.
The design of algorithms is traditionally a discrete endeavor. My broad research interest is in theoretical computer science, and my focus is on fundamental mathematical problems in data science at the intersection of computer science, statistics, optimization, biology, and economics.
In Symposium on Discrete Algorithms (SODA 2018) (arXiv).
Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes.
Efficient Õ(n/ε) Spectral Sketches for the Laplacian and its Pseudoinverse.
Stability of the Lanczos Method for Matrix Function Approximation.
We provide a generic technique for constructing families of submodular functions to obtain lower bounds for submodular function minimization (SFM).
Stability of the Lanczos Method for Matrix Function Approximation. Cameron Musco, Christopher Musco, Aaron Sidford. ACM-SIAM Symposium on Discrete Algorithms (SODA) 2018.
I am a senior researcher in the Algorithms group at Microsoft Research Redmond.
David P. Woodruff.
"Team-convex-optimization for solving discounted and average-reward MDPs!"
Before attending Stanford, I graduated from MIT in May 2018.
Conference on Learning Theory (COLT), 2022. RECAPP: Crafting a More Efficient Catalyst for Convex Optimization.
ACM-SIAM Symposium on Discrete Algorithms (SODA), 2022. Stochastic Bias-Reduced Gradient Methods.
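For reference, the notion of \(\epsilon\)-stationarity used in the lower-bound statement above is typically formalized as follows (a standard convention; the paper's exact definition may differ):
\[
  x \in \mathbb{R}^d \ \text{is an } \epsilon\text{-stationary point of a smooth } f:\mathbb{R}^d \to \mathbb{R}
  \quad\Longleftrightarrow\quad
  \|\nabla f(x)\|_2 \le \epsilon.
\]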
Neural Information Processing Systems (NeurIPS), 2021. Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss, with Hilal Asi, Yair Carmon, Arun Jambulapati, and Aaron Sidford.
Aaron Sidford, Assistant Professor of Management Science and Engineering and of Computer Science. Contact information: Administrative Contact: Jackie Nguyen, Administrative Associate.
the Operations Research group.
Slides from my talk at ITCS.
Yair Carmon.
I hope you enjoy the content as much as I enjoyed teaching the class, and if you have questions or feedback on the notes, feel free to email me.
Google Scholar.
The Complexity of Infinite-Horizon General-Sum Stochastic Games; The Complexity of Optimizing Single and Multi-player Games; A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions; On the Sample Complexity for Average-reward Markov Decision Processes; Stochastic Methods for Matrix Games and its Applications; Acceleration with a Ball Optimization Oracle; Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG.
This work presents an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives that is Hessian free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications.
"A new Catalyst framework with relaxed error condition for faster finite-sum and minimax solvers."
Another research focus is optimization algorithms.
with Kevin Tian and Aaron Sidford.
The paper, "Efficient Convex Optimization Requires Superlinear Memory," was co-authored with Stanford professor Gregory Valiant as well as current Stanford student Annie Marsden and alumnus Vatsal Sharan.
Stanford, CA 94305.
We present an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives.
I am particularly interested in work at the intersection of continuous optimization, graph theory, numerical linear algebra, and data structures.
Roy Frostig, Sida Wang, Percy Liang, Chris Manning.
Aaron Sidford is an assistant professor in the departments of Management Science and Engineering and Computer Science at Stanford University.
With Sepehr Assadi, Arun Jambulapati, Aaron Sidford, and Kevin Tian, 2023.
Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More. Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Aaron Sidford, and Adrian Vladu. FOCS 2016. DOI: 10.1109/FOCS.2016.69.
Navajo Math Circles Instructor.
International Colloquium on Automata, Languages, and Programming (ICALP), 2022. Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods.
"A low-bias low-cost estimator of subproblem solution suffices for acceleration!"
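As background for the "Hessian free" remark in the accelerated-method abstract above: such methods typically access second-order information only through Hessian-vector products, which can be approximated from two gradient evaluations. A standard finite-difference identity (stated here as general background, not as this particular paper's construction) is
\[
  \nabla^2 f(x)\,v \;\approx\; \frac{\nabla f(x + h v) - \nabla f(x - h v)}{2h},
\]
with error at most \(\tfrac{L_2}{2}\,h\,\|v\|^2\) when \(\nabla^2 f\) is \(L_2\)-Lipschitz, so only gradient computations are required.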
2019 (and hopefully 2022 onwards, Covid permitting). For more information please watch this and please consider donating here!
The authors of most papers are ordered alphabetically.
Given a linear program with n variables, m > n constraints, and bit complexity L, our algorithm runs in Õ(√n · L) iterations, each consisting of solving Õ(1) linear systems and additional nearly linear time computation.
Yin Tat Lee and Aaron Sidford; An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations.
I am affiliated with the Stanford Theory Group and the Stanford Operations Research Group.
Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, Di Wang: Minimum Cost Flows, MDPs, and ℓ1-Regression in Nearly Linear Time for Dense Instances.
Yujia Jin.
Aaron's research interests lie in optimization, the theory of computation, and the
Neural Information Processing Systems (NeurIPS, Oral), 2019. A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions.
Deeparnab Chakrabarty, Andrei Graur, Haotian Jiang, Aaron Sidford.
"We characterize when solving the max \(\min_{x}\max_{i\in[n]}f_i(x)\) is (not) harder than solving the average \(\min_{x}\frac{1}{n}\sum_{i\in[n]}f_i(x)\)."
I maintain a mailing list for my graduate students and the broader Stanford community that is interested in the work of my research group.
Semantic parsing on Freebase from question-answer pairs.
International Conference on Machine Learning (ICML), 2020. Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG.
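In symbols, one way to read the linear-programming claim above (the canonical form and notation here are assumed for concreteness; \(\tilde{O}(\cdot)\) hides polylogarithmic factors):
\[
  \min_{x \in \mathbb{R}^n} \ c^\top x \quad \text{subject to} \quad Ax \ge b, \qquad A \in \mathbb{R}^{m \times n},\ m > n,
\]
solved in \(\tilde{O}(\sqrt{n}\,L)\) iterations, where each iteration requires solving \(\tilde{O}(1)\) linear systems plus additional nearly-linear-time computation.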