The study of mixing times for Markov chains is an active area of research in modern probability, lying at the interface of mathematics, statistical physics and theoretical computer science. The mixing time of a Markov chain is the time it takes the chain to come close to its equilibrium (stationary) distribution. A variety of techniques are used to estimate mixing times, drawing on probability, representation theory and spectral theory. In this mini-course I will focus on probabilistic techniques and, in particular, present some recent results (see the references below) on connections between mixing times and hitting times of large sets.
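To make the definition concrete, here is a minimal sketch (not part of the course material) that computes the total-variation mixing time of a lazy simple random walk on an n-cycle. The helper names `tv_distance` and `mixing_time` are my own, and the chain is chosen purely for illustration.

```python
def mat_mul(A, B):
    # Multiply two square matrices given as lists of row lists.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tv_distance(mu, nu):
    # Total variation distance: half the L1 distance between two distributions.
    return 0.5 * sum(abs(a - b) for a, b in zip(mu, nu))

def mixing_time(P, pi, eps=0.25):
    # Smallest t such that max_x || P^t(x, .) - pi ||_TV <= eps.
    Pt = [row[:] for row in P]
    t = 1
    while max(tv_distance(row, pi) for row in Pt) > eps:
        Pt = mat_mul(Pt, P)
        t += 1
    return t

n = 6
# Lazy walk on the n-cycle: stay put with prob. 1/2, step to each neighbour with prob. 1/4.
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    P[i][i] = 0.5
    P[i][(i + 1) % n] = 0.25
    P[i][(i - 1) % n] = 0.25
pi = [1.0 / n] * n  # uniform stationary distribution of the lazy cycle walk

print(mixing_time(P, pi))
```

Running this prints the smallest t at which the worst-case starting state is within total-variation distance 1/4 of stationarity, which is the standard convention for t_mix used in Levin, Peres and Wilmer.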

**Prerequisites:**
It would be helpful to be familiar with Chapters 4 (mixing definitions) and 12 (spectral methods) of the book *Markov Chains and Mixing Times* by D. Levin, Y. Peres and E. Wilmer.

**Further reading:**

- D. Aldous (1982). Some inequalities for reversible Markov chains.
- R. Basu, J. Hermon and Y. Peres (2017). Characterization of cutoff for reversible Markov chains.
- S. Griffiths, R. J. Kang, R. Oliveira and V. Patel (2014). Tight inequalities among set hitting times in Markov chains.
- R. Oliveira (2012). Mixing and hitting times for finite Markov chains.
- Y. Peres and P. Sousi (2012). Mixing times are hitting times of large sets.
- P. Sousi and P. Winkler (2014). Mixing times and moving targets.