Consider the following $n \times n$ matrix:
$$ P = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 & 0 & 0 \\ 0 & 1/2 & 0 & 1/2 & 0 & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1/2 & 0 & 1/2 \\ 0 & \cdots & 0 & 0 & 1/2 & 1/2 \end{pmatrix} $$
Its largest eigenvalue is $1$, and the gap between $1$ and the second-largest eigenvalue $\lambda_2$ is of order $1/n^2$. The other day, someone offered me the following heuristic argument for why, for large $n$, this gap cannot be larger than $c/n^2$ for some constant $c>0$:
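For what it's worth, a quick numerical check is consistent with this claim: in the sketch below, $n^2 (1 - \lambda_2)$ appears to settle near a constant as $n$ grows. (This is only an illustration; `transition_matrix` is a throwaway helper written for this post, not anything standard.)

```python
# Sketch: build the n x n matrix P above and check numerically how the
# spectral gap 1 - lambda_2 scales with n.
import numpy as np

def transition_matrix(n):
    P = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            P[i, i - 1] = 0.5
        else:
            P[i, i] += 0.5      # hold with probability 1/2 at the left endpoint
        if i < n - 1:
            P[i, i + 1] = 0.5
        else:
            P[i, i] += 0.5      # hold with probability 1/2 at the right endpoint
    return P

for n in [50, 100, 200, 400]:
    eigs = np.linalg.eigvalsh(transition_matrix(n))  # P is symmetric; eigenvalues ascending
    gap = eigs[-1] - eigs[-2]                        # 1 minus the second-largest eigenvalue
    print(n, gap * n**2)                             # appears to settle near a constant
```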
This is the transition matrix of a Markov chain on $n$ nodes which, except at the endpoints, moves left or right with equal probability; its stationary distribution is uniform. Now the gap $1 - \lambda_2$ is, roughly, the reciprocal of the time it takes to get close to the stationary distribution. By the central limit theorem, a particle started at the midpoint spreads over only about $\sqrt{t}$ nodes after $t$ steps, so it takes on the order of $n^2$ steps before it puts constant probability mass on order-$n$ nodes. Consequently, the gap cannot be larger than order $1/n^2$.
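To illustrate (not prove) the $\sqrt{t}$ spreading, one can propagate the exact distribution started from the midpoint and count how many nodes are needed to hold, say, $90\%$ of the mass. The sketch below (with made-up helpers `step` and `sites_for_mass`) suggests roughly $\sqrt{n}$ nodes after $n$ steps and order $n$ nodes after $n^2$ steps.

```python
# Sketch: propagate the exact distribution of the walk started at the midpoint
# and count how many nodes carry 90% of the probability mass after t steps.
import numpy as np

def step(mu):
    """One step of the chain above: mu_{t+1} = mu_t P."""
    new = np.zeros_like(mu)
    new[1:]  += 0.5 * mu[:-1]   # move right with probability 1/2
    new[:-1] += 0.5 * mu[1:]    # move left with probability 1/2
    new[0]   += 0.5 * mu[0]     # hold at the left endpoint
    new[-1]  += 0.5 * mu[-1]    # hold at the right endpoint
    return new

def sites_for_mass(n, t, mass=0.9):
    mu = np.zeros(n)
    mu[n // 2] = 1.0            # start at the midpoint
    for _ in range(t):
        mu = step(mu)
    heaviest = np.sort(mu)[::-1]
    return int(np.searchsorted(np.cumsum(heaviest), mass)) + 1

for n in [50, 100, 200]:
    # roughly sqrt(n) nodes after n steps, order n nodes after n^2 steps
    print(n, sites_for_mass(n, n), sites_for_mass(n, n * n))
```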
Intuitively, this makes sense. I would like to ask:
Question: Is there a way to make the above argument rigorous?
One difficulty is that the smallest eigenvalue also affects the mixing time, but perhaps this can be sidestepped by taking a convex combination with the identity (i.e., passing to the lazy chain). A more serious problem seems to be the boundary effects, though perhaps one can argue that they "go away" as $n \rightarrow \infty$. I'm not sure how to argue any of this, and I'd be very interested in seeing a formalization of this argument.
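Regarding the first difficulty, the convex combination can be made explicit: writing $\tilde P = \frac{1}{2}(I + P)$ for the lazy chain, any eigenpair $P v = \lambda v$ gives
$$ \tilde P v = \tfrac{1}{2}(I + P)\, v = \tfrac{1+\lambda}{2}\, v , $$
so all eigenvalues of $\tilde P$ lie in $[0,1]$ (since $\lambda \ge -1$), the smallest eigenvalue of $P$ can no longer interfere, and the gap at the top is merely halved: $1 - \tilde\lambda_2 = \tfrac{1}{2}(1 - \lambda_2)$. So, unless I'm mistaken, this first difficulty is harmless, and the real obstacle is the boundary effects.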