Let us suppose we have a prime $p > 3$ and two primitive roots $g_1, g_2$ of $p$ with index functions $\nu_1$ and $\nu_2$ (so that $g_i^{\nu_i(n)} \equiv n \pmod p$). Also suppose that $ef = p-1$ is a (nontrivial) factorization.
In short, my question is: Why is it 'obvious' that the set $\{n \mid \nu_1(n) \equiv 0 \pmod e \}$ is the same as the set $\{ n \mid \nu_2(n) \equiv 0 \pmod e\}$?
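To make the claim concrete, here is a quick numerical sanity check (a sketch of my own; the values $p = 13$, $e = 3$ and the brute-force search for primitive roots are just for illustration). The common set turns out to be exactly the set of nonzero $e$th power residues mod $p$:

```python
# Sanity check (illustration only): for a small prime p and a divisor e of p-1,
# the set {n : nu(n) == 0 (mod e)} is the same for every primitive root g,
# namely the set of nonzero e-th power residues mod p.
p, e = 13, 3          # assumes e divides p - 1

def index_table(g, p):
    """Map n -> nu(n) for the primitive root g, i.e. g**nu(n) == n (mod p)."""
    return {pow(g, k, p): k for k in range(p - 1)}

# Primitive roots of p: elements whose powers hit all p - 1 nonzero residues.
prim_roots = [g for g in range(2, p)
              if len({pow(g, k, p) for k in range(p - 1)}) == p - 1]

sets = [frozenset(n for n, k in index_table(g, p).items() if k % e == 0)
        for g in prim_roots]
print(len(set(sets)) == 1)                          # True: same set for every g
print(sorted(sets[0]))                              # [1, 5, 8, 12]
print(sorted({pow(x, e, p) for x in range(1, p)}))  # the e-th power residues: same list
```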
The rest is just for context and motivation.
This comes up in a standard proof of the prime-modulus-only case of Dirichlet's Theorem on the density of primes in arithmetic progressions, but my justification of why this fact is true has always bugged me. Like many number-theoretic arguments, it's sort of a trick. In particular, we need that $\sum_{\nu(n) \equiv 0 \pmod e} \zeta^n$ is independent of the primitive root we use, where $\zeta = e^{2\pi i / p}$ is a fixed primitive $p$th root of unity and $\nu(n)$ denotes the index of $n$ relative to whatever primitive root we're using. One way to show that this is true is to realize that the sum $\displaystyle \frac{1}{e}\sum_{x = 1}^{p-1} \zeta^{x^e}$ gives us the sum we want (each $e$th power residue occurs as $x^e$ for exactly $e$ values of $x$) and makes no reference to any primitive root at all. But there's no real intuition behind this statement to me.
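Here is that identity checked numerically (again just an illustrative sketch with the same $p = 13$, $e = 3$ as above and the primitive root $g = 2$; any primitive root gives the same right-hand side):

```python
import cmath

# Numerical check of (1/e) * sum_{x=1}^{p-1} zeta^(x^e)
#                    = sum_{n : nu(n) == 0 (mod e)} zeta^n.
p, e = 13, 3
zeta = cmath.exp(2j * cmath.pi / p)    # zeta = e^{2 pi i / p}

g = 2                                  # a primitive root of 13
nu = {pow(g, k, p): k for k in range(p - 1)}   # nu(n) for this g

lhs = sum(zeta ** pow(x, e, p) for x in range(1, p)) / e
rhs = sum(zeta ** n for n in range(1, p) if nu[n] % e == 0)
print(abs(lhs - rhs) < 1e-9)           # True
```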
Note that the shifted classes don't behave the same way: the set $\{ n \mid \nu(n) \equiv j \pmod e \}$ for $j \neq 0$ is not independent of the primitive root in general. But the trick I described above doesn't shed light on why. (Why doesn't the trick work here? If you boil it down, it relies on the fact that $\nu(x^e) \equiv e \nu(x) \equiv 0 \pmod e$ for every $x$, since $e \mid p-1$, and this doesn't work so well with $\nu(x^{e+1})$.)
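For contrast, the dependence on the primitive root for $j \neq 0$ is easy to see numerically (again only an illustrative sketch, with $p = 13$, $e = 3$, $j = 1$, and the two primitive roots $2$ and $6$ of $13$):

```python
# The shifted class {n : nu(n) == j (mod e)} with j != 0 really does depend
# on the choice of primitive root (same illustrative p = 13, e = 3 as above).
p, e, j = 13, 3, 1

def residue_class(g):
    nu = {pow(g, k, p): k for k in range(p - 1)}
    return sorted(n for n in range(1, p) if nu[n] % e == j)

print(residue_class(2))   # [2, 3, 10, 11]
print(residue_class(6))   # [4, 6, 7, 9]  -- a different set for g = 6
```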