Without getting into quibbles about the terminology 'credible interval', I believe it
is entirely reasonable to compare the prior and posterior distributions in
the way you suggest.
Suppose you are seeking a Bayesian interval estimate for parameter $\theta,$ considered (in the Bayesian manner) as a random variable. Maybe
$\theta$ is the success probability in a binomial process and you
are considering the distribution $\mathsf{Beta}(330,270)$ as the prior.
What might lead to this choice? Maybe, according to prior experience or belief,
you think that $\theta$ is "above 0.5, but not likely above 0.6." Then
this is a reasonable prior on several grounds: its mean, median, and mode
are all about $0.55.$ Also, this distribution puts about 95% of its
probability in the interval $(0.51, 0.59)$.
```r
330 / (270 + 330)
## 0.55       # mean
qbeta(.5, 330, 270)
## 0.5500556  # median
qbeta(c(.025, .975), 330, 270)
## 0.5100824 0.5896018
```
Later, after observing 620 successes in 1000 trials of the binomial process,
we combine the likelihood for these data with the prior
distribution to obtain the posterior distribution $\mathsf{Beta}(330+620,\, 270+380) = \mathsf{Beta}(950, 650).$
The posterior has mean about $0.594.$ Also, it puts about 95% of its probability
in $(0.57, 0.62),$ which is the 95% Bayesian posterior credible interval.
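In R, the conjugate Beta-binomial update and the posterior figures quoted above can be checked directly (a brief sketch; the variable names are mine):

```r
a.prior <- 330; b.prior <- 270        # prior Beta(330, 270)
x <- 620; n <- 1000                   # 620 successes in 1000 trials
a.post <- a.prior + x                 # 950 = prior shape1 + successes
b.post <- b.prior + (n - x)           # 650 = prior shape2 + failures
a.post / (a.post + b.post)            # posterior mean
## 0.59375
qbeta(c(.025, .975), a.post, b.post)  # 95% posterior interval, about (0.57, 0.62)
```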
The data have changed our view: the success probability seems higher now than
we supposed when we chose the prior. It is only natural, perhaps inevitable, to compare the 95% probability
interval $(0.51, 0.59)$ from the prior distribution with the 95% probability interval $(0.57, 0.62)$ from the posterior distribution. [Whether it is proper to call
both intervals 'credible' intervals I leave to the definitions in various
textbooks on Bayesian inference.]
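To make that comparison at a glance, the two 95% intervals can be printed side by side (again using the updated parameters $330+620 = 950$ and $270+380 = 650$):

```r
rbind(prior     = qbeta(c(.025, .975), 330, 270),   # roughly (0.51, 0.59)
      posterior = qbeta(c(.025, .975), 950, 650))   # roughly (0.57, 0.62)
```

The two rows make the upward shift in our belief about $\theta$ plain: both endpoints of the posterior interval lie above the corresponding endpoints of the prior interval.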