I assume that $X_1$ and $X_2$ are independently chi-squared with degrees of
freedom $k_1$ and $k_2,$ respectively. Then $F = (X_1/k_1)/(X_2/k_2)$ has Snedecor's F-distribution
with $k_1$ and $k_2$ degrees of freedom. Wikipedia on 'F distribution' says
that $E(F) = k_2/(k_2 - 2),$ for $k_2 > 2.$ So, "not exactly."
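That formula follows directly from independence and the known mean of the reciprocal of a chi-squared variable, $E(1/X_2) = 1/(k_2 - 2)$ for $k_2 > 2$:

$$E(F) = E\!\left(\frac{X_1}{k_1}\right)E\!\left(\frac{k_2}{X_2}\right) = 1 \cdot \frac{k_2}{k_2 - 2} = \frac{k_2}{k_2 - 2},$$

using $E(X_1) = k_1,$ so the first factor is exactly $1.$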
The F-statistic (or "variance ratio" statistic) $F = S_1^2/S_2^2,$ where
$S_1^2$ and $S_2^2$ are the sample variances of independent samples from
two normal populations, is often used to test whether the population
variances are equal ($H_0: \sigma_1^2 = \sigma_2^2$) or not
($H_a: \sigma_1^2 \ne \sigma_2^2$). In that context, one often
says, roughly, that $H_0$ is true if $S_1^2/S_2^2 \approx 1.$
But that is different from saying that $E(F) = 1$ under $H_0.$
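In R, that test is implemented as `var.test`; here is a minimal sketch with two simulated normal samples (the seed, sample sizes, and parameters are my own choices for illustration):

```r
# Variance-ratio (F) test of H0: sigma1^2 = sigma2^2 using R's built-in var.test()
set.seed(2024)                   # arbitrary seed for reproducibility
x = rnorm(50, mean = 0, sd = 1)  # sample 1
y = rnorm(40, mean = 0, sd = 1)  # sample 2, same variance under H0
var.test(x, y)                   # F = var(x)/var(y), with 49 and 39 df
```

Even when $H_0$ is true, the printed statistic `F` will typically differ somewhat from $1,$ which is the point above: $S_1^2/S_2^2 \approx 1$ under $H_0$ is a rough statement about typical values, not a statement that $E(F) = 1.$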
If you have something else in mind, please edit your Question.
Reality check: A brief simulation in R statistical software with a million
realizations of $Q_1 \sim Chisq(10)$ and independently $Q_2 \sim Chisq(8):$
m = 10^6; k1 = 10; k2 = 8
q1 = rchisq(m, k1); q2 = rchisq(m, k2)
f = (q1/k1)/(q2/k2)
mean(f)
## 1.334032 # approx E(F) = 1.3333
k2/(k2-2)
## 1.333333
mean(1/(q2/k2))
## 1.333556
This illustrates the Wikipedia formula for $E(F).$ The simulation also
suggests that $E[1/(Q_2/k_2)]$ alone has the same mean, which makes sense
because $E(Q_1/k_1) = 1.$ But, under my interpretation of your question,
I do not see a reasonable scenario in which these means would be exactly
unity. (Yes, close to $1$ for very large $k_2,$ but not exactly $1.$)