Let me take some time to answer this question, because it comes up often, so that we have a good reference.
Let us consider the time series:
$X(t) = 2 + 3t + Z(t) $
where $Z(t)$ is Gaussian white noise, i.i.d. $\mathcal{N}(0,1)$.
Is $X(t)$ stationary? No it is not.
Why or why not? (Weak) stationarity requires a constant mean and an autocovariance that depends only on the time lag, not on $t$ itself. Here $E[X(t)] = 2 + 3t$, which clearly depends on $t$: the deterministic trend pushes the process upward forever, so there is no fixed level around which the series fluctuates. The fluctuation term does not matter here; it only builds a tube of constant width around the trend line. A time-dependent mean alone already rules out stationarity.
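A quick simulation makes the drifting mean visible. This is a minimal sketch in NumPy (the seed, series length, and window sizes are arbitrary choices of mine): the sample mean over a late stretch of the series sits far above the mean over an early stretch, exactly because $E[X(t)] = 2 + 3t$ grows with $t$.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1000)
Z = rng.standard_normal(1000)
X = 2 + 3 * t + Z  # X(t) = 2 + 3t + Z(t)

# E[X(t)] = 2 + 3t depends on t, so the sample mean of an
# early window and a late window differ enormously.
early = X[:100].mean()   # around 2 + 3 * 49.5
late = X[-100:].mean()   # around 2 + 3 * 949.5
print(early, late)
```

The gap between the two window means is on the order of the trend slope times the time separation, which no amount of extra data shrinks.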
Is $Y(t) = X(t) - X(t-1)$ stationary? Yes, it is.
Why or why not? Differencing removes the linear trend: $Y(t) = X(t) - X(t-1) = \big(2 + 3t + Z(t)\big) - \big(2 + 3(t-1) + Z(t-1)\big) = 3 + Z(t) - Z(t-1)$. The mean is $E[Y(t)] = 3$ for every $t$, and the autocovariance is $\gamma_Y(h) = 2$ for $h = 0$, $\gamma_Y(h) = -1$ for $|h| = 1$, and $0$ for $|h| \ge 2$, which depends only on the lag $h$ and not on $t$. Constant mean plus lag-only autocovariance is exactly the definition of weak stationarity; in fact $Y(t)$ is an MA(1) process. The one-step memory you introduce shows up as the nonzero covariance at lag 1 (similar to a first-order Markov dependence), but it does not break stationarity.
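You can check this MA(1) structure numerically. A minimal sketch (seed and sample size are arbitrary) that estimates the mean, variance, and first two autocovariances of $Y$ from one long simulated path:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
Z = rng.standard_normal(n)
X = 2 + 3 * np.arange(n) + Z
Y = np.diff(X)  # Y(t) = X(t) - X(t-1) = 3 + Z(t) - Z(t-1)

# Theory: E[Y] = 3, gamma(0) = 2, gamma(1) = -1, gamma(h) = 0 for h >= 2.
mean_Y = Y.mean()
var_Y = Y.var()
Yc = Y - mean_Y
cov1 = (Yc[:-1] * Yc[1:]).mean()   # lag-1 sample autocovariance
cov2 = (Yc[:-2] * Yc[2:]).mean()   # lag-2 sample autocovariance
print(mean_Y, var_Y, cov1, cov2)
```

Because $Y$ is stationary (and ergodic), these time averages over a single path converge to the theoretical values; for the nonstationary $X$ the same trick would fail.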
Let $V(t)= \frac{1}{2q+1}\sum_{j=-q}^q X(t-j)$
What is the mean and auto-covariance function of $V(t)$?
Now you extend the one-step memory to a symmetric window: $V(t)$ averages $X$ over the $2q+1$ times from $t-q$ ($q$ steps into the past) to $t+q$ ($q$ steps into the future; note the sign convention, $j = -q$ in $X(t-j)$ gives $X(t+q)$). To decide stationarity you need a metric for how the state at one time co-varies with the state at a shifted time: that is the AUTO-covariance, the covariance of the process with itself. Intuitively, it multiplies the (centered) state at time $t$ by the (centered) state at time $s$ and averages. The general recipe is simple and fully applies here: with the expectation value
$E[V_t] = \mu_t$
you obtain the autocovariance $C_{VV}(t,s)$ (note the double index, $V$ with $V$):
$C_{VV}(t,s) = E[(V_t - \mu_t)(V_s - \mu_s)] = E[V_t V_s] - \mu_t \mu_s.\,$
Using this cookbook you can build both quantities for $V(t)$. The linear trend passes through the symmetric average unchanged, because $\sum_{j=-q}^{q} j = 0$: $E[V(t)] = \frac{1}{2q+1}\sum_{j=-q}^{q}\big(2 + 3(t-j)\big) = 2 + 3t$. For the autocovariance only the noise terms contribute, and $\mathrm{Cov}\big(Z(t-j), Z(s-k)\big) = 1$ exactly when the time indices coincide; counting the overlapping indices gives, with $h = t - s$, $C_{VV}(t, t-h) = \frac{2q+1-|h|}{(2q+1)^2}$ for $|h| \le 2q$ and $0$ otherwise. So the autocovariance is time invariant, but the mean is not: $V(t)$ itself is not stationary (detrend it first and it is).
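The closed-form result can be cross-checked by Monte Carlo. This sketch (with arbitrary choices $q = 2$, seed, and replication count) estimates the mean and one autocovariance of $V$ across many independent realizations of $X$:

```python
import numpy as np

rng = np.random.default_rng(2)
q = 2
w = 2 * q + 1                  # window length 2q + 1 = 5
reps, T = 50_000, 30
t = np.arange(T)
X = 2 + 3 * t + rng.standard_normal((reps, T))   # many independent paths

# V(t) = symmetric moving average of X over [t-q, t+q];
# column i of V corresponds to time t = i + q.
V = np.zeros((reps, T - w + 1))
for j in range(w):
    V += X[:, j:j + T - w + 1]
V /= w

i, h = 5, 1
t0 = i + q
mean_theory = 2 + 3 * t0                 # E[V(t)] = 2 + 3t
cov_theory = (w - abs(h)) / w**2         # (2q+1-|h|) / (2q+1)^2
mean_sample = V[:, i].mean()
cov_sample = np.cov(V[:, i], V[:, i + h])[0, 1]
print(mean_sample, mean_theory, cov_sample, cov_theory)
```

Averaging across replications (rather than along one path) is the honest way to estimate these moments here, precisely because the mean of $V$ is time dependent.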
Next, your approach to the fluctuation term is correct.
How do I compute the expectation of $X(t)$? The procedure is basic stochastics: take expectations term by term. The trend is deterministic and the noise has mean zero, so $E[X(t)] = E[2 + 3t + Z(t)] = 2 + 3t$, denoted as above by $E[X_t] = \mu_t$. Note that this mean exists for every fixed $t$ even though it grows without bound; what you cannot do is treat a single time average over the whole trajectory as an estimate of a common mean, because a process like the one in 1 has no common mean: $\mu_t$ changes at every step.
How do I show the statistical properties are constant or not constant? Take your processes 1-3, calculate the mean and the autocovariance according to the recipe above, and check whether the results are time invariant: the mean must be a constant (no $t$ left in it) and the autocovariance $C(t,s)$ must depend only on the lag $t - s$. If both hold, the process is (weakly) stationary; if either fails, it is not.
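This check can also be done empirically: estimate the same statistic at two different times across many replications and compare. A minimal sketch (arbitrary seed, times, and replication count) contrasting the nonstationary $X$ with the stationary difference $Y$:

```python
import numpy as np

rng = np.random.default_rng(3)
reps = 100_000
t = np.arange(12)
X = 2 + 3 * t + rng.standard_normal((reps, 12))
Y = np.diff(X, axis=1)  # Y(t) = X(t) - X(t-1)

# Mean of X depends on t (time VARIANT -> not stationary) ...
mu_3, mu_9 = X[:, 3].mean(), X[:, 9].mean()   # theory: 11 and 29
# ... while mean and lag-1 autocovariance of Y do not (time invariant).
m_a, m_b = Y[:, 2].mean(), Y[:, 8].mean()     # theory: 3 and 3
c_a = np.cov(Y[:, 2], Y[:, 3])[0, 1]          # theory: -1
c_b = np.cov(Y[:, 8], Y[:, 9])[0, 1]          # theory: -1
print(mu_3, mu_9, m_a, m_b, c_a, c_b)
```

The same estimates computed at shifted times agree for $Y$ but not for $X$, which is the empirical face of "the result is independent of $t$".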
Hope this helps as a reference and clarification.