
I know that if I want to calculate the autocorrelation of a random process, I have this rule:

$R_X(t_1, t_2) = E\{X(t_1)X^*(t_2)\}.$

In my course I had this example:

$X(t) = A\cos(2\pi f t + \Theta)$

$A$: constant. $\Theta$: uniform on $[0, 2\pi]$.

Find the autocorrelation of X.

In this case we put:

$R_X(t_1, t_2) = E[A\cos(2\pi f t_1 + \Theta)\,A\cos(2\pi f t_2 + \Theta)] = A\,E[\cos(2\pi f(t_1 + t_2) + 2\Theta) + \cos(2\pi f(t_1 - t_2))]$

and he didn't say anything about the probability density function, so how did he solve the example like this:

$= A\cos(2\pi f(t_1 - t_2)) + A\,E[\cos(2\pi f(t_1 + t_2) + 2\Theta)]$

$E[\cos(2\pi f(t_1 + t_2) + 2\Theta)] = \frac{1}{2\pi}\int_{0}^{2\pi}\cos(2\pi f(t_1 + t_2) + 2\theta)\,d\theta = 0.$

$R_X(t_1, t_2) = A\cos(2\pi f(t_1 - t_2))$

So can you explain these questions to me:

1. Why is $E[A\cos(2\pi f(t_1 - t_2))] = A\cos(2\pi f(t_1 - t_2))$?
2. What must I consider as the PDF to solve $E[\cos(2\pi f(t_1 + t_2) + 2\Theta)]$?
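Here is a quick numerical check I can run (the values of $f$, $t_1$, $t_2$ are arbitrary illustrations, not from the course): with $\Theta$ uniform on $[0, 2\pi]$, the Monte Carlo average of $\cos(2\pi f(t_1 + t_2) + 2\theta)$ should come out near zero.

```python
import math
import random

# Numerically check E[cos(2*pi*f*(t1 + t2) + 2*Theta)] = 0
# when Theta is uniform on [0, 2*pi].
# f, t1, t2 are arbitrary example values (not from the course).
f, t1, t2 = 1.5, 0.3, 0.7
N = 1_000_000

random.seed(0)
total = 0.0
for _ in range(N):
    theta = random.uniform(0.0, 2.0 * math.pi)
    total += math.cos(2.0 * math.pi * f * (t1 + t2) + 2.0 * theta)

print(total / N)  # close to 0 (Monte Carlo error ~ 1/sqrt(N))
```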

  • Thanks @Michael-Chernick, now I understand it. It is the [uniform distribution](http://en.wikipedia.org/wiki/Uniform_distribution), whose probability density function is $\frac{1}{b-a}$ over the interval $[a,b]$, which here is $[0, 2\pi]$. 2012-07-16

2 Answers


I didn't check the calculations to see if the computations are right. But the distribution of X(t) is determined by the definition you have for X(t). The only random component is theta, which is uniform on [0, 2π]. Keep in mind that the random component theta is the same for each t, and the variation in X(t) is only due to the value of t in the cosine function.
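As a small illustration of this point (my own sketch, with illustrative values of $A$ and $f$): once theta is drawn, the whole sample path $X(t)$ is a deterministic cosine in $t$; the randomness enters only through that single draw.

```python
import math
import random

# One draw of Theta fixes the whole sample path X(t) = A*cos(2*pi*f*t + theta).
# A and f are illustrative constants, not from the thread.
A, f = 2.0, 1.0
random.seed(1)
theta = random.uniform(0.0, 2.0 * math.pi)  # drawn once, shared by every t

def X(t):
    """The sample path: deterministic in t once theta is fixed."""
    return A * math.cos(2.0 * math.pi * f * t + theta)

# Evaluating at two times uses the SAME theta -- no fresh randomness per t.
print(X(0.0), X(0.25))
```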

  • Yes, $\frac{1}{2\pi}$ is the density for theta on $[0, 2\pi]$. 2012-07-16

There are several typographical errors in your question and the work that you have shown. Some of these make your results nonsensical: e.g. your $R_X(t_1, t_1) = A$ can be negative since $A$ is not restricted to being a positive constant, and even if $A$ were restricted to be a positive constant, the process $Y(t) = - X(t)$, which should have autocorrelation function $R_Y(t_1, t_2) = R_X(t_1, t_2)$, would instead have the unusual property that $R_Y(t_1, t_2) = -R_X(t_1, t_2)$.

$\begin{align*} R_X(t_1, t_2) &= E\left[A\cos(2\pi ft_1 + \Theta)A\cos(2\pi ft_2 + \Theta)\right]\\ &= A^2 E\left[\cos(2\pi ft_1 + \Theta)\cos(2\pi ft_2 + \Theta)\right]\\ &= \frac{1}{2}A^2E\left[\cos(2\pi f(t_1 + t_2) + 2\Theta) +\cos(2\pi f(t_1 - t_2))\right]\\ &= \frac{1}{2}A^2\cos(2\pi f(t_1 - t_2)) + \frac{1}{2}A^2E\left[\cos(2\pi f(t_1 + t_2))\cos(2\Theta) - \sin(2\pi f(t_1 + t_2))\sin(2\Theta)\right]\\ &= \frac{1}{2}A^2\cos(2\pi f(t_1 - t_2)) + \frac{1}{2}A^2\cos(2\pi f(t_1 + t_2))E[\cos(2\Theta)] - \frac{1}{2}A^2\sin(2\pi f(t_1 + t_2))E[\sin(2\Theta)] \end{align*}$

and so $R_X(t_1, t_2) = \frac{1}{2}A^2\cos(2\pi f(t_1 - t_2))$ for any random variable $\Theta$ with the property that $E[\cos(2\Theta)] = E[\sin(2\Theta)] = 0$.

One such random variable is uniformly distributed on $[0, 2\pi)$, which is the most common assumption in such cases (and the one your instructor used), but many other distributions also give $E[\cos(2\Theta)] = E[\sin(2\Theta)] = 0$. For example, if $\Theta$ is a discrete random variable taking on the four values $0, \frac{\pi}{2}, \pi, \frac{3\pi}{2}$ with equal probability $\frac{1}{4}$, then we have $E[\cos(2\Theta)] = E[\sin(2\Theta)] = 0$. Remember this last case if and when you have occasion to study a digital modulation method called quaternary phase-shift keying, or QPSK.
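A numerical sanity check of this conclusion (my own sketch, with illustrative values of $A$, $f$, $t_1$, $t_2$): a Monte Carlo estimate of $E[X(t_1)X(t_2)]$ should match $\frac{1}{2}A^2\cos(2\pi f(t_1 - t_2))$ both for the uniform phase and for the four-point QPSK-style phase.

```python
import math
import random

# Illustrative constants -- not from the thread.
A, f = 2.0, 1.0
t1, t2 = 0.4, 0.1
N = 1_000_000
target = 0.5 * A**2 * math.cos(2.0 * math.pi * f * (t1 - t2))

def estimate_R(draw_theta):
    """Monte Carlo estimate of E[X(t1) X(t2)] for a given phase distribution."""
    total = 0.0
    for _ in range(N):
        th = draw_theta()
        total += (A * math.cos(2.0 * math.pi * f * t1 + th)
                  * A * math.cos(2.0 * math.pi * f * t2 + th))
    return total / N

random.seed(0)
# Theta uniform on [0, 2*pi):
uniform_est = estimate_R(lambda: random.uniform(0.0, 2.0 * math.pi))
# Theta discrete on {0, pi/2, pi, 3*pi/2} with equal probability:
qpsk_est = estimate_R(
    lambda: random.choice([0.0, math.pi / 2, math.pi, 3 * math.pi / 2]))

print(target, uniform_est, qpsk_est)  # the estimates land near the target
```

Both estimates converge to the same closed-form value, illustrating that only $E[\cos(2\Theta)] = E[\sin(2\Theta)] = 0$ matters, not the full shape of the phase distribution.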