2

This is a practical problem arising in an application.

Consider $$ A \sin(\phi + \phi_{0}) = \sum_{k=1}^{N} A_k \sin(\phi + \phi_k), \quad \forall \phi \in \mathbb{R}, $$ where $\phi$ is a deterministic variable on the real line, $$ A_k \sim N (\mu_{A}, \sigma_{A}) $$ and

the phases $\phi_k$, $k=1..N$, share a common distribution (a simplifying assumption).

Assume further that $ \mu_{A}>0$ and $ \sigma_{A} < \mu_{A}$. Furthermore, the random variables $A_k$, $\phi_k$, $k = 1..N$, are jointly independent.

Roughly speaking, the summation represents a superposition of $N$ first harmonics whose amplitudes and phases behave randomly. My interest is in how the components influence the overall harmonic, in terms of its distribution.

Since the identity holds for every $\phi$, it can be reduced to the phasor form: $$ A e^{i \phi_0} = \sum_{k=1}^{N} A_k e^{i \phi_k} $$

So I would like to know how to derive the distributions of $ A $ and $\phi_0$ from the above equation. If the exact distributions cannot be found, an approximate solution would also be appreciated. For the approximation, one may use the fact that $3\sigma_{A} < \mu_A$, so the negative amplitude values can be truncated off and only the positive values considered.
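Pending a closed form, the distributions of $A$ and $\phi_0$ can at least be estimated numerically. A minimal Monte Carlo sketch in Python/NumPy, assuming (hypothetically, since the question leaves the phase law open) that the $\phi_k$ are i.i.d. $N(0, \sigma_\phi^2)$; all parameter values below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters (not specified in the question): amplitude mean
# and spread, phase spread, number of components, Monte Carlo trials.
mu_A, sigma_A, sigma_phi, N, M = 1.0, 0.2, 0.3, 5, 200_000

A_k = rng.normal(mu_A, sigma_A, size=(M, N))     # Gaussian amplitudes
phi_k = rng.normal(0.0, sigma_phi, size=(M, N))  # assumed Gaussian phases

# Phasor identity: A * exp(i*phi_0) = sum_k A_k * exp(i*phi_k)
S = (A_k * np.exp(1j * phi_k)).sum(axis=1)
A = np.abs(S)        # resultant amplitude
phi_0 = np.angle(S)  # resultant phase

# Empirical moments; histograms of A and phi_0 estimate the densities.
print(A.mean(), A.std(), phi_0.std())
```

Histogramming `A` and `phi_0` then gives an empirical picture of the density shapes the question asks about.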

  • 0
    How large (small) is $N$?2011-07-13
  • 0
@fedja it is a tricky part. I would say not larger than 10; it approximately lies in the range 2 to 6.2011-07-13
  • 0
It wouldn't hurt to mention independence if that was intended. And if it wasn't, then you've got to give us more information.2011-07-13
  • 0
@MichaelHardy you are absolutely right. They are indeed pairwise independent.2011-07-14
  • 0
    Not *jointly* independent? Then the distributions of $A$ and $\phi_0$ may not be uniquely determined.2011-07-14
  • 0
@DidierPiau thanks for your comment. To be more precise, they are jointly independent. Sorry for the previous misleading description.2011-07-14
  • 0
    Another (minor) remark: at first sight the rôle of $\phi$ is unclear but it seems that $A$ and $\phi_0$ are in fact defined as the modulus and the argument of the sum over $k$ of the random variables $A_k\mathrm{e}^{\mathrm{i}\phi_k}$. In a formula, $A\mathrm{e}^{\mathrm{i}\phi_0}=A_1\mathrm{e}^{\mathrm{i}\phi_1}+\cdots+A_N\text{e}^{\mathrm{i}\phi_N}$. If this is correct, you might wish to modify your post.2011-07-14
  • 0
    @DidierPiau hmm..$\phi$ is a variable whose domain is the real line. Roughly speaking, the summation represents a superposition of a series of first harmonics (1 to N).2011-07-14
  • 0
    Either you want the equation to hold for a single given $\phi$, and then $A$ and $\phi_0$ are not uniquely defined since $B_1\sin(\psi_1)=B_2\sin(\psi_2)$ does not imply that $B_1=B_2$ and $\psi_1=\psi_2$. Or (as I first thought), you want the equation to hold for every $\phi$, and then my last comment applies. Make up your mind... :-)2011-07-14
  • 0
@DidierPiau yes, it should hold for all $\phi$. And I just realized how helpful your last comment is after I replied to you. That's embarrassing. I will try to work on it, though it still looks complicated to solve. :-(2011-07-14
  • 0
    Don't be (embarrassed). Do (modify your post accordingly).2011-07-14
  • 0
    It seems that $E(A\cos\phi_0)=E(A\sin\phi_0)=0$ and $E(A^2)=N\sigma_A^2+N(1-\mathrm{e}^{-\sigma_{\phi}^2})\mu_A^2$.2011-07-14
  • 0
    Aaaargh... now you assume that the $\phi_k$ are i.i.d. This completely modifies the behaviour of the sum! (Really, you should not change your question like that...)2011-07-14
  • 0
@DidierPiau err.. I'm sorry for the modification. It is simply because that change further simplifies the problem. I believe the previous assumption is closer to reality, so it would be preferable to work with it. My interest lies more in the shape of the density function than in only the mean and variance.2011-07-14

2 Answers

1

I don't think the statement "sum of harmonics" is quite accurate. In fact, writing $\omega t$ for $\phi$, the given relation is $$A\sin(\omega t + \phi_0) = \sum_{k = 1}^N A_k\sin(\omega t + \phi_k)$$ which says that the sum of $N$ sinusoidal signals of the same radian frequency $\omega$ radians/second but different amplitudes and phases is a sinusoid at the same radian frequency: there are no harmonics involved. So, $A$ and $\phi_0$ are random variables related to the given $A_k$'s and $\phi_k$'s through $$ A\cos(\phi_0) = \sum_{k=1}^N A_k\cos(\phi_k); ~~ A\sin(\phi_0) = \sum_{k=1}^N A_k\sin(\phi_k). $$

In some applications that I have encountered, the $A_k$'s have Rayleigh distributions and the $\phi_k$'s are uniformly distributed on $[0, 2\pi)$ instead of just having identical (not necessarily uniform) distributions, so that $A_k\cos(\phi_k)$ and $A_k\sin(\phi_k)$ are iid Gaussian random variables, their sums are independent Gaussian random variables, and so $A$ and $\phi_0$ are respectively Rayleigh and $U[0,2\pi)$ random variables.
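The Rayleigh/uniform special case described above is easy to confirm numerically; a quick sketch (the parameter values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 6, 200_000

# Rayleigh amplitudes with scale 1 and uniform phases: each component
# A_k*cos(phi_k) and A_k*sin(phi_k) is then standard normal, so the
# in-phase/quadrature sums are N(0, N) and A = |sum| is Rayleigh(sqrt(N)).
A_k = rng.rayleigh(scale=1.0, size=(M, N))
phi_k = rng.uniform(0.0, 2 * np.pi, size=(M, N))

S = (A_k * np.exp(1j * phi_k)).sum(axis=1)
A, phi_0 = np.abs(S), np.angle(S)

# Rayleigh(sigma) has mean sigma*sqrt(pi/2) and second moment 2*sigma^2.
print(A.mean(), np.sqrt(N * np.pi / 2))  # these should agree
print((A**2).mean(), 2 * N)              # and these
```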

But here the $A_k$'s are Gaussian random variables, so the simpler result described above does not apply. However, if $\mu_A \gg 3\sigma_{A}$, the Gaussian density of $A_k$ might be modeled as roughly resembling a Rayleigh density, and so something of the above might still be usable.
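Even without the Rayleigh structure, second moments of the resultant are available in closed form: for jointly independent components with $\phi_k$ i.i.d., expanding $E\bigl|\sum_k A_k e^{i\phi_k}\bigr|^2$ gives $E[A^2] = N(\mu_A^2+\sigma_A^2) + N(N-1)\mu_A^2\,|E[e^{i\phi_1}]|^2$, and under the (assumed, not stated in the question) Gaussian phase law $\phi_k \sim N(0,\sigma_\phi^2)$ one has $|E[e^{i\phi_1}]|^2 = e^{-\sigma_\phi^2}$. A quick numerical check with placeholder parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
mu_A, sigma_A, sigma_phi, N, M = 1.0, 0.2, 0.3, 5, 400_000

A_k = rng.normal(mu_A, sigma_A, size=(M, N))
phi_k = rng.normal(0.0, sigma_phi, size=(M, N))
S = (A_k * np.exp(1j * phi_k)).sum(axis=1)

# Diagonal terms contribute E[A_k^2] = mu^2 + sigma^2 each; off-diagonal
# terms contribute mu^2 * |E[exp(i*phi)]|^2 = mu^2 * exp(-sigma_phi^2).
closed_form = N * (mu_A**2 + sigma_A**2) \
    + N * (N - 1) * mu_A**2 * np.exp(-sigma_phi**2)
print(np.mean(np.abs(S)**2), closed_form)  # should agree
```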

0

As a practical matter, we face this with the various waves on the right having different but similar frequencies: all the frequencies are within $5$-$10\%$ of each other and the $A_k$ are the same. In that case, the components "walk past" each other randomly. We ignore the relative phases, treat each term as a random variable with zero mean and known standard deviation (the power of each signal), and take the variance of the sum to be the sum of the variances.

I think you can do the same here. The variance of one unit-amplitude sine is $\frac{1}{2}$. For the sum of two of the same frequency, $(\sin \phi_1+\sin \phi_2)^2=\sin ^2 \phi_1+\sin ^2 \phi_2 + 2\sin \phi_1 \sin \phi_2$, and the cross term averages to zero over $\phi_2-\phi_1$, leaving an average of $1$. So if the phases are random, I would just say the variances add nicely.
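The averaging argument above is easy to verify numerically; a sketch, where the uniform phase law is my assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 1_000_000

# Mean power of one unit sinusoid, averaged over a uniform random phase:
phi = rng.uniform(0.0, 2 * np.pi, size=M)
p1 = np.mean(np.sin(phi)**2)  # -> about 1/2

# Two unit sinusoids with independent uniform phases: the cross term
# 2*sin(phi_1)*sin(phi_2) averages to zero, so the mean powers add.
phi_1 = rng.uniform(0.0, 2 * np.pi, size=M)
phi_2 = rng.uniform(0.0, 2 * np.pi, size=M)
p2 = np.mean((np.sin(phi_1) + np.sin(phi_2))**2)  # -> about 1

print(p1, p2)
```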

  • 0
Thanks for your answer. Could you explain further why the $A_{k}$ are the same? Though they are identically distributed, they are independent, so the values $A_{i}$ only share the same distribution; that is all I know. Without the phases, the sum would simply be $N$ independent copies of $A_{i}$ added together. However, with the phases involved, it is not that easy.2011-07-14
  • 0
    I was just stating that in our application the $A_k$ are the same. Your mileage may vary. I suspect you can blindly proceed to add the variances, but my calculation does not prove that.2011-07-14