Suppose I have two series: A and B.
Both series are mean 0, and have similar (unknown) volatility.
I want to show the 'difference' between the series (i.e. how much bigger one is than the other).
But assume that either the volatility of the series changes a lot over time, or that I have clusters of A's and B's with different volatilities.
So, I want to take into account the potentially different levels of A and B.
Thus, I have decided to use the ratio A/B, as it fits my needs quite well:
- Shows how much A is bigger than B
- Handles different levels of A and B (15/10 and 1.5/1 are the same)
But, there is a problem when A and B change signs:
Here, the two series that move 'together' are A and B (left y-axis). The third series is the ratio A/B (right y-axis).
As you can see, there is a huge jump around December 4th. What happened:
- until December 4th, A was positive but B was negative
- say A = 0.2, B = -0.5. Ratio = -0.4
- then A becomes slightly negative, say -0.1
- the ratio jumps to 0.2
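The jump above can be reproduced directly with the numbers from the example (a minimal sketch; the variable names are mine, not from the question):

```python
# Reproduce the ratio jump around the sign change.
a = [0.2, -0.1]   # A crosses from positive to slightly negative
b = [-0.5, -0.5]  # B stays negative
ratios = [x / y for x, y in zip(a, b)]
print(ratios)  # [-0.4, 0.2]: a tiny move in A flips the ratio's sign
```

A move of only 0.3 in A produces a discontinuous swing of 0.6 in the ratio, and the sign of the ratio no longer tells you which series is bigger.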
I can understand why this happens and that this is not a 'mistake' or a 'flaw'.
One way to deal with this problem would be to shift both A and B by adding a constant large enough to make them positive (e.g. the absolute value of min(A, B)). But I do not know the range of A and B, and thus cannot set the shift value ex ante.
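A sketch of that shifting idea, with the constant hard-coded (which is exactly the weakness raised above: out of sample, the required size of the shift is unknown):

```python
def shifted_ratio(a, b, c):
    """Ratio of A to B after adding a constant c to both series.

    c must exceed -min(a + b) to guarantee both denominators stay
    positive; that bound depends on data not yet observed.
    """
    return [(x + c) / (y + c) for x, y in zip(a, b)]

a = [0.2, -0.1]
b = [-0.5, -0.5]
print(shifted_ratio(a, b, 1.0))  # [2.4, 1.8]: no sign flip, no jump
```

Note that the shift also changes what the ratio means: (15 + c)/(10 + c) is no longer equal to (1.5 + c)/(1 + c), so the scale-invariance property from the bullet list above is lost.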
What would be an alternative method to show this kind of a relationship:
- Is the Series A bigger than B?
- By how much? (It doesn't have to answer this exactly; it is mostly a relative value showing whether A is bigger than B by more than it was in the previous period.)
- Should handle changing levels of A and B (thus A - B will not work: 15 - 10 = 5 is much bigger than 1.5 - 1 = 0.5, while for this exercise they should be the same)
- Should handle negative A and B numbers
- Ideally, the final number is centered around 1 (when A = B, report 1), but this is not necessary.
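To make the wish list concrete, here is a small check harness (my own framing, not part of the question) that any candidate measure f(A, B) should pass, plus a demonstration that the plain ratio fails it:

```python
def satisfies_requirements(f, tol=1e-9):
    """Check a candidate comparison measure f(a, b) against the wish list."""
    ok = True
    # When A = B, report the neutral value 1 (the 'ideally' requirement)
    ok &= abs(f(3.0, 3.0) - 1.0) < tol
    # Changing levels: (15, 10) and (1.5, 1) should score the same
    ok &= abs(f(15.0, 10.0) - f(1.5, 1.0)) < tol
    # Ordering: A > B scores above neutral, A < B below it
    ok &= f(15.0, 10.0) > 1.0 > f(10.0, 15.0)
    # Negative values: A = 0.2 > B = -0.5, so the score should be above 1
    ok &= f(0.2, -0.5) > 1.0
    return ok

# The plain ratio fails: with B negative, f(0.2, -0.5) = -0.4 < 1,
# so the score says A is smaller than B when it is actually bigger.
print(satisfies_requirements(lambda a, b: a / b))  # False
```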
