12

I came across a definition for a "small set" (of the state space) $A \subset \Omega$: there exists a $\delta > 0$ and a measure $\mu$ such that $p^{(k)}(x, \cdot) \geq \delta \mu (\cdot)$ for every $x \in A$. In this case, they say that $A$ has lag $k$.
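To make this concrete for myself, I tried checking the condition for a finite chain (the $3\times 3$ transition matrix below is made up; this is just a sketch in Python). Taking $A$ to be the whole state space and lag $k = 1$, the largest workable $\delta$ comes from the column-wise minima of the matrix:

```python
import numpy as np

# Hypothetical 3-state transition matrix (row x = distribution of the
# next state given the current state x).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# Take A = the whole state space and lag k = 1.  The largest delta for
# which p(x, .) >= delta * mu(.) can hold for every x in A comes from
# the column-wise minima m(j) = min_{x in A} p(x, j):
m = P.min(axis=0)        # [0.1, 0.3, 0.2]
delta = m.sum()          # 0.6
mu = m / delta           # normalised into a probability measure

# The minorization condition from the definition holds for every row:
assert np.all(P >= delta * mu - 1e-12)
```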

I have no intuition for this and I can't find anything anywhere that explains this with some examples. Can anyone tell me what it means? Why is it important?

  • 0
    This definition might depend on the specific context. Can you give the reference of the paper or textbook you are reading?2011-11-26
  • 0
Since the word "small" is too generic, my guess is that the definition is not meant to make sense in isolation. Perhaps the author wanted to define a term to be used only in that particular book/chapter/paper.2011-11-26
  • 1
    @Srivatsan as Didier wrote, such concept is indeed often used in the theory of Markov Chains, as well as petite sets which happen to be different.2011-11-26

4 Answers

8

This is actually an important and much used concept in the study of Markov chains. The number $\delta$ is meaningful because one assumes $\mu$ is a probability measure (and not only a measure). Then $\delta$ is used to evaluate the rate of the loss of memory of the initial state by the chain.
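To see concretely how $\delta$ controls the rate of memory loss: if the whole state space is small with lag $1$, a classical Doeblin/coupling argument bounds the total-variation distance between two copies of the chain started at different points by $(1-\delta)^n$ after $n$ steps. A minimal numerical check in Python, using a made-up 3-state matrix:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
delta = P.min(axis=0).sum()   # best lag-1 minorization constant: 0.6

Pn = np.eye(3)
for n in range(1, 6):
    Pn = Pn @ P
    # total-variation distance between the laws of X_n started
    # at x = 0 and at y = 2
    tv = 0.5 * np.abs(Pn[0] - Pn[2]).sum()
    assert tv <= (1 - delta) ** n + 1e-12   # geometric memory loss
```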

  • 1
    I wonder if you can advise a book where such concepts as small sets, petite sets etc. are clearly explained, not only defined.2011-11-26
  • 0
A book often referred to in this context is the monograph *General Irreducible Markov Chains and Non-Negative Operators* by Esa Nummelin (unfortunately, I cannot be more precise, having not read it myself).2011-11-26
  • 1
    Actually I just skimmed through a student's research project, written under the supervision of Jeffrey Rosenthal, which explains this stuff quite competently. Search for *Olga Chilina, Spring 2006* on [Rosenthal's website](http://probability.ca/jeff/grad.html).2011-11-26
  • 0
Thanks for the link. Have you by any chance read the new edition of *Markov Chains and Stochastic Stability*? Is it worth reading after the first edition? There is no new version in my library, so I was thinking of buying it (and that's not the only book I would like to buy)2011-11-26
  • 0
    @Ilya, the content is almost the same, plus some bibliographic updates and a foreword by Peter Glynn.2011-11-26
  • 0
That's it? Thank you; perhaps I will hold off on buying it then.2011-11-27
2

Yes, you can find a lot of related material in Jeffrey Rosenthal's papers, especially the survey "General state space Markov chains and MCMC algorithms".

Another useful reference is Nummelin's paper "MC's for MCMC'ists". It is more intuitive than Rosenthal's treatment; when I wrote a paper on this topic this summer, I found it really useful.

  • 0
Please refer to this paper; it discusses small sets at length: http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&sqi=2&ved=0CGYQFjAD&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.30.8109%26rep%3Drep1%26type%3Dps&ei=9NgeUO6oIaaJywGw3oCwCg&usg=AFQjCNGKL9_srAqK96CTz6r2XJrdTbbyqQ&sig2=NLpnfAmT48OgjFbR_aGUhA2012-08-05
2

If $A$ is "small," then the chain probabilistically regenerates, i.e. forgets its past, each time it enters $A$. Note that smallness has nothing to do with the measure of $A$; there are many examples of chains whose entire state space is small.

Background: if a transition kernel is of the form $$ p^{(1)}(x,B) = \nu(B) \quad{(*)} $$ for all $x$ and $B$, then the chain's elements are independent, because the transition does not depend on the current state $x$.

With a small set, we're only looking at $x \in A$. If $A$ is $k=1$ small, then $p^{(1)}(x,\cdot) \ge \delta \nu(\cdot)$ for all $x \in A$, and we can write the transition kernel as follows: \begin{align*} p^{(1)}(x,\cdot) &= \delta \nu(\cdot) + p^{(1)}(x,\cdot) -\delta \nu(\cdot)\\ &= \delta \nu(\cdot) + (1-\delta) \frac{p^{(1)}(x,\cdot) -\delta \nu(\cdot)}{1-\delta} \\ &= \delta \nu(\cdot) + (1-\delta)K(x, \cdot). \end{align*}

This is a discrete mixture, so with probability $\delta$ you're transitioning with $\nu$ and forgetting the past, and with probability $1-\delta$, you're transitioning with something that takes into account where you're coming from and not forgetting the past. The smallness property gives us the nonnegativity of $K$.
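This decomposition is exactly what the "splitting" sampler below simulates (a minimal sketch in Python; the 3-state matrix is made up): flip a $\delta$-coin, regenerate from $\nu$ on heads, otherwise move with the residual kernel $K$. Marginally this reproduces $p^{(1)}(x,\cdot)$.

```python
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])
m = P.min(axis=0)
delta = m.sum()               # 0.6
nu = m / delta                # regeneration measure
K = (P - m) / (1 - delta)     # residual kernel; smallness makes it >= 0

def step(x):
    """One transition via the splitting: regenerate with probability delta."""
    if rng.random() < delta:
        return rng.choice(3, p=nu)   # forgets x entirely
    return rng.choice(3, p=K[x])     # still remembers x

# Empirically, the split sampler reproduces row x of P:
x = 0
counts = np.bincount([step(x) for _ in range(200_000)], minlength=3)
freq = counts / counts.sum()         # close to P[0]
```

The event "heads" is precisely the regeneration event exploited in coupling proofs: on that step the chain's future is independent of its past.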

As @Did mentions, "$\delta$ is used to evaluate the rate of the loss of memory of the initial state by the chain." You can see that if $\delta = 1$, then we get equation $(*)$.

Other things: If $k > 1$, then it takes longer to forget, and if we’re talking about “petiteness” then it’s the same idea but with a random number of steps.

0

Small/petite sets are often used to prove recurrence of Harris chains on a general state space. For example, when a process makes continuous random jumps, the probability that the chain returns to any single point of the state space is $0$; yet it can be certain to return to a closed interval of that space, and it remains to show that this closed interval is small/petite to complete the proof of recurrence.
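As an illustration, consider a hypothetical AR(1) chain with Gaussian jumps (sketched in Python below): it almost surely never revisits the exact point $0$, but it keeps returning to the interval $[-1,1]$. For such a chain any compact interval is in fact small with $k = 1$, since the Gaussian transition density is bounded below on compact sets.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical chain X_{n+1} = 0.5 * X_n + N(0, 1) on the real line.
x = 5.0
returns_to_C, hits_zero = 0, 0
for _ in range(10_000):
    x = 0.5 * x + rng.normal()
    if -1.0 <= x <= 1.0:      # the compact set C = [-1, 1]
        returns_to_C += 1
    if x == 0.0:              # a single point of the state space
        hits_zero += 1

# The chain visits C thousands of times but (a.s.) never the point {0}.
```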

In Section 5.3.5 of this paper there is a construction of a measure $\mu$ with the properties in the definition. Notice that the equivalent of $k$ there is a random variable that also needs to be specified.