Exercise 1 (Multiplying on the right) Let \(P\) be an irreducible transition matrix on a finite state space \(S\) of size \(n\). Let \(h :S \to \mathbb{R}\) be a function on \(S\), and regard \(h\) as a column vector. Call a function \(h\) harmonic if \(Ph = h\). Show that every harmonic function is constant.


Solution. Let \(M = \max_{x \in S} h(x)\), and let \(s \in S\) be a state at which the maximum is attained, so that \(h(s) = M\). First suppose \(z \in S\) is a state with \(p_{sz} > 0\). Then

\[\begin{eqnarray*} M &=& h(s) \\ &=& (Ph)_s \\ &=& \sum_j p_{sj} h(j) \\ &=& p_{sz}h(z) + \sum_{j\not=z} p_{sj} h(j) \\ &\leq& p_{sz}h(z) + (1-p_{sz})M. \end{eqnarray*}\]
Rearranging, we obtain \[ p_{sz}M \leq p_{sz}h(z),\] so that \(h(z) \geq M\), and since \(M\) is the maximum, we have the equality \(h(z) = M\).

Since \(P\) is irreducible, for every \(z \in S\) there is an \(n \geq 1\) such that \(p_{sz}(n) >0\). Since we also have \[P^n h = h,\] the previous argument, with \(P^n\) in place of \(P\), applies to show that \(h(z) = M\). Hence \(h\) is constant.
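As an illustration (a sketch using a small made-up irreducible chain, not one from the exercises), we can check numerically that the only right eigenvectors of such a \(P\) with eigenvalue \(1\) are the constant vectors:

```r
# a small irreducible transition matrix (rows sum to one)
P <- matrix(c(0.50, 0.50, 0.00,
              0.25, 0.50, 0.25,
              0.00, 0.50, 0.50), nrow = 3, byrow = TRUE)
e <- eigen(P)                                    # right eigenvectors: P h = value * h
h <- Re(e$vectors[, which.min(abs(e$values - 1))])
h / h[1]                                         # all entries equal 1: h is constant
```

Since eigenvalue \(1\) is simple for an irreducible stochastic matrix, its eigenspace is exactly the constants.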


Exercise 2 Consider the Markov chain \(X\) on five states \(\{1,2,3,4,5\}\) started at \(1\) with transition matrix given by
P <- matrix(c(1/4, 1/4, 1/2,   0,   0,
              1/4, 1/8, 1/8,   0, 1/2,
              1/4,   0,   0, 1/2, 1/4,
                0,   0,   0, 1/2, 1/2,
              1/3, 1/3,   0,   0, 1/3), nrow = 5)
P <- t(P)
P
##           [,1]      [,2]  [,3] [,4]      [,5]
## [1,] 0.2500000 0.2500000 0.500  0.0 0.0000000
## [2,] 0.2500000 0.1250000 0.125  0.0 0.5000000
## [3,] 0.2500000 0.0000000 0.000  0.5 0.2500000
## [4,] 0.0000000 0.0000000 0.000  0.5 0.5000000
## [5,] 0.3333333 0.3333333 0.000  0.0 0.3333333

For each \(s \in \{1,2,3,4,5\}\), let \(T^s = \inf \{ n \geq 1: X(n)=s\}\) be the first return time to \(s\) for the chain started at \(s\). Compute \(\mathbb{E} T^s\) for each \(s\).


Solution. It is easy to verify that \(P\) is irreducible and aperiodic, since \(P^3\) has all positive entries:
Q = P %*% P %*% P
Q
##           [,1]      [,2]       [,3]      [,4]      [,5]
## [1,] 0.2083333 0.1575521 0.13671875 0.2031250 0.2942708
## [2,] 0.2560764 0.1903212 0.17643229 0.1015625 0.2756076
## [3,] 0.2152778 0.1657986 0.09114583 0.1875000 0.3402778
## [4,] 0.2222222 0.2013889 0.10416667 0.1250000 0.3472222
## [5,] 0.2731481 0.1915509 0.16840278 0.1041667 0.2627315

Thus the chain is ergodic, and we know that its unique stationary distribution \(\pi\) satisfies

\[\pi_s = (\mathbb{E} T^s)^{-1}.\]

We compute \(\pi\) as the left eigenvector of \(P\) with eigenvalue \(1\), normalized to sum to one.

eigen(t(P))
## eigen() decomposition
## $values
## [1]  1.0000000+0.0000000i  0.3362035+0.2981548i  0.3362035-0.2981548i
## [4] -0.2320368+0.0647526i -0.2320368-0.0647526i
## 
## $vectors
##               [,1]                  [,2]                  [,3]
## [1,] -0.5124053+0i -0.2980155+0.1574620i -0.2980155-0.1574620i
## [2,] -0.3877662+0i -0.2746034-0.1061630i -0.2746034+0.1061630i
## [3,] -0.3046734+0i -0.2085874+0.3796865i -0.2085874-0.3796865i
## [4,] -0.3046734+0i  0.6367273+0.0000000i  0.6367273+0.0000000i
## [5,] -0.6335822+0i  0.1444790-0.4309855i  0.1444790+0.4309855i
##                       [,4]                  [,5]
## [1,] -0.3520380-0.0196264i -0.3520380+0.0196264i
## [2,]  0.3405425+0.3764344i  0.3405425-0.3764344i
## [3,]  0.5751293+0.0000000i  0.5751293+0.0000000i
## [4,] -0.3897784-0.0344780i -0.3897784+0.0344780i
## [5,] -0.1738554-0.3223300i -0.1738554+0.3223300i
stat = eigen(t(P))$vectors[,1]
stat = stat/sum(stat)
stat
## [1] 0.2390953+0i 0.1809370+0i 0.1421648+0i 0.1421648+0i 0.2956381+0i
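Inverting \(\pi\) gives the expected return times. A short self-contained sketch (repeating the construction of `P` and `stat` from above, and taking real parts to drop the `0i`):

```r
P <- matrix(c(1/4, 1/4, 1/2,   0,   0,
              1/4, 1/8, 1/8,   0, 1/2,
              1/4,   0,   0, 1/2, 1/4,
                0,   0,   0, 1/2, 1/2,
              1/3, 1/3,   0,   0, 1/3), nrow = 5, byrow = TRUE)
stat <- Re(eigen(t(P))$vectors[, 1])   # left eigenvector for eigenvalue 1
stat <- stat / sum(stat)               # normalize to a probability vector
1 / stat                               # E[T^s] = 1/pi_s: approx 4.18 5.53 7.03 7.03 3.38
```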
Exercise 3 By running simulations, verify the central limit theorem for renewal processes, in the case where the inter-arrival times are given by a gamma distribution with shape \(n=2\) and rate \(\lambda=3\).


Solution. The gamma distribution is available in R via `rgamma(n, shape, rate = 1, scale = 1/rate)`. For this question, we can also view a Gamma(2, 3) inter-arrival time as the sum of two independent exponentials of rate \(\lambda = 3\). The following code is adapted from the excess life code; it gives the number of arrivals by time \(t\):
number <- function(t){
  # inter-arrival times are Gamma(2, 3): sums of two independent Exp(3)'s
  arr   <- rexp(1, 3) + rexp(1, 3)   # time of the first arrival
  inter <- arr
  while (arr[length(arr)] < t){
    inter <- c(inter, rexp(1, 3) + rexp(1, 3))
    arr   <- c(arr, arr[length(arr)] + inter[length(inter)])
  }
  length(arr) - 1   # the last recorded arrival falls after time t
}

Now we code the normalized quantity appearing in the central limit theorem for renewal processes:

nz <- function(t){
  vary <- 2 * (1/9)   # variance of a Gamma(2, 3) inter-arrival time
  mu   <- 2 * (1/3)   # mean inter-arrival time
  (number(t) - t/mu) / sqrt(t * vary / mu^3)
}

We call `nz(300)` a large number of times, plot a histogram, and compare with the density of the standard normal.

calls = replicate(5000, nz(300))
hist(calls, prob=TRUE)
curve(dnorm(x), add=TRUE)
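As a further sanity check (a sketch, self-contained and using `rgamma` directly rather than the `number` function above), the renewal law of large numbers says \(N(t)/t \to 1/\mu = 3/2\):

```r
lln_ratio <- function(t){
  s <- 0; n <- 0
  while (s < t){
    s <- s + rgamma(1, shape = 2, rate = 3)   # one Gamma(2, 3) inter-arrival
    n <- n + 1
  }
  (n - 1) / t   # arrivals strictly before passing time t, per unit time
}
lln_ratio(2000)   # approximately 3/2
```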


Exercise 4 Check (by pen and paper) the law of large numbers and the central limit theorem for renewal processes, in the special case where the renewal process is a Poisson process.

Solution. Let \(N\) be a Poisson counting process with intensity \(\lambda\); then the inter-arrival times are exponential with rate \(\lambda = \mu^{-1}\), and have variance \(\mu^2\). We know that \(N(t)\) is a Poisson random variable with mean \(\lambda t\). We already have a proof that \(N(t)/t \to \lambda\), but here we can also verify that, even without taking a limit, \[ \mathbb{E} N(t) /t = \lambda.\]

In addition, the sum of independent Poissons is again Poisson; thus \(N(n)\) has the same law as the sum of \(n\) independent Poisson random variables \(X_i\), each with mean \(\lambda\) and variance \(\lambda\). Considering first only integer times \(n\), we have that

\[ \frac{N(n) -n\lambda}{\sqrt{n \lambda}}\]

has the same law as \[ \frac{ \sum_{i=1} ^n X_i - n \lambda}{\sqrt{n \lambda}},\] which the classical central limit theorem assures us converges in distribution to a standard normal.

Let \([t]\) denote the integer part of \(t\), so that \([1.5] = 1\). Notice that

\[ N([t] -1) \leq N(t) \leq N( [t] +1),\] so that standard sandwich arguments give the result for general \(t\).
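Though the exercise asks for pen and paper, a quick numerical check of the Poisson case is easy (a sketch; the choices \(\lambda = 2\) and \(t = 500\) are arbitrary):

```r
lambda <- 2
t <- 500
# N(t) is Poisson(lambda * t); standardize and compare to a standard normal
z <- (rpois(10000, lambda * t) - lambda * t) / sqrt(lambda * t)
c(mean(z), var(z))   # both should be close to 0 and 1
```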


Exercise 5 Prove that if \(s\) is a recurrent state of a Markov chain started at \(s\), then with probability one the chain returns to \(s\) at infinitely many times \(n \in \mathbb{Z}^{+}\).


Solution. Let \(T_i\) be the \(i\)th return time to \(s\), with \(T_0 = 0\). We already know that \(T_1 < \infty\) with probability one. By the strong Markov property, the chain starts afresh at each return; in particular,

\[\mathbb{P}(T_1 < \infty, T_2 < \infty) = (\mathbb{P}(T_1 < \infty))^2 = 1,\] and inductively \(\mathbb{P}(T_i < \infty) = 1\) for every \(i\). Since a countable intersection of probability-one events has probability one,

\[ \mathbb{P} \big( \bigcap _{i=0} ^{\infty} \{ T_i < \infty \} \big) = 1.\]
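As an illustration (a sketch with an arbitrary two-state chain, not part of the proof), simulating an irreducible finite chain shows the number of returns growing without bound:

```r
P <- matrix(c(0.3, 0.7,
              0.6, 0.4), nrow = 2, byrow = TRUE)
x <- 1         # start at state 1
visits <- 0    # count of returns to state 1
for (i in 1:10000){
  x <- sample(1:2, 1, prob = P[x, ])
  if (x == 1) visits <- visits + 1
}
visits   # on the order of 10000 * pi_1, growing linearly with the horizon
```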


Exercise 6 Prove that if an irreducible Markov chain has a recurrent state, then all the states must also be recurrent.


Solution. Let \(s\in S\) be the given recurrent state of the Markov chain \(X\), so that we know that

\[T^s = \inf \{ n \geq 1: X_n=s | X_0 = s\}\]

is finite with probability one. Let \(t \in S\) be another state. By irreducibility, there exists \(n \geq 1\) with \(p_{st}(n) >0\), so in \(n\) steps it is possible to reach \(t\) from \(s\); on this event, the chain must still return to \(s\) afterwards. Thus, by the strong Markov property, \(T^{ts}\), the time it takes a chain started at \(t\) to reach \(s\), is finite with probability one.

One way to return to \(t\) starting at \(t\) is to go to \(s\) and then from \(s\) to go to \(t\). Thus it suffices to argue that \(T^{st}\), the time to reach \(t\) from \(s\), is also finite with probability one. Each time the chain returns to \(s\), it has an independent, strictly positive chance of reaching \(t\) before its next return to \(s\); since \(s\) is visited infinitely often by Exercise 5, the strong Markov property guarantees this eventually happens with probability \(1\).