```{solution}
* As suggested, let $h$ be a harmonic function attaining its maximum value $M = h(s)$ at some state $s$. Suppose $p_{sz} > 0$. Since $Ph = h$, we have
\begin{eqnarray*}
M &=& h(s) \\
&=& (Ph)_s \\
&=& \sum_j p_{sj} h(j) \\
&=& p_{sz}h(z) + \sum_{j\not=z} p_{sj} h(j) \\
&\leq& p_{sz}h(z) + (1-p_{sz})M.
\end{eqnarray*}
Rearranging, we obtain
$$ p_{sz}M \leq p_{sz}h(z),$$
so that $h(z) \geq M$, and since $M$ is the maximum, we have the desired equality $h(z) = M$. Since $P$ is irreducible, for every $z \in S$ there is an $n \geq 1$ such that $p_{sz}(n) > 0$. Since we also have
$$P^n h = h,$$
the previous argument, with $P^n$ taking the place of $P$, applies to show that $h(z) = M$.
* Notice that any solution to $(P-I)v = 0$ must be harmonic and thus must be a constant function/vector, so that the nullspace of $P-I$ is generated by the constant (column) vector $(1,1,1)$.
* By the rank-nullity theorem, the matrix $P - I$ has rank $n-1$, and thus its transpose $P^t - I$ also has a kernel of dimension $1$. Notice that $v = vP$ if and only if $P^tv^t = v^t$; that is, $v$ is a solution to $vP = v$ if and only if $v^t$ is in the nullspace of $P^t - I$. Hence the subspace of solutions to $vP = v$ has dimension $1$.
* By definition, a stationary distribution $\pi$ satisfies $\pi = \pi P$, and $\pi$ has all nonnegative entries that sum to $1$; thus if $\pi'$ is another stationary distribution, since the solution space has dimension $1$, we must have $c \pi = \pi'$ for some scalar $c$; summing the entries on both sides yields $c = 1$.
```
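The rank and uniqueness argument can be checked numerically. Below is a minimal sketch using a hypothetical irreducible (and aperiodic) $3 \times 3$ transition matrix; the particular matrix is an illustrative assumption, not part of the exercise:

```{r}
# Hypothetical irreducible 3-state transition matrix (illustration only)
P3 <- matrix(c(0.5, 0.3, 0.2,
               0.2, 0.5, 0.3,
               0.3, 0.2, 0.5), nrow = 3, byrow = TRUE)

# P - I kills the constant vector, so its rank is n - 1 = 2:
qr(P3 - diag(3))$rank

# The unique stationary distribution is the normalized left eigenvector
# of P3 for eigenvalue 1, i.e. the top eigenvector of t(P3):
v <- Re(eigen(t(P3))$vectors[, 1])
pi3 <- v / sum(v)
pi3
max(abs(pi3 %*% P3 - pi3))   # effectively zero: pi3 is stationary
```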

```{exercise}
Consider the Markov chain $X$ on five states $\{1,2,3,4,5\}$ started at $1$ with transition matrix given by
```

```{r}
P <- matrix(c(1/4, 1/4, 1/2, 0, 0,
              1/4, 1/8, 1/8, 0, 1/2,
              1/4, 0, 0, 1/2, 1/4,
              0, 0, 0, 1/2, 1/2,
              1/3, 1/3, 0, 0, 1/3), nrow = 5)
P <- t(P)
P
```

For each $s \in \{1,2,3,4,5\}$, let $T^s = \inf \{ n \geq 1: X(n)=s | X(0)=s\}$.

* By running simulations, estimate $\mathbb{E} T^s$ for each $s$.
* Using our theory of Markov chains, compute *exactly* $\mathbb{E} T^s$ for each $s$.

```{solution}
It is easy to verify that $P$ is irreducible and aperiodic, since $P^3$ has all positive entries:
```

```{r}
Q <- P %*% P %*% P
Q
```

Thus we know that the stationary distribution $\pi$ satisfies
$$\pi_s = (\mathbb{E} T^s)^{-1}.$$
We easily compute $\pi$:

```{r}
eigen(t(P))
stat <- eigen(t(P))$vectors[,1]
stat <- stat/sum(stat)
stat
```

```{exercise}
By running simulations, verify the central limit theorem for renewal processes, in the case where the inter-arrival times are given by a gamma distribution with shape $n=2$ and rate $\lambda=3$.
```

```{solution}
The gamma distribution is available in R via `rgamma(n, shape, rate = 1, scale = 1/rate)`. For this question, we can also see it as the sum of two independent exponentials of rate $\lambda = 3$. The following code is adapted from the excess life code; it gives the number of arrivals by time $t$:
```

```{r}
number <- function(t){
  # Running arrival times; each inter-arrival is a sum of two
  # independent exponentials of rate 3, i.e. gamma(2, 3).
  arr <- rexp(1, 3) + rexp(1, 3)
  while (arr[length(arr)] < t) {
    arr <- c(arr, arr[length(arr)] + rexp(1, 3) + rexp(1, 3))
  }
  # The last arrival recorded exceeds t, so discount it.
  length(arr) - 1
}
```
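To verify the CLT itself, one can simulate many copies of $N(t)$ for a large $t$ and standardize. For gamma inter-arrivals with shape $2$ and rate $3$, the mean is $\mu = 2/3$ and the variance is $\sigma^2 = 2/9$, and the renewal CLT says $N(t)$ is approximately normal with mean $t/\mu$ and variance $t\sigma^2/\mu^3$. The sketch below is self-contained, counting arrivals directly from cumulative sums; the choice $t = 100$ and the number of replications are arbitrary:

```{r}
set.seed(2)
t0 <- 100
mu <- 2/3       # mean of gamma(2, 3)
sig2 <- 2/9     # variance of gamma(2, 3)
# Count arrivals by time t from cumulative sums of gamma(2, 3)
# inter-arrival times; 5*t/mu draws is comfortably more than needed.
N <- function(t) {
  arr <- cumsum(rgamma(ceiling(5 * t / mu), shape = 2, rate = 3))
  sum(arr <= t)
}
z <- replicate(2000, (N(t0) - t0/mu) / sqrt(t0 * sig2 / mu^3))
c(mean(z), sd(z))   # should be close to 0 and 1
hist(z, freq = FALSE, breaks = 30, main = "Standardized N(t) vs N(0,1)")
curve(dnorm(x), add = TRUE)
```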

```{exercise}
Prove that if $s$ is a recurrent state of a Markov chain that is started at $s$, then, with probability one, the chain returns to $s$ at infinitely many times $n \in \mathbb{Z}^{+}$.
```

```{solution}
Let $T_i$ be the $i$th return time to $s$, so that $T_0 = 0$. We already know that $T_1 < \infty$ with probability one. By the strong Markov property, the same holds for all the $T_i$; in particular,
$$\mathbb{P}(T_1 < \infty, T_2 < \infty) = (\mathbb{P}(T_1 < \infty))^2 = 1.$$
Iterating this argument gives $\mathbb{P}(T_1 < \infty, \ldots, T_n < \infty) = 1$ for every $n \geq 1$, and hence, by continuity of probability measures,
$$ \mathbb{P} \big( \bigcap _{i=0} ^{\infty} \{ T_i < \infty \} \big) = 1.$$
```
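As a sanity check of the statement (not of the proof), one can simulate an irreducible chain and watch returns accumulate. The sketch below assumes the five-state transition matrix `P` from the earlier exercise; the run length is an arbitrary choice:

```{r}
set.seed(3)
nsteps <- 1e5
x <- numeric(nsteps)
x[1] <- 1
for (i in 2:nsteps) x[i] <- sample(1:5, 1, prob = P[x[i-1], ])
# Returns to the (recurrent) state 1 keep piling up as nsteps grows:
sum(x[-1] == 1)
```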

```{exercise} Prove that if an irreducible Markov chain has a recurrent state, then all the states must also be recurrent. ```

```{solution}
Let $s\in S$ be the given recurrent state of the Markov chain $X$, so that we know that
$$T^s = \inf \{ n \geq 1: X_n=s | X_0 = s\}$$
is finite with probability one. Let $t \in S$ be another state. By irreducibility, there exists $n \geq 1$ such that $p_{st}(n) >0$, so in $n$ steps it is possible to reach $t$ from $s$; on this event, since $s$ is recurrent, the chain must still eventually return to $s$. Thus by the strong Markov property, $T^{ts}$, the time it takes for a chain starting at $t$ to reach $s$, is finite with probability one. *One* way to return to $t$ starting at $t$ is to go to $s$ and then from $s$ to go to $t$. Thus it suffices to argue that $T^{st}$, the time it takes for a chain starting at $s$ to reach $t$, is also finite with probability one. Since each time the chain returns to $s$ it has an independent, strictly positive chance of reaching $t$, by the strong Markov property this eventually happens with probability $1$.
```
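The two hitting times in this argument can be estimated by simulation for a concrete chain. Here is a sketch using the five-state matrix `P` from the earlier exercise, with $t = 4$ and $s = 1$; the particular chain and states are illustrative assumptions:

```{r}
set.seed(4)
# Number of steps for a chain started at state t0 to first hit state s.
hit <- function(t0, s, P) {
  x <- t0
  n <- 0
  while (x != s) {
    x <- sample(1:5, 1, prob = P[x, ])
    n <- n + 1
  }
  n
}
c(T_ts = mean(replicate(5000, hit(4, 1, P))),   # estimate of E[T^{ts}]
  T_st = mean(replicate(5000, hit(1, 4, P))))   # estimate of E[T^{st}]
```

Both averages are finite, consistent with the hitting times being finite with probability one in an irreducible chain.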

* Version: `r format(Sys.time(), '%d %B %Y')`
* [Rmd Source](https://tsoo-math.github.io/ucl/QHW6-sols.Rmd)