Exercise 1 (Exponential random variables) Let \(E_1, \ldots, E_n\) be independent exponential random variables with rates \(\lambda_1, \ldots, \lambda_n\). Show that \[\mathbb{P} ( E_1 = \min (E_1, \ldots, E_n)) = \frac{\lambda_1}{ \lambda_1 + \cdots + \lambda_n}.\]

Solution. Let \(E = \min(E_2, \ldots, E_n)\). Observe that \(E_1\) is the minimum of them all if and only if \(E_1\) is the minimum of \(E_1\) and \(E\). Since \(E_2, \ldots, E_n\) are independent, \(E\) is an exponential random variable with rate \(\lambda_2 + \cdots + \lambda_n\). Since \(E_1\) is independent of \(E\), we already know how to handle this case of two exponentials from the lecture notes; we have that \[\mathbb{P}( E_1 = \min(E_1, E)) = \frac{\lambda_1}{ \lambda_1 + (\lambda_2 + \cdots + \lambda_n )},\] as desired.
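As a quick sanity check of this identity (not part of the required proof), here is a small simulation; the rates in lambda and the number of replications are arbitrary choices for illustration.

lambda <- c(2, 1, 5)                        # illustrative rates lambda_1, lambda_2, lambda_3
trial <- function(){
  E <- rexp(length(lambda), rate = lambda)  # one draw of E_1, ..., E_n
  which.min(E) == 1                         # did E_1 ring first?
}
mean(replicate(10000, trial()))             # empirical estimate of the probability
lambda[1] / sum(lambda)                     # the claimed value, here 2/8 = 0.25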


Exercise 2 (Q matrices) Consider the continuous-time Markov chain \(X\) with three states \(\{1,2,3\}\) and \(Q\) matrix given by
Q <- matrix(c(-6,3,3, 2,-3,1, 2,7,-9), nrow =3)
Q = t(Q)
Q
##      [,1] [,2] [,3]
## [1,]   -6    3    3
## [2,]    2   -3    1
## [3,]    2    7   -9

Solution. The stationary measure \(\pi\) satisfies \(\pi Q = 0\); equivalently, \(\pi^{\top}\) lies in the null space of \(Q^{\top}\), which we compute with the pracma package and normalize so that its entries sum to one.

library("pracma")
Y <- t(Q)                  # pi Q = 0 is equivalent to t(Q) %*% t(pi) = 0
stat <- nullspace(Y)       # basis vector for the null space of t(Q)
stat <- stat / sum(stat)   # normalize so the entries sum to one
stat
##      [,1]
## [1,] 0.25
## [2,] 0.60
## [3,] 0.15
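As a sanity check (not part of the original solution), the rows of \(Q\) should sum to zero and stat should solve \(\pi Q = 0\) up to numerical error:

rowSums(Q)        # rows of a generator sum to zero
t(stat) %*% Q     # should be numerically zero in every entry
sum(stat)         # stat is normalized to sum to one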

First we code one jump of the chain, keeping track of how much time it takes. Since we are handed a \(Q\) matrix, we code a competing-clocks version. We replace the diagonal entries of \(Q\) with positive numbers that we will not use, in order to simplify the competing-clocks code. We also introduce Inf, which R takes to be infinity.

Q[1,1] <-99
Q[2,2] <-99
Q[3,3] <-99
Q
##      [,1] [,2] [,3]
## [1,]   99    3    3
## [2,]    2   99    1
## [3,]    2    7   99
jump <- function(i){
  q <- Q[i, ]
  # competing exponential clocks, one for each possible next state
  clocks <- c(rexp(1, q[1]), rexp(1, q[2]), rexp(1, q[3]))
  clocks[i] <- Inf           # never jump to the current state
  j <- which.min(clocks)     # the clock that rings first
  time <- clocks[j]          # the holding time
  c(j, time)
}
runMC <- function(x){
  i <- x[1]                  # starting state
  t <- x[2]                  # time horizon
  totaltime <- 0
  while(totaltime < t){
    new <- jump(i)
    totaltime <- new[2] + totaltime
    if(totaltime < t){ i <- new[1] }   # only move if the jump occurs before time t
  }
  i                          # the state occupied at time t
}
y <- replicate(500, runMC(c(1, 123)))   # 500 independent runs started at state 1, each up to time 123
av <- c(sum(y == 1)/500, sum(y == 2)/500, sum(y == 3)/500)   # empirical distribution of the state at time 123
av
## [1] 0.266 0.592 0.142
av[3] - stat[3]
## [1] -0.008
av-stat
##        [,1]
## [1,]  0.016
## [2,] -0.008
## [3,] -0.008
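Another check, beyond the original solution: the transition matrix \(P(t) = e^{tQ}\) should have every row close to the stationary distribution for moderately large \(t\). Since the diagonal of Q was overwritten above, we rebuild the generator first; the choice \(t = 50\) is arbitrary, and we use pracma's expm for the matrix exponential.

Q <- matrix(c(-6, 3, 3, 2, -3, 1, 2, 7, -9), nrow = 3, byrow = TRUE)   # restore the original generator
P50 <- expm(50 * Q)   # transition probabilities at time t = 50
P50                   # every row should be close to stat = (0.25, 0.60, 0.15)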

Exercise 3 (Stationary measures) Let \(P\) be the transition semigroup of an irreducible continuous-time Markov chain on a finite state space \(A\) with stationary measure \(\pi\), and let \(Q\) be its generator. Let \(M\) be the transition matrix of the corresponding jump chain, with stationary measure \(\hat{\pi}\). Show that

\[\hat{\pi}_i = \frac{q_{ii} \pi_i}{\sum_j q_{jj} \pi_j }.\]

Solution. Recall that \[ m_{ij} = \frac{q_{ij}}{-q_{ii}}\] for \(i \not = j\) and \(m_{ii}=0\) on the diagonal. Thus we have

\[\begin{eqnarray*} (\hat{\pi} M)_j &=& \frac{1}{\sum_k q_{kk} \pi_k} \sum_i q_{ii} \pi_i m_{ij} \\ &=& \frac{1}{\sum_k q_{kk} \pi_k} \sum_{i\not = j} -\pi_i q_{ij}, \end{eqnarray*}\] where the \(i = j\) term vanishes because \(m_{jj} = 0\), and \(q_{ii} m_{ij} = -q_{ij}\) for \(i \neq j\).

Since \(\pi\) is stationary, we know that \(\pi Q = 0\), so that \[\sum_{i} \pi_i q_{ij} = 0 \quad \text{and hence} \quad \sum_{i \neq j} -\pi_i q_{ij} = \pi_j q_{jj}.\] Therefore \[ \frac{1}{\sum_k q_{kk} \pi_k} \sum_{i\not = j} -\pi_i q_{ij} = \frac{q_{jj} \pi_j}{\sum_k q_{kk} \pi_k} = \hat{\pi}_j,\]

as desired.
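As a numerical illustration (an addition beyond what the exercise asks, reusing the generator and stationary measure from Exercise 2), we can check the formula in R by building the jump-chain matrix \(M\) and verifying that \(\hat{\pi} M = \hat{\pi}\); the variable names below are ad hoc.

Q <- matrix(c(-6, 3, 3, 2, -3, 1, 2, 7, -9), nrow = 3, byrow = TRUE)   # generator from Exercise 2
pi_stat <- c(0.25, 0.60, 0.15)                        # its stationary measure, found earlier
M <- sweep(Q, 1, -diag(Q), "/")                       # m_ij = q_ij / (-q_ii)
diag(M) <- 0                                          # m_ii = 0
pihat <- diag(Q) * pi_stat / sum(diag(Q) * pi_stat)   # the claimed jump-chain stationary measure
max(abs(pihat %*% M - pihat))                         # should be numerically zero
sum(pihat)                                            # and pihat sums to one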