Exercise 1 (Integrals)
Let X \geq 0 be a continuous random variable with finite first moment. Prove that
\mathbb{E}X = \int_0^{\infty} \mathbb{P}(X > t)\, dt = \int_0^{\infty} [1 - F_X(t)]\, dt.
Hint: use a double integral.
Let X and Y be nonnegative independent continuous random variables. Prove that for t > 0, we have
\mathbb{P}(XY > t) = \int_0^{\infty} \mathbb{P}(X > t/y) f_Y(y)\, dy,
where f_Y is the probability density function of Y.
Using the previous results, prove that \mathbb{E}(XY) = (\mathbb{E}X)(\mathbb{E}Y), assuming all the expectations are finite.
Solution.
\mathbb{E}X = \int_0^{\infty} x f(x)\, dx = \int_0^{\infty} f(x) \Big[ \int_0^x 1\, dt \Big] dx = \int_0^{\infty} \int_t^{\infty} f(x)\, dx\, dt = \int_0^{\infty} \mathbb{P}(X > t)\, dt;
here we note that the interchange in the order of integration is permissible since everything is nonnegative and \mathbb{E}X < \infty.
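As a quick numerical sanity check (the Exp(2) distribution below is an illustrative choice of ours, not part of the exercise), we can compare the tail integral against the known mean \mathbb{E}X = 1/2:
# Tail-sum formula check: integrate P(X > t) over (0, Inf) for
# X ~ Exp(rate = 2); the result should be (close to) EX = 1/2.
integrate(function(t) 1 - pexp(t, rate = 2), 0, Inf)$value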
Since X and Y are independent, we have
\mathbb{P}(XY > t) = \int_0^{\infty} \int_{t/y}^{\infty} f_X(x) f_Y(y)\, dx\, dy = \int_0^{\infty} \mathbb{P}(X > t/y) f_Y(y)\, dy,
as desired.
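As an illustrative check of this identity (taking X, Y \sim \text{Exp}(1) is our own choice), we can compare a Monte Carlo estimate of \mathbb{P}(XY > 1) against the integral, using \mathbb{P}(X > 1/y) = e^{-1/y} and f_Y(y) = e^{-y}:
set.seed(1)
x <- rexp(1e6); y <- rexp(1e6)
mean(x * y > 1)                                     # Monte Carlo estimate
integrate(function(y) exp(-1/y - y), 0, Inf)$value  # the identity's integral
The two numbers should agree up to Monte Carlo error.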
By the previous results,
\mathbb{E}(XY) = \int_0^{\infty} \mathbb{P}(XY > t)\, dt = \int_0^{\infty} \int_0^{\infty} \mathbb{P}(X > t/y) f_Y(y)\, dy\, dt = \int_0^{\infty} f_Y(y) \Big[ \int_0^{\infty} \mathbb{P}(X > t/y)\, dt \Big] dy = \int_0^{\infty} y f_Y(y)\, dy \cdot \int_0^{\infty} \mathbb{P}(X > s)\, ds = (\mathbb{E}Y)(\mathbb{E}X),
where we substituted s = t/y in the inner integral.
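Again a quick Monte Carlo check (the exponential and gamma choices are ours, for illustration only):
set.seed(1)
x <- rexp(1e6)               # EX = 1
y <- rgamma(1e6, shape = 2)  # EY = 2, independent of x
mean(x * y)                  # should be close to (EX)(EY) = 2
mean(x) * mean(y)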
Exercise 2 (Renewal equations) Show that ϕ=H+H∗m satisfies the renewal equation ϕ=H+ϕ∗F.
Solution. We will star both sides of the given equation by F to obtain ϕ∗F=H∗F+H∗m∗F=H∗(F+m∗F). Since we know from our first renewal equation that m=F+m∗F, we obtain ϕ∗F=H∗m, and hence H+ϕ∗F=H+H∗m=ϕ, which is the desired renewal equation.
Exercise 3 (Excess life) With the usual notation, let E be the excess life of a renewal process with renewal function m, and let F be the cumulative distribution function of the inter-arrival times.
By conditioning on the first arrival, show that
\mathbb{P}(E(t) > y) = \int_0^t \mathbb{P}(E(t-x) > y)\, dF(x) + \int_{t+y}^{\infty} dF(x).
Apply the general theorem on renewal equations to obtain that
\mathbb{P}(E(t) \leq y) = F(t+y) - \int_0^t [1 - F(t+y-x)]\, dm(x).
Then use the key renewal theorem to find the limiting distribution of the excess life as t \to \infty.
Solution.
Conditioning on the first arrival, set
\phi(x) = \mathbb{P}(E(t) > y \mid X_1 = x).
For t < x, the excess life is E(t) = x - t, so \phi(x) = 0 if x - t \leq y and \phi(x) = 1 if x - t > y. If t \geq x = X_1, then the process renews at time x, so E(t) has the same law as E(t-x) and \phi(x) = \mathbb{P}(E(t-x) > y). Thus
\mathbb{P}(E(t) > y) = \mathbb{E}\phi(X_1) = \int_0^t \mathbb{P}(E(t-x) > y)\, dF(x) + \int_{t+y}^{\infty} 1 \cdot dF(x),
as desired.
Setting \psi(t) = \mathbb{P}(E(t) > y), the equation from the first part can be rewritten as the renewal-type equation
\psi(t) = 1 - F(t+y) + (\psi*F)(t).
Thus by our general theorem on renewal equations, we have
\psi(t) = 1 - F(t+y) + \int_0^t [1 - F(t+y-x)]\, dm(x),
and since \mathbb{P}(E(t) \leq y) = 1 - \psi(t), rearranging gives the desired result.
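As a sanity check of this formula (the Poisson case below is our own illustrative choice, not part of the exercise), take Exp(1) inter-arrival times, so that m(t) = t and dm(x) = dx; the formula then gives \mathbb{P}(E(t) \leq y) = 1 - e^{-y}, i.e. the excess life is again Exp(1), exactly as memorylessness predicts. A short simulation agrees:
excess <- function(t){
  # Run a renewal process with Exp(1) inter-arrivals past time t
  # and return the excess life at t (a helper we define here).
  s <- 0
  while(s <= t) s <- s + rexp(1)
  s - t
}
set.seed(1)
e <- replicate(10000, excess(10))
mean(e <= 0.5)   # empirical P(E(10) <= 0.5)
1 - exp(-0.5)    # prediction from the formula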
Finally, for the limiting distribution, set k_y(x) = 1 - F(x+y). Since 0 \leq 1 - F \leq 1 is non-increasing and
\begin{eqnarray*} \int_0 ^{\infty} k_y(x)dx &=& \int_0 ^{\infty} [1-F(x+y)] dx \\ &=& \mu - \int_0 ^y [1-F(x)] dx \\ &\leq& \mathbb{E}X_1 = \mu < \infty, \end{eqnarray*}
the key renewal theorem applies, and gives that
\begin{eqnarray*} \lim_{t \to \infty} \int_0 ^t [1-F(t+y-x)] dm(x) &=& \frac{1}{\mu} \int_0 ^{\infty} [1-F(x+y)]dx\\ &=& \frac{1}{\mu} \Big( \mu - \int_0 ^y [1-F(x)] dx \Big). \end{eqnarray*}
Now, remember to mind your surroundings: we still owe the universe a one minus. Since \mathbb{P}(E(t) \leq y) = F(t+y) - \int_0^t [1-F(t+y-x)]\, dm(x) and F(t+y) \to 1, we conclude that
\lim_{t \to \infty} \mathbb{P}(E(t) \leq y) = 1 - \frac{1}{\mu} \Big( \mu - \int_0^y [1-F(x)]\, dx \Big) = \frac{1}{\mu} \int_0^y [1-F(x)]\, dx.
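To see the limit in action (the Uniform(0,1) inter-arrival times below are an illustrative choice of ours), note that here \mu = 1/2 and 1 - F(x) = 1 - x on [0,1], so the limiting law is \frac{1}{\mu} \int_0^y [1-F(x)]\, dx = 2y - y^2 for 0 \leq y \leq 1:
excess_unif <- function(t){
  # Excess life at time t for Uniform(0,1) inter-arrivals
  # (helper defined for this check).
  s <- 0
  while(s <= t) s <- s + runif(1)
  s - t
}
set.seed(1)
e <- replicate(10000, excess_unif(50))
mean(e <= 0.25)     # empirical, with t = 50 playing the role of 'large'
2*0.25 - 0.25^2     # limiting value, 0.4375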
Exercise 4 (Random tiles) I have two types of tiles, one of length \pi and another of length \sqrt{2}. Suppose that I tile the half-line [0, \infty) via the following procedure: I pick one of the two types of tiles with equal probability and place it, starting at the origin. I continue this procedure indefinitely, and independently.
Suppose that I pick a large t. Is it equally likely that t is covered by each of the two tile types?
Run a simulation to estimate the probability that t is covered by a tile of length \pi.
Solution.
If we pick some large t, then from our experience with size-biasing we know it is more likely that we will land in the longer tile, of length \pi.
The following code executes the tiling procedure up to time t and returns the type of the tile covering t. We will label the \pi tile as type 1 and the \sqrt{2} tile as type 0.
tile <- function(t){
  # Draw the first tile: 1 = pi tile, 0 = sqrt(2) tile.
  types <- rbinom(1, 1, 0.5)
  tlength <- sqrt(2) * (1 - types[1]) + pi * types[1]
  # Keep laying tiles until the tiling extends past t.
  while(t > tlength){
    types <- c(types, rbinom(1, 1, 0.5))
    tlength <- tlength + sqrt(2) * (1 - types[length(types)]) + pi * types[length(types)]
  }
  # The last tile laid is the one covering t.
  types[length(types)]
}
Now, for some large t, we sample from our function a large number of times and compute the proportion of samples in which t lands in a tile of length \pi.
y = replicate(1000, tile(111))
mean(y)
## [1] 0.688
You might guess that this probability is the size-biased one: by the renewal-reward theorem, the long-run fraction of the half-line covered by \pi-tiles is \pi/(\pi + \sqrt{2}), which matches the simulation:
pi/(pi + sqrt(2))
## [1] 0.68958