Statistics 650 Applied Stochastic Processes
Spring 2018, MW 5-6:15pm, Mth 0409.
A directory containing old final exams and sample exam problems, and also a list of
topics we emphasized in this term's course, can be found here.
Please fill out the online evaluation form for this course and instructor at http://CourseEvalUM.umd.edu. Thank you.
Instructor: Eric Slud, Statistics Program, Math. Dept.
Office: Mth 2314, x5-5469, email evs@math.umd.edu
Office hours: M 10:30-11:30 am, W 1:30, or by appointment.
Required Course Text:
R. Durrett, Essentials of Stochastic Processes,
1st ed. 1999, 2nd ed. 2010, 3rd ed. 2016, Springer.
This is a good book, with good intuitive explanations and many
interesting problems.
The beta 2nd edition is available free online and also here.
Recommended Texts: Karlin, S. and Taylor, H.,
A First Course in Stochastic Processes,
2nd ed. 1975,
Academic Press.
This is an excellent standard book, with thorough treatment of all important topics and
a broad selection of applications. However, its problems are (much) too hard.
Additional Recommended Course Texts (used in Spring 2016):
R. Serfozo, Basics of Applied Stochastic Processes,
2009 Springer.
M. Lefebvre, Applied Stochastic Processes,
2007 Springer.
Both of these books can be freely downloaded as e-books (through the University of
Maryland libraries) by currently registered University of Maryland College Park students, who
can also purchase hard-copies for $25.00. (Inquire at the EPSL Library for details.)
Both of these books are well-written mathematical treatments of our subject, but
the first is written with slightly more advanced prerequisites, assuming measure-theoretic
probability. I may draw some problems from these books (but will reproduce them in a convenient form).
Updated Homework Solutions
Info about Mid-term Test
Info about Final Examination
Sample Final-Exam Problems
Course Coverage: The core material of the course consists of Chapters 1-4 in
the Serfozo book, with considerable skipping of more advanced topics, and Chapters 3, 5,
and 6 in the Lefebvre book: the primary topics are Discrete Time Markov Chains, Poisson
Processes, Continuous Time Markov Chains, and Renewal Processes. See detailed Course Outline below. The primary applications are population and
queueing processes, but many other special applied examples occur in both books.
Overview: This course is about Stochastic Processes, or
time-evolving collections of random variables, primarily the discrete-state
Markov chains, in discrete and in continuous time, which arise in applications in a variety of disciplines.
For the first part of the course, both the random variables and the time index-set are discrete:
in this setting, our object of study is discrete-time discrete-state Markov chains.
Examples of "states" which arise in applications include the size of a population or a
waiting-line, or the state ("in control" versus "out of control") of a manufacturing process,
or other indicators such as "married" or "employed" etc. for an individual. "Markov chains" are
time-evolving systems whose future trajectory does not depend on their past history,
given their current state. But many of the most interesting applications involve the generalization
of the same ideas to continuous time. Prerequisite: Stat 410, or Math 410 plus one semester
of probability theory.
Course requirements and Grading: There will be 8 graded homework sets (one every 1½ weeks) which together will count 50% of the course grade.
In all homework problems, your solution will be graded correct only if you
fully justify your reasoning. There will also be an in-class test and a final examination,
which will respectively count 20% and 30% toward the overall course grade.
Assignment 1.
(First 1.5 weeks of course, HW due Fri., Feb. 2).
Assignment 2.
(Second 2 weeks of course, HW due Fri., Feb. 16, 11:59pm).
Assignment 3.
(Next 2 weeks of course, HW due Fri., March 2, 11:59pm).
Assignment 4.
(Next 2 weeks of course, HW due Wed., March 14).
Assignment 5.
(Next 2 weeks of course, HW due Fri., March 30, 11:59pm.).
(I) Suppose that X_t for t ≥ 0 is a homogeneous continuous-time Markov chain with
Assignment 6.
(Next 2 weeks of course, HW due Fri., April 13, 6pm.).
In connection with problems (A)-(B) below, note that a continuous-time
irreducible HMC is recurrent if and only if its embedded discrete-time chain is
recurrent. (The reason for that is that recurrence is equivalent to saying
that with probability 1 the chain starting from state i
returns to i infinitely often, and this has to do only with
the sequence of successive states entered by the chain and not at all with the
time it takes to make the transitions. So this holds even if the chain is explosive
and makes infinitely many jumps in finite time.)
(A) Show that the finite-state continuous-time Markov Chain X(t) is positive-recurrent
(B) Give an example, in terms of the continuous-time ladder chain X(t) with state-space
(C) Find the exact transition probabilities P_t(a,b) of the HMC with 3×3 intensity matrix
Assignment 7.
(Next 2 weeks of course, HW due Fri., April 27, 11:59pm.).
(I) Suppose that Y ~ F is a service-time random variable, and that N(t)
is an independent
(II) Show that it follows from the Blackwell Renewal Theorem for a continuous-time irreducible null-
Assignment 8.
(Next 2 weeks of course, HW due Fri., May 11, 4:30pm.).
(A) (10 points) Suppose that X_{k+1} = ∑_{1 ≤ j ≤ X_k} Z_{k,j} is a Branching process, where the Z_{k,j} are all iid
(B) (20 points) Let X_n be a discrete-time Birth-death HMC with 0 < λ_k ≤ μ_k for all k ≥ 1, and λ_0 = 1.
(C) (10 points) Suppose that X_n is a fair random walk on the d-dimensional Cartesian product S of the
(D) (10 points) If Y_n is a Homogeneous Markov Chain (discrete-time) on the state-space
(E) (25 points) Consider an M/G/1 queueing system with Poisson(0.5) arrivals and Uniform[0,1] service
(1). Handout on a non-Markovian example constructed
by lumping 2 of the states in a 3-state Homogeneous Markov Chain.
(2). Handout of 5 pages from Durrett's "Essentials of Stochastic Processes" book
containing proofs of equivalent conditions to positive recurrence
for aperiodic irreducible HMC's.
(3). Handout clarifying two loose ends from recent classes: (1) the
condition for existence of a stationary probability distribution for recurrent Birth-Death chains, and (2) the
notational verification, as in Durrett, that the "cycle trick" provides a stationary distribution for an
irreducible positive-recurrent HMC.
(4). Handout on existence of invariant vectors for countable-state irreducible
transient Homogeneous Markov Chains.
(5). Handout on the uniqueness of solutions of the Kolmogorov forward and backward
equations when the rates q_i of leaving states are uniformly bounded.
(6). A Sample Test containing short-answer problems like the ones to appear on the April 2
in-class test can be found here. Note that this sample test does
not have Poisson process problems, although they are in scope for the Spring 2018 mid-term.
(7). Handout on the main limit theorems of renewal theory,
taken from 2 pages of the Grimmett and Stirzaker book, "Probability and Random Processes".
Mid-Term Test
A mid-term test will be given in-class on Monday, April 2, 2018. It will be easier
than the HW problems, consisting of short-answer questions emphasizing definitions and
the application of the results of theorems. Sample problems can be found here.
On the test, you will not be permitted to use your books, but you are allowed
to bring up to two notebook sheets of notes for reference.
Sample problems and information about specific coverage of the test can be
found here.
Sample Final Exam
You can find sample final-exam problems here. By all
means try them and ask questions about them, either in class or in
the Review Session to be held Friday, May 11, in our classroom from 4:30-6pm.
The Final Examination will be held from 4 to 6pm on Wednesday, May 16, 2018 in the
regular classroom. It will have 5 or 6 problems. The coverage is cumulative of
all material from the Durrett book assigned and covered in class throughout the
semester. Also, you will be allowed 2 2-sided notebook sheets of reference material
for the exam. But the exam will otherwise be closed-book.
There will be a review session on Friday, May 11, 4:30-6pm, for the Final Exam, in
the regular classroom.
© Eric V Slud, May 11, 2018.
Probability theory material needed throughout this course includes joint probability laws,
probability mass functions and densities, conditional expectations, moment generating functions,
and an understanding of the various kinds of probabilistic convergence, including the Law of
Large Numbers.
Various technical tools developed in the course and used in an essential way include: Laplace
transforms and moment generating functions, methods of solving recursions and systems of
difference equations, ergodic and renewal theorems for Markov chains, and (discrete-time)
martingales.
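As one small illustration of the "systems of difference equations" item above, here is a Python sketch (illustrative only, with a made-up random-walk example; not part of the course requirements) that solves a hitting-probability difference equation numerically:

```python
import numpy as np

# Hitting ("ruin") probabilities h(i) = P(walk started at i reaches N before 0)
# for a simple random walk on {0,...,N} satisfy the difference equation
#   h(i) = p*h(i+1) + (1-p)*h(i-1),   h(0) = 0,  h(N) = 1,
# which we can write as a linear system A h = b and solve numerically.
def hitting_probabilities(N=10, p=0.5):
    A = np.zeros((N + 1, N + 1))
    b = np.zeros(N + 1)
    A[0, 0] = 1.0               # boundary condition h(0) = 0
    A[N, N] = 1.0               # boundary condition h(N) = 1
    b[N] = 1.0
    for i in range(1, N):       # interior difference equation
        A[i, i] = 1.0
        A[i, i + 1] = -p
        A[i, i - 1] = -(1 - p)
    return np.linalg.solve(A, b)

h = hitting_probabilities()
```

For p = 1/2 the exact solution is h(i) = i/N, which the numerical answer reproduces.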
Homework
Homeworks are to be handed in during class on the indicated due-dates. Although hard-copies are
preferable, I will accept electronically submitted papers. Often I will post solutions
or sketches of solutions on or shortly after the due date. Late papers
will lose at least 15% credit (the more so, the later they are).
Assignments, including any changes and hints, will continually be posted here, and old HW
assignments and (partial) solutions will be posted here.
Read the Review of Probability Chapter plus Sections 1.1 to 1.3.
Then solve and hand in the following problems (8 in all):
(1) Which of the following two functions F(x,y) can possibly be the joint
distribution function P(X ≤ x, Y ≤ y) of a pair of jointly distributed
random variables X, Y? For each part, if F is a proper joint d.f.,
find the marginal density functions for X and Y.
(i) F(x,y) = 1 − e^{−x−y} if x,y ≥ 0, and = 0 otherwise.
(ii) F(x,y) = 1 − e^{−x} − x e^{−y} if 0 ≤ x ≤ y,
= 1 − e^{−y} − y e^{−y} if 0 ≤ y ≤ x,
and = 0 otherwise.
(2) Show that it is not possible to load [assign unequal probabilities to the possible outcomes
1,...,6 for each of] two dice in such a way that when they are tossed independently, the total
number of dots showing is equally likely to be any of the integers 2, 3, ..., 12.
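As a quick numerical illustration related to problem (2) (this computes the exact distribution for fair dice only; it is not the requested impossibility proof):

```python
from itertools import product
from fractions import Fraction

# Exact distribution of the total shown by two fair dice:
totals = {}
for a, b in product(range(1, 7), repeat=2):
    totals[a + b] = totals.get(a + b, Fraction(0)) + Fraction(1, 36)

# The 11 totals 2,...,12 are far from equally likely for fair dice;
# problem (2) asks you to show that NO loading can make them equally likely.
uniform = all(v == Fraction(1, 11) for v in totals.values())
```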
(3) In the game of craps when the "shooter's point is 4", his objective is to throw a sum of 4 dots showing
with two [independent, fair] dice before a sum of 7 dots appears, in repeated tosses of the two dice. Calculate in
three different ways the probability x that he succeeds:
(i) Consider the outcome of the
first roll of the dice, to conclude that x = 3/36 + (27/36) x and solve.
(ii) Let Ak be the event that the sum is k and justify why
x = P(A4) / P( A4 ∪ A7).
(iii) Calculate directly
and then sum up over k the probabilities that 4 occurs before 7 and this happens on the
k'th trial.
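A Monte Carlo sanity check for problem (3) (illustrative; the seed and trial count are my own choices, not part of the assignment):

```python
import random

# Estimate P(sum of 4 appears before sum of 7) by simulation, and compare
# with the algebraic solution of the first-step equation x = 3/36 + (27/36)x
# quoted in part (i).
random.seed(650)

def four_before_seven():
    while True:
        s = random.randint(1, 6) + random.randint(1, 6)
        if s == 4:
            return True
        if s == 7:
            return False

n = 100_000
estimate = sum(four_before_seven() for _ in range(n)) / n
x_solved = (3 / 36) / (1 - 27 / 36)   # solving the first-step equation for x
```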
(4) For two independent random variables X ~ Uniform[0,4] and Y ~ Uniform[2,5],
find (a) E(Y-X | Y > X) , and (b) the joint
probability distribution of (min(X,Y), Y - min(X,Y)). Note that the second
component random variable in this pair has a probability atom at 0, i.e. a positive
probability of being equal to 0.
plus 4 more problems about Markov chain basics:
Lefebvre book
#12 p.145 and #13 p.151,
and Durrett (online Beta 2nd ed. version), #1.7 and 1.8(c),(d) pp. 87-88.
Read Chapter 1 Sections 1.3 to 1.5.
Then solve and hand in the following problems (7 in all):
(1) Prove the next-to-last statement in the proof of (1.2) [mistakenly cited as "(4.1)"] on
page 16, namely:
prove that for all i, j, k ∈ S and all m, n ≥ 1,
P(X_{m+n} = j | X_m = k, X_0 = i) = P(X_{m+n} = j | X_m = k).
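The identity in problem (1) underlies the Chapman-Kolmogorov equation P^{m+n} = P^m P^n, which can be checked numerically for any concrete transition matrix (the 3-state matrix below is made up for illustration):

```python
import numpy as np

# Chapman-Kolmogorov check: for any transition matrix P and m, n >= 1,
# the (m+n)-step transition matrix factors as P^(m+n) = P^m P^n.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

m, n = 2, 3
lhs = np.linalg.matrix_power(P, m + n)
rhs = np.linalg.matrix_power(P, m) @ np.linalg.matrix_power(P, n)
# lhs and rhs agree entrywise, and each row still sums to 1.
```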
(2) Suppose that (X_k, k = 0,1,2,...) is a temporally homogeneous Markov
chain on an irreducible state-space S with a finite number K
of states and with K × K transition matrix Q with entries
Q_{ij} = p(i,j). Prove that all entries of Q^n for
n ≥ K are strictly positive under the additional assumption that
Q_{ii} > 0 for all i ∈ S, but not (in general) without that
assumption.
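A numerical illustration of problem (2) (the 4-state and 2-state matrices below are hypothetical examples, not a proof):

```python
import numpy as np

# Q: an irreducible 4-state chain (a cycle) WITH positive diagonal entries;
# its powers become strictly positive by n = K = 4.
Q = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.0, 0.0, 0.5]])
Q4 = np.linalg.matrix_power(Q, 4)

# R: a periodic 2-state chain WITHOUT positive diagonal entries; its powers
# alternate between R and the identity, so no power is strictly positive.
R = np.array([[0.0, 1.0],
              [1.0, 0.0]])
R5 = np.linalg.matrix_power(R, 5)
```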
(3) Suppose that (X_k, k = 0,1,2,...) is a temporally homogeneous Markov
chain on state-space S equal to the nonnegative integers. Suppose that
p(i,j) = 0 whenever j > i+20, that p(i,i+1) > 0 for all i,
and that for all n ≥ 10, p(n,0) ≥ 1/n^{1/2}.
Prove from these assumptions that
(a) for all initial states i, Pr(min{k ≥ 1 : X_k ∈ {0,1,...,9}} < ∞ | X_0 = i) = 1, and that therefore
(b) the Markov chain is irreducible and recurrent.
plus problems # 1.13 and 1.15 on pp.92-93 and 1.47 on p.98 of Durrett,
and Question #20 on pp.147-148 of Lefebvre.
Read the rest of Chapter 1 of Durrett.
Then solve and hand in the following problems (7 in all):
(1) (a) Prove that the Branching Process Markov Chain on {0,1,2,...} with offspring distribution p(k) = 1/4
for k=1,2 and p(0) = 1/2 has all states transient other than {0} which is the
only irreducible closed class. (b) Prove that if the transitions from 0 in the chain
in (a) are modified so that p(0,1) = .2 and p(0,0) = 0.8, then the chain
becomes irreducible and positively recurrent. If you can, find an upper bound on
E_0(T_0).
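A simulation sketch for the chain in problem (1)(a) (illustrative only; do not substitute it for the proof):

```python
import random

# The branching chain of (1)(a) has offspring distribution p(0) = 1/2,
# p(1) = p(2) = 1/4, so the mean offspring number is 3/4 < 1 (subcritical)
# and the population should hit the absorbing state 0 with probability 1.
random.seed(2018)

def offspring():
    u = random.random()
    if u < 0.5:
        return 0
    return 1 if u < 0.75 else 2

def extinct_by(generations=200, x0=1):
    x = x0
    for _ in range(generations):
        if x == 0:
            return True
        x = sum(offspring() for _ in range(x))
    return x == 0

trials = 2000
frac_extinct = sum(extinct_by() for _ in range(trials)) / trials
# frac_extinct should be essentially 1.
```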
(2) Recall the discussion early in the course about the (random) time T(1,2,3) to
first see 1,2,3 appear successively in an iid sequence of uniform{0,1,...,9} random
digits. (a) Formulate the sequence of successive triplets seen as a Markov Chain on
{0,1,...,9}^3, and show that this chain is
irreducible aperiodic and not reversible. (b) Find E(T(1,2,3)). I do not specify an
initial state because I have in mind the expected waiting time from the beginning of generating the
iid sequence of random digits. In the context of the Markov Chain that you define in part (a), you
can take as your initial state any triplet not ending in 1, 2, or 3.
plus problems # 1.52 and 1.54 on pp.99-100, 1.63 and 1.67 on pp.102-103, and 1.75 on p.105
of Durrett.
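For problem (2) above, a Monte Carlo estimate of E(T(1,2,3)) can serve as a check on your Markov-chain calculation (the simulation details and seed are my own choices):

```python
import random

# Generate iid uniform {0,...,9} digits and record the first time the last
# three digits read 1,2,3; average this waiting time over many trials.
random.seed(650)

def waiting_time(pattern=(1, 2, 3)):
    last3 = (-1, -1, -1)     # sentinel values, never matching the pattern
    t = 0
    while True:
        t += 1
        last3 = (last3[1], last3[2], random.randint(0, 9))
        if last3 == pattern:
            return t

trials = 2000
avg = sum(waiting_time() for _ in range(trials)) / trials
# Compare avg with the exact expectation from your part-(b) calculation.
```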
Read Chapter 2 of Durrett. Begin Chapter 4 by reading all of Section 4.1.
After reading Chapter 2, solve and hand in the following problems (8 in all, worth 9 points each):
problems # 2.14, 2.17, 2.24, 2.35, 2.38, 2.43, 2.48, 2.56 on pp.129-136 of Durrett.
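Many of the Chapter 2 problems involve thinning and superposition of Poisson processes. The following sketch (with arbitrarily chosen rates and horizon) shows the standard way to simulate a Poisson process from Exponential interarrival gaps and then thin it:

```python
import random

# Simulate a rate-lam Poisson process on [0, T] from iid Exponential(lam)
# interarrival gaps, then thin each arrival independently with probability p.
# By the thinning theorem the retained stream is Poisson with rate p*lam.
random.seed(650)

def poisson_arrivals(lam, T):
    times, t = [], 0.0
    while True:
        t += random.expovariate(lam)
        if t > T:
            return times
        times.append(t)

lam, p, T = 2.0, 0.25, 10_000.0
arrivals = poisson_arrivals(lam, T)
thinned = [s for s in arrivals if random.random() < p]
rate_estimate = len(thinned) / T     # should be close to p*lam = 0.5
```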
Read Chapter 4 of Durrett through Section 4.3.
Solve and hand in the following problems (6 in all, worth at total of 65 points):
problems # 4.3, 4.8, 4.22, 4.28, 4.29 of Durrett each worth 10 points, plus 1 additional problem worth
15 points:
states S = {1,2,3} and transition rates q_{ab} = b
for all a,b ∈ S.
(a) Find E_b(T_b).
(b) Find the limit for large t of P_1(X(t) = b).
(c) Find the limit for large integer n of E_1(τ_n/n), where
τ_0 = 0 and, for k ≥ 0, τ_{k+1} = inf{t > τ_k : X(t) ≠ X(τ_k)}.
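Continuous-time chains like the one in the extra problem can be simulated from the rate matrix via the embedded jump chain and Exponential holding times. Here is a generic sketch (the 3-state rates below are made up for illustration, not the q_{ab} = b chain of the problem):

```python
import random

# Hold in state i an Exponential(q_i) time, where q_i is the total rate out
# of i, then jump to j with probability q_ij / q_i (the embedded chain).
# The long-run occupation fractions estimate the stationary distribution.
random.seed(650)

rates = {                    # rates[i][j] = q_ij for i != j (made-up values)
    0: {1: 1.0, 2: 2.0},
    1: {0: 3.0, 2: 1.0},
    2: {0: 1.0, 1: 1.0},
}

def simulate(x0=0, T=10_000.0):
    x, t = x0, 0.0
    occupation = {s: 0.0 for s in rates}
    while t < T:
        out = rates[x]
        q_i = sum(out.values())                     # total exit rate from x
        hold = min(random.expovariate(q_i), T - t)  # truncate at horizon T
        occupation[x] += hold
        t += hold
        u, acc = random.random() * q_i, 0.0         # embedded-chain jump
        for j, r in out.items():
            acc += r
            if u <= acc:
                x = j
                break
    return {s: occ / T for s, occ in occupation.items()}

pi_hat = simulate()   # approximate stationary probabilities
```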
Read the rest of Chapter 4 of Durrett, and also Sections 3.1 and 3.2.
Solve and hand in the following problems (7 in all):
problems # 4.24, 4.32, 3.9, and 3.13 of Durrett, plus 3 additional problems:
For clarity in these problems, assume that all Markov chains
are irreducible and in (A) interpret "positive-recurrent" to mean that the expected
time to re-enter any specified state is finite.
if its discrete-time embedded chain X(τ_k) is positive-recurrent.
S = {0,1,2,...}, with rates q_{k,0} = a_k,
q_{k,k+1} = b_k and all other rates q_{ij} = 0, of
a positive-recurrent continuous-time chain which has null-recurrent embedded discrete-time
chain X(τ_k).
Q which has entries 1 off the diagonal and -2 on the diagonal,
by solving the forward or
backward differential equation system explicitly.
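For problem (C), the explicit solution can be cross-checked numerically: the transition function of a finite-state chain with intensity matrix Q is P_t = exp(tQ), and since this Q is symmetric the matrix exponential is easy to compute by eigendecomposition (a numerical check only, not the requested derivation):

```python
import numpy as np

# This Q has entries 1 off the diagonal and -2 on the diagonal; it is
# symmetric, so exp(tQ) can be computed from Q = V diag(w) V^T.
Q = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 1.0,  1.0, -2.0]])

def transition_matrix(t):
    w, V = np.linalg.eigh(Q)                   # eigenvalues w, eigenvectors V
    return V @ np.diag(np.exp(t * w)) @ V.T    # exp(tQ)

Pt = transition_matrix(0.7)
# Rows of Pt are probability distributions, and the semigroup property
# P_{s+t} = P_s P_t holds for this construction.
```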
Read the rest of Chapter 3 of Durrett, and also Section 5.2.
Solve and hand in the following problems (7 in all):
problem # 3.18, 3.20 in Durrett [and for 3.20 derive a renewal equation for the desired probability
and use the Key Renewal Theorem], and problems 4, 14 and 19 in Serfozo's book pp. 157-160, plus the
following 2 extra problems:
Poisson(λ) process. Let M denote the number of jumps in
N(t) viewed as new queueing-system
arrivals over the course of the service-time Y.
(a) Find the probability generating function of the nonnegative-integer-valued random variable M,
and use it to find the mean and variance of M.
(b) Use the result in (a) together with what you know
about branching-process extinction
probabilities to give another proof that the M/G/1 queue has
probability 1 of emptying
out regardless of the length of its initial waiting-line when λ < μ,
using the idea of Example 1.51 covered in class.
recurrent HMC X(t) on a countable state-space S that for any states
a, b ∈ S, lim_{t → ∞} p_t(a,b) = 0.
Read the rest of Chapter 5 of Durrett, and also Ch. 6.
Solve and hand in the following problems (5 in all, worth a total of 75 points):
(nondegenerate) integer-valued random variables with mean
μ, where X_0 = 1. Show
that X_k/μ^k is a
martingale, and conclude that
X_k converges to 0 with probability 1 if μ ≤ 1. Hint:
use the fact
that X_k is both an HMC and converges to a finite-valued random
variable X with mean ≤ 1.
(a). Show that this HMC is irreducible and recurrent.
(b). If λ_k = μ_k for all k ≥ 1, and N_k(0) is the number of j ∈ {1,...,k} for
which X_j = 0, then show
E_0(X_n) = E_0(N_{n-1}(0)) + 1 for all n ≥ 1,
where N_0(0) is defined = 0.
(c). Use (b) or another method to develop a
formula for the limit for large n of E_0(N_{n+k}(0) − N_n(0)).
(d). Still under the assumption that λ_k = μ_k for k ≥ 1,
find P_1(X_n hits M before 0).
(positive and negative) integers, i.e. the set of vectors of integers
of length d, with each coordinate of X_{k+1} − X_k equal to ±1 with
probability 1/2, independently of the other coordinates.
Prove that X_n visits all
2^d orthants of S infinitely often, with probability 1, even though (it can be shown,
but you should not show it for this exercise) that for all d ≥ 3,
{X_n} is an irreducible transient HMC.
S = {0,1,2,...} with transitions p(k,0) ≥ (1/k) ∑_{j ≥ k} (j−k) p(k,j) for all k > 100 and p(x,x+1) > 0
whenever x ≥ 0, then show that the chain Y_n is irreducible and
recurrent.
This problem has been corrected (inserting the term j-k inside the sum
over j ≥ k) at a relatively late stage.
So if you have already spent time on the problem and
hand in a fully developed counterexample to
the incorrect older statement, you will get full
credit. Otherwise do the problem in its current version
times (in hours), with the additional protocol or rule of operation that
whenever there are at least 2 persons
waiting on line (in addition to the one currently in
service), the doors of the service facility close so that
no new arrivals can enter the system
until the system empties out (i.e. until no one is left waiting and the
last person's service
time is complete), at which time the doors re-open.
(a). Find the long-term proportion of the time during which the system is busy.
(b). Find the long-term proportion of arriving customers who are allowed to enter the system.
(c). Find the average number of hours that an arriving customer who enters the system spends
until that
person's service is completed.
(d). Suppose that the system's owners receive $10 from each customer who enters and completes
his service
in less than 1 hour and $5 from each customer who enters and completes his
service in > 1 hour. Derive
a renewal equation for the expected income of the service
facility during the first t hours of operation (for
real values of t ≥ 0).
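For intuition on part (a), here is a simulation of the standard (ungated) M/G/1 queue with the same arrival and service distributions; the door-closing rule of problem (E) will change the answers, so this sketch is only a baseline for comparison:

```python
import random

# Standard (ungated) M/G/1 queue: Poisson(0.5) arrivals, Uniform[0,1]
# service.  The long-run busy fraction should approach rho = 0.5 * 0.5 = 0.25.
# The seed and horizon are arbitrary; edge effects at time T are ignored.
random.seed(650)

def busy_fraction(lam=0.5, T=200_000.0):
    t, free_at, busy = 0.0, 0.0, 0.0
    while True:
        t += random.expovariate(lam)     # next Poisson arrival time
        if t > T:
            return busy / T
        start = max(t, free_at)          # service starts when server is free
        s = random.random()              # Uniform[0,1] service time (hours)
        free_at = start + s
        busy += s                        # server accumulates s busy hours

rho_hat = busy_fraction()
```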
Handouts
Final Examination
STAT 650 SYLLABUS / COURSE OUTLINE
0. Probability Review.
( Durrett Appendix A, Lefebvre, Chapter 1; Serfozo, Sec.1.22 & Ch.6.)
2 Lectures
(a) Probability spaces, countable additivity.
(b) Conditional expectation, change-of-variable and transformations of r.v.'s.
1. General Definitions and Properties of Stochastic Processes
( Lefebvre, Chapter 2.) 1 Lecture
(a) Spaces of trajectories (infinite sequences of states).
(b) Independent increments, stationarity and wide-sense stationarity, Markov
property, ergodicity
2. Discrete-time Discrete-State Markov Chains.
( Durrett Chapter 1) 5 Lectures
(a) Markov property. Examples of Markov & non-Markov random sequences.
(b) Multistep transition probabilities. Chapman-Kolmogorov equation.
(c) "First-step analysis" and branching processes
(d) Classification of states.
(e) Notions of limiting behavior. Reducibility. Recurrence. Steady state.
(f) Time reversibility, exit distributions, and other topics.
3. Poisson Processes.
( Durrett Chapter 2) 4 Lectures
(a) Equivalent Definitions of Poisson Process
(b) Nonhomogeneous Poisson, Compound Poisson, Superposition and Thinning
4. Renewal Processes.
( Durrett Chapter 3) 3 Lectures
(a) Laws of Large Numbers, Renewal Equation.
(b) Queueing theory examples. Asymptotic behavior of renewal reward processes.
5. Continuous time Markov Chains.
( Durrett Chapter 4) 4 Lectures
(a) Kolmogorov forward and backward equations. Birth-death processes.
(b) Embedded discrete chains.
(c) Limiting behavior and stationary distributions.
6. Martingales & Applications.
( Durrett Chapter 5) 2 Lectures
(a) Definitions and Optional Sampling Theorem.
(b) Expectation calculations in examples.
7. Miscellaneous Applications.
( Durrett Ch.6, Secs. 4.4-4.5.)
3 Lectures
(a) Brownian Motion.
(b) Finance and Black-Scholes Formula.
(c) Reversibility in Queueing & Markov Chain Monte Carlo.
Important Dates, Spring 2018
IN-CLASS FINAL EXAM: NOTE 4pm instead of 5pm.