Lecture Topics by date for STAT 700 from Bickel & Doksum, Fall 2022
======================================================

LECTURE 1. (8/29/22) General overview & introduction: probability models. Statistics vs Probability; large-sample asymptotics vs finite-sample optimality properties. Basic framework of statistics: data structure, family of probability models, statistic (= function of data). Identifiability and consistent estimation.
LECTURE 2. (8/31/22) Functions of single random variables (probability integral transform). Basic idea of stochastic simulation. (Khintchine) Weak Law of Large Numbers. Distinction between identifiability and consistent estimation of parameters. Glivenko-Cantelli Theorem to identify the distribution function. (An illustrative simulation sketch follows the lecture list.)
LECTURE 3. (9/2/22) More on identifiability; examples. Independent identically distributed data. Identifiability based on moments. Nonidentifiability as redundant parameterization, as in one-way ANOVA with both an overall mean and blockwise fixed effects.
LECTURE 4. (9/7/22) Notions of convergence for random variables and probability distributions. Consistency (LLN) and limiting distributions (CLT). More on identifiability of parameters in statistical problems. Examples with unknown functions as parameters.
LECTURE 5. (9/9/22) Discussion of identifiability of normal (2-component) mixtures. Review of the Continuous Mapping and Slutsky Theorems.
LECTURE 6. (9/12/22) Conclusion (with handout) of identifiability of normal mixtures. Further explanation of the Continuous Mapping and Slutsky Theorems, by way of the Hammersley and Skorohod embedding theorems.
LECTURE 7. (9/14/22) Mixtures as a hierarchical model. Intro to Bayesian analysis of statistical models. Examples of posteriors and posterior-mean estimates. Definition of conjugate priors.
LECTURE 8. (9/16/22) More on Bayesian thinking about parameters. Intro to Bayesian and non-Bayesian decision theory. Conjugate priors. (A Beta-Binomial sketch follows the lecture list.)
LECTURE 9. (9/19/22) Change of variables to verify that the Dirichlet distribution arises from Gamma observations with a common scale. Verification of the Dirichlet as the conjugate prior family for multinomial data.
LECTURE 10. (9/21/22) Decision theory as game theory: definitions of pure and mixed (randomized) strategies, the risk function, Bayes and minimax rules, and admissibility. Decision theory examples: estimation and hypothesis testing.
LECTURE 11. (9/23/22) Examples of calculating Bayes and minimax estimates in the setting of affine functions of Xbar for Normal(mu,1) data with parameter space |mu| <= 1 and squared-error loss.
LECTURE 12. (9/26/22) [Covered by Xiaoyu Zhou] Explanation of the connection between admissible (randomized) statistical decision rules and lower/inner boundary points of the feasible risk region. Convex geometry and the supporting hyperplane theorem explain, in this setting (finitely many states of nature), why admissible procedures are Bayes.
LECTURE 13. (9/28/22) Optimal prediction (of an outcome Y in terms of data X), with mean-squared and mean absolute error loss functions as examples.
LECTURE 14. (9/30/22) Prediction, continued. Convex loss implies that nonrandomized predictions suffice. Connection with Bayesian priors and summary statistics of the posterior distribution. Relation to the multivariate normal and regression.
LECTURE 15. (10/3/22) Sufficiency: definitions, the factorization theorem, and examples.
LECTURE 16. (10/5/22) More on sufficiency. Example of order statistics in iid samples. Minimal sufficiency, with an example of verification.
LECTURE 17. (10/7/22) Minimal sufficiency, further examples: rank statistics, and the modification when densities are assumed symmetric about 0.
LECTURE 18. (10/10/22) Completing the verification that randomized decision rules depending on sufficient statistics form a complete class.
LECTURE 19. (10/12/22) Rao-Blackwell Theorem, the complete class under squared-error loss, and an example with Unif[0,theta] sample data.
LECTURE 20. (10/14/22) Definition of completeness, UMVUEs, and examples.
LECTURE 21. (10/17/22) Exponential families: formal introduction; definitions of the natural parameter space, means, variances, MLEs.
LECTURE 22. (10/19/22) The exponential-family property is preserved under iid sampling. Distribution of the sufficient statistic, including the exponential-family property for it.
LECTURE 23. (10/21/22) Dominated convergence theorem used to differentiate under the integral sign, establishing the relation of the derivatives of the A(.) function to the mean and variance of the sufficient statistic.
LECTURE 24. (10/24/22) Convexity of the natural parameter space in an exponential family. Convexity of A(.); log-concavity of the likelihood.
LECTURE 25. (10/26/22) Definition of curved exponential families. Example where the sufficient statistic is not complete. Conjugate priors for exponential families.
LECTURE 26. (10/28/22) Verifying that conjugate priors are proper. Rank of a canonical exponential family: equivalent forms, including the 1-to-1 mapping eta <-> E(T(X)).
(10/31/22 Review Session; 11/2/22 In-Class Test)
LECTURE 27. (11/4/22) Maximum Likelihood Estimation (from Sec. 2.2.2) and Generalized Method of Moments estimation (from Example 2.1.2 on p. 101).
LECTURE 28. (11/7/22) Numerical maximization, the Newton-Raphson algorithm (Sec. 2.4.2-3), and maximum likelihood asymptotics (heuristic large-sample theory). (A Newton-Raphson sketch follows the lecture list.)
LECTURE 29. (11/9/22) Maximum likelihood in canonical multiparameter exponential families (Sec. 2.3).
LECTURE 30. (11/11/22) Information (= Cramer-Rao) Inequality, Sec. 3.4.2.
LECTURE 31. (11/14/22) Relation between Cramer-Rao Inequality minimizers and exponential families.
LECTURE 32. (11/16/22) Characterizing statistical models that attain the Information Inequality bound exactly in finite samples.
LECTURE 33. (11/18/22) Introduction to hypothesis testing. Definitions and the Neyman-Pearson formulation with Type I and Type II error.
LECTURE 34. (11/21/22) More on the Neyman-Pearson formulation. Simple and composite hypotheses. Definition of power. Bayesian decision-theoretic formulation. Distinction between significance level and size.
LECTURE 35. (11/28/22) Neyman-Pearson Lemma, statement. Neyman-Pearson-type tests of simple versus simple hypotheses. Auxiliary randomization to attain exact size. (A most-powerful-test sketch follows the lecture list.)
LECTURE 36. (11/30/22) Conclusion of the proof of the necessary and sufficient condition for a test of simple versus simple hypotheses to be most powerful.
LECTURE 37. (12/2/22) Most powerful tests for a simple null versus composite one-sided alternatives.
LECTURE 38. (12/5/22) Notion of a Monotone Likelihood Ratio family of statistical models. Karlin-Rubin Theorem giving UMP one-sided tests of a composite null versus a composite alternative.
LECTURE 39. (12/7/22) Conclusion of the proof of the Karlin-Rubin Theorem; monotonicity of the power function. P-values.
LECTURE 40. (12/9/22) Further hypothesis testing examples.
The final lecture on 12/12 and the Review Session on 12/14 were review for the Final Exam.
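
Illustrative sketch for Lecture 2 (not part of the course materials): a minimal Python example of the probability integral transform used for stochastic simulation, assuming NumPy is available. The function name sample_exponential is chosen here only for illustration; the Exponential distribution is used because its inverse CDF has a closed form, and the printed sample mean illustrates the (Khintchine) Weak Law of Large Numbers.

    import numpy as np

    # Probability integral transform: if U ~ Uniform(0,1) and F is a continuous CDF,
    # then F^{-1}(U) has distribution F.  Here F(x) = 1 - exp(-lam*x).
    def sample_exponential(lam, size, seed=None):
        rng = np.random.default_rng(seed)
        u = rng.uniform(size=size)        # U ~ Uniform(0,1)
        return -np.log(1.0 - u) / lam     # F^{-1}(U) ~ Exponential(rate = lam)

    x = sample_exponential(lam=2.0, size=100_000, seed=0)
    print(x.mean())                       # close to 1/lam = 0.5, by the Weak Law of Large Numbers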
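
Illustrative sketch for Lectures 7-8 (not course material): a minimal Python example, assuming NumPy, of a conjugate-prior update. With Bernoulli data and a Beta(a, b) prior on the success probability, the posterior is Beta(a + successes, b + failures), and the posterior mean is a Bayes estimate under squared-error loss. The helper name beta_binomial_posterior is hypothetical.

    import numpy as np

    # Conjugate-prior update for Bernoulli/Binomial data with a Beta(a, b) prior on theta.
    def beta_binomial_posterior(successes, failures, a=1.0, b=1.0):
        a_post, b_post = a + successes, b + failures
        post_mean = a_post / (a_post + b_post)   # Bayes estimate under squared-error loss
        return a_post, b_post, post_mean

    rng = np.random.default_rng(0)
    x = rng.binomial(1, 0.3, size=50)            # 50 Bernoulli(theta = 0.3) observations
    print(beta_binomial_posterior(x.sum(), len(x) - x.sum()))
    # The posterior mean shrinks the sample proportion toward the prior mean a/(a+b).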
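
Illustrative sketch for Lecture 28 (not course material): a minimal Python Newton-Raphson iteration, assuming NumPy and SciPy, for the MLE of the shape parameter of a Gamma(shape, scale=1) sample. The name gamma_shape_mle is hypothetical; the score and observed information come from the log-likelihood l(a) = (a-1)*sum(log x) - n*log Gamma(a) - sum(x).

    import numpy as np
    from scipy.special import digamma, polygamma

    # Newton-Raphson for the Gamma shape MLE (scale fixed at 1):
    #   score        l'(a)  = sum(log x) - n * digamma(a)
    #   information -l''(a) = n * polygamma(1, a)   (trigamma, always positive)
    def gamma_shape_mle(x, a0=1.0, tol=1e-10, max_iter=50):
        n, s = len(x), np.sum(np.log(x))
        a = a0
        for _ in range(max_iter):
            step = (s - n * digamma(a)) / (n * polygamma(1, a))
            a += step                     # Newton update: a_new = a + l'(a) / (-l''(a))
            if abs(step) < tol:
                break
        return a

    rng = np.random.default_rng(1)
    x = rng.gamma(shape=3.0, scale=1.0, size=2000)
    print(gamma_shape_mle(x))             # should be close to the true shape 3.0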
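
Illustrative sketch for Lectures 35-36 (not course material): a minimal Python example, assuming NumPy and SciPy, of the Neyman-Pearson most powerful test of a simple null versus a simple alternative for a Normal(mu, 1) mean. The likelihood ratio is increasing in Xbar, so the level-alpha most powerful test rejects when Xbar exceeds a cutoff; no auxiliary randomization is needed here because the test statistic is continuous. The helper name np_test_normal_mean is hypothetical.

    import numpy as np
    from scipy.stats import norm

    # Most powerful level-alpha test of H0: N(mu0, 1) versus H1: N(mu1, 1), mu1 > mu0,
    # based on n iid observations; rejects when Xbar exceeds a cutoff.
    def np_test_normal_mean(mu0, mu1, n, alpha=0.05):
        se = 1.0 / np.sqrt(n)                        # standard deviation of Xbar
        cutoff = mu0 + norm.ppf(1 - alpha) * se      # P_{mu0}(Xbar > cutoff) = alpha (size)
        power = 1 - norm.cdf((cutoff - mu1) / se)    # P_{mu1}(Xbar > cutoff)
        return cutoff, power

    print(np_test_normal_mean(mu0=0.0, mu1=0.5, n=25))   # cutoff ~ 0.33, power ~ 0.80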