
****

M. Stat. – 501
Advanced Statistical Inference
Full marks – 75
(Examination 60, Tutorial/Terminal 11.25, and Attendance 3.75)
Number of Lectures – Minimum 45
(Duration of Examination: 4 Hours)

****

Aim of the course
The aim of this course is to equip students with the theory and applications of statistical inference at an advanced level.
Objectives of the Course
 
This course is designed to aid the interpretation of data that are subject to appreciable haphazard variability and to provide a comprehensive statistical basis for the analysis of such data, excluding considerations specific to any particular subject matter.
 
This course will also give students a view of the nature of advanced statistical methods and nurture advanced statistical thinking.
Learning Outcomes
After completing this course, students will be able to
 
know how to attain complex inferential targets using general statistical inference,
 
handle challenging problems of statistical inference.

****

Course Contents
Sufficiency and Unbiasedness: Different types of statistical models: parametric, semi-parametric and non-parametric models. Group and exponential families of distributions. Sufficiency and minimal sufficiency, completeness, their relations and applications. Lehmann-Scheffé theorem, UMVU estimates, LMVU estimates, necessary and sufficient condition for UMVU estimation.
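As a brief illustration of the Lehmann-Scheffé/UMVU idea named above (not part of the prescribed contents), the following Python sketch compares, by simulation, the UMVU estimator of a Poisson mean (the sample mean, a function of the complete sufficient statistic) with another unbiased but inefficient estimator; the parameter value, sample size and replication count are illustrative assumptions.

# Simulation sketch: for a Poisson(lam) sample, the sample mean is a
# function of the complete sufficient statistic sum(X) and is UMVU for lam.
import numpy as np

rng = np.random.default_rng(0)
lam, n, reps = 3.0, 20, 10_000            # illustrative choices only
samples = rng.poisson(lam, size=(reps, n))

umvu = samples.mean(axis=1)               # Rao-Blackwell / Lehmann-Scheffe estimator
naive = samples[:, 0].astype(float)       # unbiased but inefficient: uses X1 only

print("means    :", umvu.mean(), naive.mean())   # both close to lam (unbiased)
print("variances:", umvu.var(), naive.var())     # UMVU variance is about lam/n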
Asymptotic Estimation: Different types of consistency and their relations. Asymptotic normality and asymptotic efficiency. Properties of U- and V-statistics, with examples and applications. Bootstrap bias and standard error.
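A minimal sketch of the bootstrap bias and standard-error estimates mentioned above, assuming a simulated sample and the sample median as the statistic of interest (both are illustrative choices, not requirements of the course):

# Nonparametric bootstrap estimates of bias and standard error for the median.
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=50)    # "observed" sample (simulated here)
theta_hat = np.median(x)

B = 2000                                   # number of bootstrap resamples
boot = np.array([np.median(rng.choice(x, size=x.size, replace=True))
                 for _ in range(B)])

bias_boot = boot.mean() - theta_hat        # bootstrap bias estimate
se_boot = boot.std(ddof=1)                 # bootstrap standard error
print(theta_hat, bias_boot, se_boot)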
Parametric Estimation: Bayes decisions and estimators. Invariance, equivariance and Pitman estimators. Minimaxity and admissibility. Loss-function optimality. Maximum, quasi-maximum and conditional likelihood methods of estimation. Asymptotically efficient estimators. Confidence regions. Fiducial and tolerance limits. Bayesian and bootstrap intervals. EM algorithm.
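A compact, illustrative EM sketch for a two-component univariate Gaussian mixture may help fix ideas; the data, starting values and number of iterations below are assumptions made purely for the example.

# EM algorithm for a two-component Gaussian mixture (illustrative only).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1.5, 100)])

pi, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])  # starting values
for _ in range(200):
    # E-step: posterior responsibility of component 1 for each observation
    d1 = pi * norm.pdf(x, mu[0], sd[0])
    d2 = (1 - pi) * norm.pdf(x, mu[1], sd[1])
    r = d1 / (d1 + d2)
    # M-step: weighted updates of the mixing proportion, means and sds
    pi = r.mean()
    mu = np.array([np.average(x, weights=r), np.average(x, weights=1 - r)])
    sd = np.sqrt([np.average((x - mu[0]) ** 2, weights=r),
                  np.average((x - mu[1]) ** 2, weights=1 - r)])

print(pi, mu, sd)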
Non-parametric and Robust Estimation: Distribution function estimators. Density estimators. Different concepts of robustness. Statistical functionals: differentiability and asymptotic normality of L-, M- and R-estimators. Robustness vs. efficiency. Variance estimation. Robust estimation of the multivariate location functional and scatter matrix.
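As an illustration of the density estimators listed above, a fixed-bandwidth Gaussian kernel density estimator can be sketched as follows; the bandwidth rule (Silverman's rule of thumb, cf. the main book list) and the evaluation grid are illustrative choices.

# Gaussian kernel density estimator evaluated on a grid.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0, 1, 200)                          # simulated sample

h = 1.06 * x.std(ddof=1) * x.size ** (-1 / 5)      # Silverman's rule of thumb
grid = np.linspace(x.min() - 1, x.max() + 1, 400)

# f_hat(t) = (1 / (n*h)) * sum_i K((t - x_i) / h), with standard normal kernel K
z = (grid[:, None] - x[None, :]) / h
f_hat = np.exp(-0.5 * z ** 2).sum(axis=1) / (np.sqrt(2 * np.pi) * x.size * h)
print(f_hat[:5])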
Composite Hypotheses: Review of simple hypotheses and test criteria, generalized Neyman-Pearson lemma. Locally UMPU tests. Similar regions and Neyman structure. Sufficient statistics and similar region (SR) tests. MPSR and UMPSR tests. Asymptotic efficiency of a test.
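A simulation sketch of the Neyman-Pearson most powerful test for a simple-vs-simple normal-mean problem; the sample size, significance level and alternative used here are illustrative assumptions.

# Most powerful test of H0: mu = 0 vs H1: mu = 1 for N(mu, 1) data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n, alpha, reps = 25, 0.05, 20_000

# The likelihood ratio is monotone in the sample mean, so the MP test rejects
# when sqrt(n) * xbar exceeds the upper-alpha standard normal quantile.
crit = norm.ppf(1 - alpha) / np.sqrt(n)

xbar0 = rng.normal(0, 1, (reps, n)).mean(axis=1)   # data generated under H0
xbar1 = rng.normal(1, 1, (reps, n)).mean(axis=1)   # data generated under H1
print("size :", (xbar0 > crit).mean())             # should be close to alpha
print("power:", (xbar1 > crit).mean())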
Sequential Tests: Review of the SPRT, OC and ASN functions. SPRT for three hypotheses. Sobel and Wald test. Armitage method for composite hypotheses. Wald's theory of weight functions. Cox's theorem. Sequential t-test, sequential χ² test, asymptotic sequential t-test, sequential analysis of variance, sequential multivariate analysis.
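A short sketch of Wald's SPRT for two simple Bernoulli hypotheses; the error probabilities and the true success probability used to generate the data are illustrative assumptions.

# Wald's sequential probability ratio test for Bernoulli data.
import numpy as np

def sprt_bernoulli(data, p0=0.4, p1=0.6, alpha=0.05, beta=0.05):
    # Wald boundaries on the log-likelihood-ratio scale
    a, b = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    llr = 0.0
    for n, x in enumerate(data, start=1):
        llr += x * np.log(p1 / p0) + (1 - x) * np.log((1 - p1) / (1 - p0))
        if llr <= a:
            return "accept H0", n
        if llr >= b:
            return "accept H1", n
    return "no decision", len(data)

rng = np.random.default_rng(5)
print(sprt_bernoulli(rng.binomial(1, 0.6, 500)))   # data generated under H1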
Non-Parametric Tests: Introduction. ARE and robustness of a non-parametric test. McNemar test in 2×2 contingency analyses. Cox and Stuart test for trend. Cramér's contingency coefficient. Cochran test for related observations. ARE of the Mann-Whitney and sign tests. Kruskal-Wallis test and the CRS design. Squared ranks test for variances. Quantile test. Friedman test. Kolmogorov one-sample and two-sample tests.
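For a quick computational illustration of two of the procedures listed above, SciPy's implementations of the Mann-Whitney and Kruskal-Wallis tests can be applied to simulated groups; the group sizes and distributions are illustrative assumptions.

# Mann-Whitney U test (two samples) and Kruskal-Wallis H test (three samples).
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(6)
g1 = rng.normal(0.0, 1, 30)
g2 = rng.normal(0.5, 1, 30)
g3 = rng.normal(1.0, 1, 30)

print(mannwhitneyu(g1, g2, alternative="two-sided"))   # Mann-Whitney U test
print(kruskal(g1, g2, g3))                             # Kruskal-Wallis H test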

****

Main Books:
1) Govindarajulu, Z. (2004): Sequential Statistics, World Scientific Publishing Co., London.
2) Efron, B. & Tibshirani, R.J. (1993): An Introduction to the Bootstrap, Chapman & Hall, N.Y.
3) Casella, G. & Berger, R.L. (2002): Statistical Inference, 2nd Ed., Thomson Learning Asia and China Machine Press.
4) Lehmann, E.L. (1986): Testing Statistical Hypotheses, 2nd Ed., Wiley, N.Y.
5) Lehmann, E.L. (1989): Theory of Point Estimation, 2nd Ed., Wiley, N.Y.
6) Rohatgi, V.K. & Ehsanes Saleh, A.K.M. (2001): An Introduction to Probability and Statistics, John Wiley and Sons, N.Y.
7) Silverman, B.W. (1986): Density Estimation for Statistics and Data Analysis, Chapman & Hall, London.
8) Zacks, S. (1971): Theory of Statistical Inference, Wiley, N.Y.
Books Recommended: 
9) Ashraf Ali, M. (1974): Theory of Statistics, Nilkhet, Dhaka.
10) Barnett, V. (1982): Comparative Statistical Inference, 2nd Ed., Wiley, N.Y.
11) Efron, B. (1984): The Jackknife, the Bootstrap and Other Resampling Plans.
12) Fraser, D.A.S. (1985): Structure of Inference, Chapman and Hall, N.Y.
13) Gibbons, J.D. & Chakraborti, S. (1992): Non-Parametric Statistical Inference, Marcel Dekker, N.Y.
14) Kalbfleisch, J.G.: Probability and Statistical Inference, Vol. 2, Springer-Verlag, N.Y.
15) Noreen, E.W. (1982): Computer-Intensive Methods for Testing Hypotheses.
16) Schervish, M.J. (1995): Theory of Statistics, Springer-Verlag, N.Y.
17) Shao, J. & Tu, D. (2000): The Jackknife and Bootstrap, Springer-Verlag, N.Y.
18) Wetherill, G.B. & Glazebrook, K.D. (1975): Sequential Methods in Statistics, 3rd Ed., Chapman and Hall, London.