000 05346nam a22005175i 4500
001 978-1-84628-195-2
003 DE-He213
005 20161121230931.0
007 cr nn 008mamaa
008 100301s2005 xxk| s |||| 0|eng d
020 _a9781846281952
_9978-1-84628-195-2
024 7 _a10.1007/1-84628-195-4
_2doi
050 4 _aQA276-280
072 7 _aPBT
_2bicssc
072 7 _aMAT029000
_2bisacsh
082 0 4 _a519.5
_223
100 1 _aLongford, Nicholas T.
_eauthor.
245 1 0 _aMissing Data and Small-Area Estimation
_h[electronic resource] :
_bModern Analytical Equipment for the Survey Statistician /
_cby Nicholas T. Longford.
264 1 _aLondon :
_bSpringer London,
_c2005.
300 _aXVI, 360 p. 45 illus.
_bonline resource.
336 _atext
_btxt
_2rdacontent
337 _acomputer
_bc
_2rdamedia
338 _aonline resource
_bcr
_2rdacarrier
347 _atext file
_bPDF
_2rda
490 1 _aStatistics for Social Science and Public Policy
505 0 _aMissing data -- Prologue -- Describing incompleteness -- Single imputation and related methods -- Multiple imputation -- Case studies -- Small-area estimation -- Models for small areas -- Using auxiliary information -- Using small-area estimators -- Case studies -- Combining estimators -- Model selection.
520 _aThis book develops methods for two key problems in the analysis of large-scale surveys: dealing with incomplete data and making inferences about sparsely represented subdomains. The presentation is committed to two particular methods, multiple imputation for missing data and multivariate composition for small-area estimation. The methods are presented as developments of established approaches by attending to their deficiencies. Thus the change to more efficient methods can be gradual, sensitive to the management priorities in large research organisations and multidisciplinary teams and to other reasons for inertia. The typical setting of each problem is addressed first, and then the constituency of the applications is widened to reinforce the view that the general method is essential for modern survey analysis. The general tone of the book is not "from theory to practice," but "from current practice to better practice." The third part of the book, a single chapter, presents a method for efficient estimation under model uncertainty. It is inspired by the solution for small-area estimation and is an example of "from good practice to better theory." A strength of the presentation is its chapters of case studies, one for each problem. Whenever possible, examples and illustrations are preferred to theoretical argument. The book is suitable for graduate students and researchers who are acquainted with the fundamentals of sampling theory and have a good grounding in statistical computing, or it can be read in conjunction with an intensive period of learning and establishing one's own modern computing and graphical environment that would serve the reader for most analytical work in the future.
While some analysts might regard data imperfections and deficiencies, such as nonresponse and limited sample size, as someone else's failure that bars effective and valid analysis, this book presents them as respectable analytical and inferential challenges: opportunities to harness computing power in the service of high-quality, socially relevant statistics. Overriding in this approach is the general principle: to do the best, for the consumer of statistical information, that can be done with what is available. The reputation of government statistics as a rigid, procedure-based and operation-centred activity, distant from the mainstream of statistical theory and practice, is refuted most resolutely. After leaving De Montfort University in 2004, where he was a Senior Research Fellow in Statistics, Nick Longford founded the statistical research and consulting company SNTL in Leicester, England. He was awarded the first Campion Fellowship (2000–02) for methodological research in United Kingdom government statistics. He has served as Associate Editor of the Journal of the Royal Statistical Society, Series A, and the Journal of Educational and Behavioral Statistics, and as an Editor of the Journal of Multivariate Analysis. He is a member of the Editorial Board of the British Journal of Mathematical and Statistical Psychology. He is the author of two other monographs, Random Coefficient Models (Oxford University Press, 1993) and Models for Uncertainty in Educational Testing (Springer-Verlag, 1995).
650 0 _aStatistics.
650 0 _aEpidemiology.
650 0 _aComputer simulation.
650 0 _aPsychometrics.
650 1 4 _aStatistics.
650 2 4 _aStatistical Theory and Methods.
650 2 4 _aStatistics for Social Science, Behavioral Science, Education, Public Policy, and Law.
650 2 4 _aSimulation and Modeling.
650 2 4 _aEpidemiology.
650 2 4 _aStatistics and Computing/Statistics Programs.
650 2 4 _aPsychometrics.
710 2 _aSpringerLink (Online service)
773 0 _tSpringer eBooks
776 0 8 _iPrinted edition:
_z9781852337605
830 0 _aStatistics for Social Science and Public Policy
856 4 0 _uhttp://dx.doi.org/10.1007/1-84628-195-4
912 _aZDB-2-SMA
950 _aMathematics and Statistics (Springer-11649)
999 _c506091
_d506091