Full Description
Sample surveys are a major source of statistical information in economics and finance. When policymakers, researchers, and analysts need targeted, data-based evidence, surveys can be deployed quickly and at relatively low cost, tailored to specific policy questions, and used to capture information that other data sources often miss, including subjective dimensions such as perceptions, expectations, values, and attitudes. From economics and finance to social and living conditions, surveys generate robust, decision-ready insights. And when based on representative samples, they yield accurate estimates for many finite populations, from households and individuals to businesses and financial institutions.
At the same time, the survey environment is changing rapidly. The growth of administrative and digital data sources, combined with survey fatigue and stronger concerns about confidentiality, has led to declining response rates and increasing operational difficulties, challenging the relevance of the survey instrument and raising questions about how surveys should be designed and used.
Even so, surveys are not obsolete; rather, they must evolve within a multi-source statistical ecosystem. For example, mixed-mode data collection can better match respondent preferences; greater use of administrative records can shorten questionnaires and improve precision; and modern data science tools, including AI, can support more efficient production and analysis.
This book provides a practical guide to survey work from design to analysis. It presents the theoretical foundations of probability-based sampling and inference, while emphasising the real constraints of survey operations: budget, time, staffing, and respondent burden. It is intended for readers who need both methodological grounding and workable approaches for producing accurate estimates in statistical practice.
Key Features:
- Promotes sample surveys as a core data source for research and decision-making in economics and finance.
- Serves as a practical handbook for academics and practitioners, covering both survey design and analysis methods.
- Illustrates key theoretical concepts with numerical applications using dedicated R packages for survey analysis.
- Explains how modern data science tools, including AI, can reshape survey design, implementation, and inference in the years ahead.
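To give a flavour of the estimators the book covers (the book's own numerical applications use dedicated R packages such as survey), here is a minimal, hypothetical Python sketch of the Horvitz-Thompson total estimator from Chapter 6; the function name and toy population are illustrative, not taken from the book:

```python
import random

def horvitz_thompson_total(y, pi):
    """Horvitz-Thompson estimate of a population total: each sampled
    value is weighted by the inverse of its inclusion probability."""
    return sum(yk / pik for yk, pik in zip(y, pi))

# Toy illustration under simple random sampling without replacement:
# every unit has the same inclusion probability n/N = 3/6 = 0.5.
population = [10, 20, 30, 40, 50, 60]  # true total: 210
random.seed(42)
sample = random.sample(population, 3)
estimate = horvitz_thompson_total(sample, [0.5] * 3)
print(estimate)  # varies by draw, but unbiased over repeated samples
```

Averaged over all possible samples of size 3, the estimator recovers the true total of 210 exactly, which is the unbiasedness property the book develops formally.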
Contents
List of Figures
List of Tables
Preface
1 The Central Role of Statistical Data in Economics and Finance
1.1 The Profusion and Diversity of Data Sources in Economics and Finance
1.2 Typology of Statistical Data in Economics and Finance
1.3 The European and International Dimension
2 The Importance of Surveys in Economics and Finance
2.1 The Rise of Surveys in the Statistical Landscape
2.2 Why Surveys Matter in Economics and Finance
2.3 Examples of Surveys in Economics and Finance
2.4 Using Surveys to Describe and Analyse Socio-Economic Inequalities
2.5 Using Surveys to Understand Subjective Dimensions
2.6 Using Surveys to Analyse Household and Business Access to Finance
2.7 Using Surveys in Labour Economics
2.8 Using Surveys to Calculate Price Indexes
2.9 Financial Inclusion
2.10 Specific Features of Survey Data in Economics and Finance
3 Main Concepts and Definitions
3.1 Population, Sample; Parameter and Estimator
3.2 Probability/Non-Probability Sampling Designs
3.3 Sample Representativity
3.4 Sampling Frame and Coverage Errors
3.5 Bias, Variance, Standard Error and Mean Square Error
3.6 Confidence Intervals
3.7 The Bias/Variance Tradeoff
3.8 The Total Survey Error (TSE) Framework
4 Simple Random Sampling
4.1 What Is It?
4.2 Inclusion Probabilities
4.3 Estimating Population Totals and Means
4.4 Estimating Population Counts, Proportions and Net Changes
4.5 Domain Estimation
4.6 Sampling Algorithms
4.7 Software Implementation with the R survey Package
5 Stratification
5.1 What Is It?
5.2 Total, Mean and Proportion Estimators
5.3 Sample Allocation and Optimality
5.4 Choosing Stratification Criteria
5.5 Software Implementation with the R survey Package
6 Unequal Probability Sampling
6.1 The Horvitz-Thompson Estimator
6.2 The Hansen-Hurwitz Estimator
6.3 Software Implementation with the R survey Package
7 Multi-stage Sampling
7.1 Introduction and Notations
7.2 Estimating a Population Total
7.3 Case of Simple Random Sampling at Each Stage
7.4 Design Optimality
7.5 Software Implementation with the R survey Package
8 Multi-phase Sampling
8.1 Introduction and Notations
8.2 The Double-Phase Horvitz-Thompson Estimator
8.3 Design Optimality in Multi-Phase Sampling
9 Indirect Sampling
9.1 Definition
9.2 The Generalised Weight Share Method
9.3 Examples
9.3.1 Compute Household Weights From Individual Weights
9.3.2 Dual-Frame Sampling With Telephone Numbers
10 Non-Probability Sampling
10.1 Examples of Non-Probability Designs
10.1.1 Quota Sampling
10.1.2 Other Examples
10.1.3 Access Panels
10.2 Inference Under Non-Probability Samples
10.2.1 Mass Imputation
10.2.2 Inverse Probability Weighting
10.2.3 Double Robust Approach
10.2.4 Case of Access Panels
10.3 Software Implementation in R
11 Collecting and Editing Survey Data
11.1 Questionnaire Design and Pilot Testing
11.2 Several Modes of Data Collection
11.3 Data Editing and Imputation
11.3.1 What Is It?
11.3.2 Examples of Data Checks
11.3.3 Imputation of Missing Values
12 The Issue of Unit Non-response
12.1 A Major Challenge for Data Quality
12.2 Dealing With Unit Non-response
12.3 Variance Estimation Under Unit Non-response
12.4 Estimating Response Propensities
12.5 Preventing Unit Non-response: The Case of Business Surveys
13 Incorporating Auxiliary Information to Increase Sampling Precision
13.1 The Importance of Incorporating Auxiliary Information
13.2 Examples of Adjusted Estimators
13.2.1 Difference Estimator
13.2.2 Ratio Estimator
13.2.3 Generalised Regression Estimator
13.2.4 The Post-Stratified Estimator
13.3 Unified Calibration Framework
13.4 Software Implementation
13.5 Other Operational Aspects
13.6 Calibration as a Way to Deal with Unit Non-response
13.7 Calibration and Non-Probability Samples
13.8 Penalised Calibration
13.9 Conclusion: Weighting as a Tradeoff Between Accuracy and Volatility
14 The Case of Non-Linear Estimators: Linearisation and Bootstrap
14.1 Introduction and Examples
14.2 Estimating Complex Parameters
14.2.1 Parameters Expressed as Functions of Population Totals
14.2.2 Parameters Expressed Using an Estimating Equation
14.2.3 Quantiles, Quantile Ratios and Other Rank-Based Parameters
14.3 Variance Estimation: The Linearisation Approach
14.3.1 The Seminal Approach
14.3.2 Alternative Linearisation Approaches
14.4 Examples of Linearised Estimators
14.4.1 Ratio of Two Totals
14.4.2 Dispersion
14.4.3 Theil Index
14.4.4 Distribution Quantiles
14.5 Software Implementation: The R convey Package
14.6 Bootstrap and Other Re-sampling Methods
15 Survey Data and Statistical Modelling
15.1 The Case of Linear Regression
15.1.1 The Theoretical Framework
15.1.2 Software Implementation with the R survey Package
15.2 The Case of Logistic Regression
15.2.1 The Theoretical Framework
15.2.2 Software Implementation with the R survey Package
15.3 The Case of Poisson Regression
15.4 To Weight or Not to Weight?
16 Surveys at the Age of Data Science and AI
16.1 A More Diverse Statistical Landscape
16.2 Data Integration
16.2.1 Micro-Simulation: When Surveys Help Forecast Economic Trends
16.2.2 Statistical Matching Between Surveys
16.2.3 Distributional National and Financial Accounts
16.3 Uncovering Meaningful Patterns in Survey Data
16.3.1 Ridge, LASSO and Elastic Net Regression
16.3.2 Random Forests
16.4 The Impact of Artificial Intelligence
16.4.1 Investigating New Topics
16.4.2 Creating Survey Questionnaires
16.4.3 Test Survey Questions
16.4.4 Translating Survey Questions
16.4.5 Advanced Textual Analysis
16.4.6 Other Possible Applications and Challenges Ahead
17 Final Words: Surveys 2.0
Bibliography