Full Description
Shortlisted for the British Psychological Society Book Award 2017
Shortlisted for the British Book Design and Production Awards 2016
Shortlisted for the Association of Learned & Professional Society Publishers Award for Innovation in Publishing 2016

An Adventure in Statistics: The Reality Enigma by best-selling author and award-winning teacher Andy Field offers a better way to learn statistics. It combines rock-solid statistics coverage with compelling visual storytelling to address the conceptual difficulties that students often encounter in introductory statistics courses, guiding them away from rote memorization and towards critical thinking and problem solving. Field masterfully weaves in a unique, action-packed story starring Zach, a character who thinks like a student, processing information, and wrestling with the challenges of understanding it, just as a statistics novice would. Illustrated with stunning graphic novel-style art and featuring Socratic dialogue, the story captivates readers as it introduces them to concepts, defusing potential statistics anxiety.

The book assumes no previous statistics knowledge, nor does it require the use of data analysis software. It covers the material you would expect of an introductory statistics course, but with a contemporary twist that Field's other books (Discovering Statistics Using IBM SPSS Statistics and Discovering Statistics Using R) only touch on, laying down strong foundations for understanding both classical and Bayesian approaches to data analysis. In doing so, it provides an unrivalled launch pad for further study, research and inquisitiveness about the real world, equipping students with skills they can use to succeed in their chosen degree and go on to apply in the workplace.

The Story and Main Characters

The Reality Revolution
In the City of Elpis, in the year 2100, there has been a reality revolution.
Prior to the revolution, Elpis citizens were unable to see their flaws and limitations, believing themselves talented and special. This led to a self-absorbed society in which hard work and the collective good were undervalued and eroded. To combat this, Professor Milton Grey invented the reality prism, a hat that allowed its wearers to see themselves as they really were, flaws and all. Faced with the truth, Elpis citizens revolted, destroying and banning all reality prisms.

The Mysterious Disappearance
Zach and Alice are born soon after all the prisms have been destroyed. Zach, a musician who doesn't understand science, and Alice, a geneticist who is also a whiz at statistics, are in love. One night, after making a world-changing discovery, Alice suddenly disappears, leaving behind a song playing on a loop and a file containing her research.

Statistics to the Rescue!
Sensing that she might be in danger, Zach follows the clues to find her, realizing that the key to discovering why Alice has vanished lies in her research. Alas! He must learn statistics and apply what he learns in order to overcome a number of deadly challenges and find the love of his life. As Zach and his pocket watch, The Head, embark on their quest to find Alice, they meet Professor Milton Grey and Celia, battle zombies, cross a probability bridge, and encounter Jig:Saw, a mysterious corporation that might have something to do with Alice's disappearance...
Contents
Prologue
1 Why You Need Science: The Beginning and The End
1.1. Will you love me now?
1.2. How science works
1.2.1. The research process
1.2.2. Science as a life skill
1.3. Research methods
1.3.1. Correlational research methods
1.3.2. Experimental research methods
1.3.3. Practice, order and randomization
1.4. Why we need science
2 Reporting Research, Variables and Measurement: Breaking the Law
2.1. Writing up research
2.2. Maths and statistical notation
2.3. Variables and measurement
2.3.1. The conspiracy unfolds
2.3.2. Qualitative and quantitative data
2.3.3. Levels of measurement
2.3.4. Measurement error
2.3.5. Validity and reliability
3 Summarizing Data: She Loves Me Not?
3.1. Frequency distributions
3.1.1. Tabulated frequency distributions
3.1.2. Grouped frequency distributions
3.1.3. Graphical frequency distributions
3.1.4. Idealized distributions
3.1.5. Histograms for nominal and ordinal data
3.2. Throwing Shapes
4 Fitting Models (Central Tendency): Somewhere In The Middle
4.1. Statistical Models
4.1.1. From the dead
4.1.2. Why do we need statistical models?
4.1.3. Sample size
4.1.4. The one and only statistical model
4.2. Central Tendency
4.2.1. The mode
4.2.2. The median
4.2.3. The mean
4.3. The 'fit' of the mean: variance
4.3.1. The fit of the mean
4.3.2. Estimating the fit of the mean from a sample
4.3.3. Outliers and variance
4.4. Dispersion
4.4.1. The standard deviation as an indication of dispersion
4.4.2. The range and interquartile range
5 Presenting Data: Aggressive Perfector
5.1. Types of graphs
5.2. Another perfect day
5.3. The art of presenting data
5.3.1. What makes a good graph?
5.3.2. Bar graphs
5.3.3. Line graphs
5.3.4. Boxplots (box-whisker diagrams)
5.3.5. Graphing relationships: the scatterplot
5.3.6. Pie charts
6 Z-Scores: The Wolf is Loose
6.1. Interpreting raw scores
6.2. Standardizing a score
6.3. Using z-scores to compare distributions
6.4. Using z-scores to compare scores
6.5. Z-scores for samples
7 Probability: The Bridge of Death
7.1. Probability
7.1.1. Classical probability
7.1.2. Empirical probability
7.2. Probability and frequency distributions
7.2.1. The discs of death
7.2.2. Probability density functions
7.2.3. Probability and the normal distribution
7.2.4. The probability of a score greater than x
7.2.5. The probability of a score less than x: The tunnels of death
7.2.6. The probability of a score between two values: The catapults of death
7.3. Conditional probability: Deathscotch
8 Inferential Statistics: Going Beyond the Data
8.1. Estimating parameters
8.2. How well does a sample represent the population?
8.2.1. Sampling distributions
8.2.2. The standard error
8.2.3. The central limit theorem
8.3. Confidence Intervals
8.3.1. Calculating confidence intervals
8.3.2. Calculating other confidence intervals
8.3.3. Confidence intervals in small samples
8.4. Inferential statistics
9 Robust Estimation: Man Without Faith or Trust
9.1. Sources of bias
9.1.1. Extreme scores and non-normal distributions
9.1.2. The mixed normal distribution
9.2. A great mistake
9.3. Reducing bias
9.3.1. Transforming data
9.3.2. Trimming data
9.3.3. M-estimators
9.3.4. Winsorizing
9.3.5. The bootstrap
9.4. A final point about extreme scores
10 Hypothesis Testing: In Reality All is Void
10.1. Null hypothesis significance testing
10.1.1. Types of hypothesis
10.1.2. Fisher's p-value
10.1.3. The principles of NHST
10.1.4. Test statistics
10.1.5. One- and two-tailed tests
10.1.6. Type I and Type II errors
10.1.7. Inflated error rates
10.1.8. Statistical power
10.1.9. Confidence intervals and statistical significance
10.1.10. Sample size and statistical significance
11 Modern Approaches to Theory Testing: A Careworn Heart
11.1. Problems with NHST
11.1.1. What can you conclude from a 'significance' test?
11.1.2. All-or-nothing thinking
11.1.3. NHST is influenced by the intentions of the scientist
11.2. Effect sizes
11.2.1. Cohen's d
11.2.2. Pearson's correlation coefficient, r
11.2.3. The odds ratio
11.3. Meta-analysis
11.4. Bayesian approaches
11.4.1. Asking a different question
11.4.2. Bayes' theorem revisited
11.4.3. Comparing hypotheses
11.4.4. Benefits of Bayesian approaches
12 Assumptions: Starblind
12.1. Fitting models: bringing it all together
12.2. Assumptions
12.2.1. Additivity and linearity
12.2.2. Independent errors
12.2.3. Homoscedasticity/homogeneity of variance
12.2.4. Normally distributed something or other
12.2.5. External variables
12.2.6. Variable types
12.2.7. Multicollinearity
12.2.8. Non-zero variance
12.3. Turning ever towards the sun
13 Relationships: A Stranger's Grave
13.1. Finding relationships in categorical data
13.1.1. Pearson's chi-square test
13.1.2. Assumptions
13.1.3. Fisher's exact test
13.1.4. Yates's correction
13.1.5. The likelihood ratio (G-test)
13.1.6. Standardized residuals
13.1.7. Calculating an effect size
13.1.8. Using a computer
13.1.9. Bayes factors for contingency tables
13.1.10. Summary
13.2. What evil lay dormant
13.3. Modelling relationships
13.3.1. Covariance
13.3.2. Pearson's correlation coefficient
13.3.3. The significance of the correlation coefficient
13.3.4. Confidence intervals for r
13.3.5. Using a computer
13.3.6. Robust estimation of the correlation
13.3.7. Bayesian approaches to relationships between two variables
13.3.8. Correlation and causation
13.3.9. Calculating the effect size
13.4. Silent sorrow in empty boats
14 The General Linear Model: Red Fire Coming Out From His Gills
14.1. The linear model with one predictor
14.1.1. Estimating parameters
14.1.2. Interpreting regression coefficients
14.1.3. Standardized regression coefficients
14.1.4. The standard error of b
14.1.5. Confidence intervals for b
14.1.6. Test statistic for b
14.1.7. Assessing the goodness of fit
14.1.8. Fitting a linear model using a computer
14.1.9. When this fails
14.2. Bias in the linear model
14.3. A general procedure for fitting linear models
14.4. Models with several predictors
14.4.1. The expanded linear model
14.4.2. Methods for entering predictors
14.4.3. Estimating parameters
14.4.4. Using a computer to build more complex models
14.5. Robust regression
14.5.1. Bayes factors for linear models
15 Comparing Two Means: Rock or Bust
15.1. Testing differences between means: The rationale
15.2. Means and the linear model
15.2.1. Estimating the model parameters
15.2.2. How the model works
15.2.3. Testing the model parameters
15.2.4. The independent t-test on a computer
15.2.5. Assumptions of the model
15.3. Everything you believe is wrong
15.4. The paired-samples t-test
15.4.1. The paired-samples t-test on a computer
15.5. Alternative approaches
15.5.1. Effect sizes
15.5.2. Robust tests of two means
15.5.3. Bayes factors for comparing two means
16 Comparing Several Means: Faith in Others
16.1. General procedure for comparing means
16.2. Comparing several means with the linear model
16.2.1. Dummy coding
16.2.2. The F-ratio as a test of means
16.2.3. The total sum of squares (SSt)
16.2.4. The model sum of squares (SSm)
16.2.5. The residual sum of squares (SSr)
16.2.6. Partitioning variance
16.2.7. Mean squares
16.2.8. The F-ratio
16.2.9. Comparing several means using a computer
16.3. Contrast coding
16.3.1. Generating contrasts
16.3.2. Devising weights
16.3.3. Contrasts and the linear model
16.3.4. Post hoc procedures
16.3.5. Contrasts and post hoc tests using a computer
16.4. Storm of memories
16.5. Repeated-measures designs
16.5.1. The total sum of squares, SSt
16.5.2. The within-participant variance, SSw
16.5.3. The model sum of squares, SSm
16.5.4. The residual sum of squares, SSr
16.5.5. Mean squares and the F-ratio
16.5.6. Repeated-measures designs using a computer
16.6. Alternative approaches
16.6.1. Effect sizes
16.6.2. Robust tests of several means
16.6.3. Bayesian analysis of several means
16.7. The invisible man
17 Factorial Designs
17.1. Factorial designs
17.2. General procedure and assumptions
17.3. Analysing factorial designs
17.3.1. Factorial designs and the linear model
17.3.2. The fit of the model
17.3.3. Factorial designs on a computer
17.4. From the pinnacle to the pit
17.5. Alternative approaches
17.5.1. Calculating effect sizes
17.5.2. Robust analysis of factorial designs
17.5.3. Bayes factors for factorial designs
17.6. Interpreting interaction effects
Epilogue: The Genial Night: Si Momentum Requiris, Circumspice



