Contents

  1. Preface

    Danielle Navarro and Emily Kothe

    1. 0.1 Preface to Version 0.6.1
    2. 0.2 Preface to Version 0.6
    3. 0.3 Preface to Version 0.5
    4. 0.4 Preface to Version 0.4
    5. 0.5 Preface to Version 0.3
  2. I. Background
    1. Why do we learn statistics?
      1. 1.1 On the psychology of statistics
      2. 1.2 The cautionary tale of Simpson’s paradox
      3. 1.3 Statistics in psychology
      4. 1.4 Statistics in everyday life
      5. 1.5 There’s more to research methods than statistics
    2. A brief introduction to research design
      1. 2.1 Introduction to psychological measurement
      2. 2.2 Scales of measurement
      3. 2.3 Assessing the reliability of a measurement
      4. 2.4 The “role” of variables: predictors and outcomes
      5. 2.5 Experimental and non-experimental research
      6. 2.6 Assessing the validity of a study
      7. 2.7 Confounds, artifacts and other threats to validity
      8. 2.8 Summary
  3. II. An introduction to R
    3. Getting started with R
      1. 3.1 Installing R
      2. 3.2 Typing commands at the R console
      3. 3.3 Doing simple calculations with R
      4. 3.4 Storing a number as a variable
      5. 3.5 Using functions to do calculations
      6. 3.6 Letting RStudio help you with your commands
      7. 3.7 Storing many numbers as a vector
      8. 3.8 Storing text data
      9. 3.9 Storing “true or false” data
      10. 3.10 Indexing vectors
      11. 3.11 Quitting R
      12. 3.12 Summary
  4. III. Working with data
    5. Descriptive statistics
      1. 5.1 Measures of central tendency
      2. 5.2 Measures of variability
      3. 5.3 Skew and kurtosis
      4. 5.4 Getting an overall summary of a variable
      5. 5.5 Descriptive statistics separately for each group
      6. 5.6 Standard scores
      7. 5.7 Correlations
      8. 5.8 Handling missing values
      9. 5.9 Summary
      10. 5.10 Epilogue: Good descriptive statistics are descriptive!
    6. Drawing graphs
      1. 6.1 An overview of R graphics
      2. 6.2 An introduction to plotting
      3. 6.3 Histograms
      4. 6.4 Stem and leaf plots
      5. 6.5 Boxplots
      6. 6.6 Scatterplots
      7. 6.7 Bar graphs
      8. 6.8 Saving image files using R and RStudio
      9. 6.9 Summary
    7. Pragmatic matters
      1. 7.1 Tabulating and cross-tabulating data
      2. 7.2 Transforming and recoding a variable
      3. 7.3 A few more mathematical functions and operations
      4. 7.4 Extracting a subset of a vector
      5. 7.5 Extracting a subset of a data frame
      6. 7.6 Sorting, flipping and merging data
      7. 7.7 Reshaping a data frame
      8. 7.8 Working with text
      9. 7.9 Reading unusual data files
      10. 7.10 Coercing data from one class to another
      11. 7.11 Other useful data structures
      12. 7.12 Miscellaneous topics
      13. 7.13 Summary
    8. Basic programming
      1. 8.1 Scripts
      2. 8.2 Loops
      3. 8.3 Conditional statements
      4. 8.4 Writing functions
      5. 8.5 Implicit loops
      6. 8.6 Summary
  5. IV. Statistical theory
    9. Introduction to probability
      1. 9.1 How are probability and statistics different?
      2. 9.2 What does probability mean?
      3. 9.3 Basic probability theory
      4. 9.4 The binomial distribution
      5. 9.5 The normal distribution
      6. 9.6 Other useful distributions
      7. 9.7 Summary
    10. Estimating unknown quantities from a sample
      1. 10.1 Samples, populations and sampling
      2. 10.2 The law of large numbers
      3. 10.3 Sampling distributions and the central limit theorem
      4. 10.4 Estimating population parameters
      5. 10.5 Estimating a confidence interval
      6. 10.6 Summary
    11. Hypothesis testing
      1. 11.1 A menagerie of hypotheses
      2. 11.2 Two types of errors
      3. 11.3 Test statistics and sampling distributions
      4. 11.4 Making decisions
      5. 11.5 The \(p\) value of a test
      6. 11.6 Reporting the results of a hypothesis test
      7. 11.7 Running the hypothesis test in practice
      8. 11.8 Effect size, sample size and power
      9. 11.9 Some issues to consider
      10. 11.10 Summary
  6. V. Statistical tools
    12. Categorical data analysis
      1. 12.1 The \(\chi^2\) goodness-of-fit test
      2. 12.2 The \(\chi^2\) test of independence (or association)
      3. 12.3 The continuity correction
      4. 12.4 Effect size
      5. 12.5 Assumptions of the test(s)
      6. 12.6 The most typical way to do chi-square tests in R
      7. 12.7 The Fisher exact test
      8. 12.8 The McNemar test
      9. 12.9 What’s the difference between McNemar and independence?
      10. 12.10 Summary
    13. Comparing two means
      1. 13.1 The one-sample \(z\)-test
      2. 13.2 The one-sample \(t\)-test
      3. 13.3 The independent samples \(t\)-test (Student test)
      4. 13.4 The independent samples \(t\)-test (Welch test)
      5. 13.5 The paired-samples \(t\)-test
      6. 13.6 One sided tests
      7. 13.7 Using the t.test() function
      8. 13.8 Effect size
      9. 13.9 Checking the normality of a sample
      10. 13.10 Testing non-normal data with Wilcoxon tests
      11. 13.11 Summary
    14. Comparing several means (one-way ANOVA)
      1. 14.1 An illustrative data set
      2. 14.2 How ANOVA works
      3. 14.3 Running an ANOVA in R
      4. 14.4 Effect size
      5. 14.5 Multiple comparisons and post hoc tests
      6. 14.6 Assumptions of one-way ANOVA
      7. 14.7 Checking the homogeneity of variance assumption
      8. 14.8 Removing the homogeneity of variance assumption
      9. 14.9 Checking the normality assumption
      10. 14.10 Removing the normality assumption
      11. 14.11 On the relationship between ANOVA and the Student \(t\) test
      12. 14.12 Summary
    15. Linear regression
      1. 15.1 What is a linear regression model?
      2. 15.2 Estimating a linear regression model
      3. 15.3 Multiple linear regression
      4. 15.4 Quantifying the fit of the regression model
      5. 15.5 Hypothesis tests for regression models
      6. 15.6 Testing the significance of a correlation
      7. 15.7 Regarding regression coefficients
      8. 15.8 Assumptions of regression
      9. 15.9 Model checking
      10. 15.10 Model selection
      11. 15.11 Summary
    16. Factorial ANOVA
      1. 16.1 Factorial ANOVA 1: balanced designs, no interactions
      2. 16.2 Factorial ANOVA 2: balanced designs, interactions allowed
      3. 16.3 Effect size, estimated means, and confidence intervals
      4. 16.4 Assumption checking
      5. 16.5 The \(F\) test as a model comparison
      6. 16.6 ANOVA as a linear model
      7. 16.7 Different ways to specify contrasts
      8. 16.8 Post hoc tests
      9. 16.9 The method of planned comparisons
      10. 16.10 Factorial ANOVA 3: unbalanced designs
      11. 16.11 Summary
  7. VI. Endings, alternatives and prospects
    17. Bayesian statistics
      1. 17.1 Probabilistic reasoning by rational agents
      2. 17.2 Bayesian hypothesis tests
      3. 17.3 Why be a Bayesian?
      4. 17.4 Evidentiary standards you can believe
      5. 17.5 The \(p\)-value is a lie.
      6. 17.6 Bayesian analysis of contingency tables
      7. 17.7 Bayesian \(t\)-tests
      8. 17.8 Bayesian regression
      9. 17.9 Bayesian ANOVA
      10. 17.10 Summary
    18. Epilogue
      1. The undiscovered statistics
      2. Statistical models missing from the book
      3. Other ways of doing inference
      4. Learning the basics, and learning them in R