Legacy Course Pack for Statistics
About this Book
1 License
2 Introduction to Statistics
2.1 What is Statistics?
2.2 Variability
2.3 Descriptive versus Inferential Statistics
2.4 Populations and Samples
2.5 Constructs versus Measures
2.6 Classifying Measurement Scales
2.6.1 Level of Measurement
2.6.2 Continuous or Discrete
2.6.3 Qualitative or Quantitative
2.7 Experimental, Quasi-Experimental, and Non-Experimental Studies
2.8 The word “data” is Plural
3 Descriptive Statistics and Data Visualization
3.1 Review the Calculator Guide
3.2 Frequency
3.3 Frequency Distributions
3.4 Frequency Table – By Hand
3.5 Frequency Table – SPSS
3.6 Histograms: By Hand
3.7 Histograms for Continuous Variables, Bar Graphs for Discrete Variables
3.8 Histograms: Using SPSS
3.9 Frequency Polygon
3.10 Skewness & Kurtosis: From a Histogram
3.11 Measures of Central Tendency
3.12 Mode
3.13 Mean
3.14 Median
3.15 Outliers
3.16 Finding Outliers in SPSS
3.17 Which Measures of Central Tendency Are Appropriate?
3.18 Skewness – From Measures of Central Tendency
3.19 Skewness & Kurtosis – From SPSS
3.20 Population Parameters versus Sample Statistics
3.21 Range
3.22 Interquartile Range
3.23 Box Plots
3.24 Sum of Squares
3.25 Variance
3.26 Standard Deviation (SD)
3.27 Standard Deviation is More Useful than Variance or Sum of Squares
3.27.1 Unlike variance, SD is expressed in the same units as the variable
3.27.2 In normally distributed data, a majority of the scores (about 68%) fall within ±1 standard deviation of the mean.
3.27.3 In normally distributed data, most of the scores (about 95%) fall within ±2 standard deviations of the mean.
3.27.4 In normally distributed data, nearly all of the scores (about 99.7%) fall within ±3 standard deviations of the mean.
3.28 When do we find normally distributed data?
4 The Normal Distribution & z-Scores
4.1 Normal Distributions Are Special
4.2 Probability Density Functions (PDFs)
4.3 Probability and Areas Under the Curve
4.3.1 Classical (or theoretical) probability: When each outcome is equally likely, the probability of an event equals the number of outcomes in the event divided by the number of possible outcomes (the set of all possible outcomes is called the sample space).
4.3.2 Addition rule for mutually exclusive events: When two events are mutually exclusive, add their separate probabilities to find the probability that any one of them will occur.
4.3.3 Multiplication rule for independent events: When one event has no effect on the probability of the other, multiply their separate probabilities to find the probability that both events will occur together.
4.4 z-Scores
4.5 What do z-Scores Tell You?
4.6 Why are Normally-distributed z-Scores Useful?
4.7 Transforming a Raw Score into a z-Score
4.8 Transforming a z-Score into a Raw Score
4.9 Finding z-Scores in SPSS
4.10 Areas Under the Curve – NormalCDF and InverseNormal
5 Sampling Distributions
5.1 Review: Two branches of statistics
5.2 Sampling
5.3 Statistics are based on samples; parameters are based on populations
5.4 Sampling Distribution
5.5 Standard Error
5.6 The Central Limit Theorem
5.7 Why are we doing this? Point estimates
5.8 Confidence Intervals
6 Hypothesis Testing
6.1 Key concept review: The sampling distribution of the mean
7 Probability of selecting a sample with a particular mean: the z-test
8 One- or two-tailed test?
9 Null Hypothesis Significance Testing (NHST)
9.1 Power and Errors
9.2 NHST is Confusing
10 T-Tests
10.1 Experimental Design: Between-Subjects vs Within-Subjects
10.2 Differences between groups
10.3 Degrees of Freedom
10.4 Comparison of tests that compare groups
10.5 T-Tests Versus the Z-Test
10.6 Conducting a One Sample t-Test
10.6.1 Hypotheses
10.6.2 Analysis
10.6.3 Decide
10.6.4 Conclude
10.6.5 Confidence Interval & Margin of Error
10.6.6 Effect Size
10.6.7 Interpretation of d (Cohen, 1988)
10.7 Conducting a Paired (Related) Samples t-Test
10.7.1 Check Assumptions
10.7.2 Data Needed
10.7.3 Hypotheses
10.7.4 Analysis & Decision – By Hand
10.7.5 Decide
10.7.6 Conclude
10.7.7 Effect Size
10.7.8 Interpretation of d (Cohen, 1988)
10.7.9 Interpretation of \(\eta^2\) and \(r^2\) (Cohen, 1988)
10.7.10 Reporting your Results
10.8 Reporting p-Values from SPSS
10.9 APA Style Basics for Results Paragraphs
10.10 Conducting an Independent Samples t-Test
10.10.1 Check Assumptions
10.10.2 Data Needed
10.10.3 Write Hypotheses
10.10.4 Analyze & Decide – By Hand
10.10.5 Analyze & Decide – Using SPSS
10.10.6 Conclude
10.10.7 Effect Size
11 Correlation
11.1 What is correlation?
11.2 Correlation and Causation
11.3 Sample Size
11.4 Computing a Correlation
11.5 Correlations are Sensitive to Outliers
11.6 Hypothesis Testing for a Correlation
11.7 Effect size
11.7.1 Interpretation of \(\eta^2\) and \(r^2\) (Cohen, 1988)
12 Regression
12.1 IVs and DVs
12.2 The Regression Line
12.3 Finding the Regression Line
12.4 Simple Regression versus Multiple Regression
12.5 The General Linear Model
12.6 The Regression Coefficients
12.7 Regression – SPSS
12.7.1 Research Questions
12.7.2 Check Assumptions
12.7.3 Write Hypotheses
12.7.4 Analyze
12.7.5 Decide
12.7.6 Conclude
12.8 Results Paragraph
13 One-Way Analysis of Variance (ANOVA), Between-Subjects
13.1 What does ANOVA do?
13.2 Why not Multiple T-Tests?
13.3 Three Kinds of t-Tests, Two Kinds of ANOVA
13.4 One-Way Between-Subjects ANOVA – By Hand
13.4.1 Hypotheses
13.4.2 Analysis – By Hand
13.5 One-Way Between-Subjects ANOVA – SPSS
13.5.1 Hypothesize
13.5.2 Analyze
13.5.3 Check Assumptions
13.5.4 Decide
13.5.5 Post Hoc Tests
13.5.6 Effect size
13.6 Results Paragraph
13.6.1 Confidence intervals and point estimates
14 One-Way Analysis of Variance (ANOVA), Within-Subjects
14.1 Within-Subjects ANOVA vs Between-Subjects ANOVA
14.2 The F Ratio
14.3 One-Way Repeated Measures ANOVA (Within-Subjects) – SPSS
14.3.1 Hypotheses
14.3.2 Analysis
14.3.3 Check Assumptions
14.3.4 Decide
14.3.5 Post Hoc Tests
14.3.6 Conclude
15 Two-Way Analysis of Variance (ANOVA)
15.1 Interactions
15.2 Some Terminology
15.3 Main Effects
15.4 Simple Effects
15.5 Interaction Effect
15.6 Factorial ANOVA is really 3 ANOVAs
15.7 Interpret Interaction Effects First
15.8 Hypotheses
15.9 Analysis: Two-Way ANOVA (Between-Subjects, Within-Subjects, or Mixed Model)
15.9.1 Differences from One-Way ANOVA
15.9.2 Analyzing fully Between-Subjects Designs
15.9.3 Analyzing fully Within-Subjects Designs
15.9.4 Analyzing Mixed Designs
15.9.5 SPSS Drops the Ball On Simple Effects Tests
15.10 Check Assumptions
15.11 Decision
15.11.1 Omnibus Tests for Between-Subjects Design or the Between-Subjects Factor in a Mixed Design
15.11.2 Omnibus Tests for Within-Subjects Design or the Within-Subjects Factor in a Mixed Design
15.11.3 Post Hoc Tests
15.12 Conclusion
15.13 Graphing Interactions
16 Selecting the Right Test
17 One Variable and Two Variable Chi-Square
17.1 Nonparametric tests
17.2 Two kinds of chi-square tests
17.3 Chi-square goodness-of-fit test (one-sample chi-square)
17.3.1 Write Hypotheses
17.3.2 Check Assumptions
17.3.3 Analyze
17.3.4 Decide
17.3.5 Conclude
17.4 Chi-square test for independence (two-sample chi-square)
17.4.1 Data Needed
17.4.2 Write Hypotheses
17.4.3 Check Assumptions
17.4.4 Analysis
17.4.5 Decision
17.4.6 Conclusion
17.4.7 Interpretation of \(\phi\) (phi; Cohen, 1988)
17.5 Results Paragraph
Appendix: Interpreting Effect Sizes and Reporting p-Values
17.6 Interpretation of \(\eta^2\) and \(r^2\) (Cohen, 1988)
17.7 Interpretation of \(r\) (Cohen, 1988; Note: This is not \(r^2\))
17.8 Interpretation of d (Cohen, 1988)
17.9 Interpretation of \(\phi\) (phi; Cohen, 1988)
17.10 Reporting p-Values from SPSS
References
© David Schuster