## Related and Independent t-Tests

Research Methods and Data Analysis for Psychology.

For this Hand-in Assignment, refer to Part 3 of the SPSS workbook. The workbook provides several worked-out problems that serve as examples of how to perform the statistical analyses presented this week, followed by four problems in Part 3 that you must complete on your own and hand in. (The IBM SPSS Statistics Viewer is required for this document.)

Assignment question:

For each of the (4) studies described in Part 3 of the SPSS workbook:

• Specify the design of each study, what the dependent and independent variables are, and then perform the appropriate analysis.

• Report the findings in the standard notation, and state what the implications of your analysis are for the hypotheses.

• Use the information in the worked-out problems in the first part of your workbook as guidance on how to present your findings.

Support your Hand-in Assignment with specific references to all resources used in its preparation.

© Laureate Education, Inc.


Week 5: SPSS-PASW Workbook

Week Five: One-Way Designs

This week’s SPSS-PASW Workbook contains three parts, all focusing on one-way designs; in particular, the focus is on t-Tests and ANOVA. Part one demonstrates how to perform t-Tests in SPSS-PASW. Part two demonstrates how to perform ANOVA in SPSS-PASW. Part three contains several problems for you to answer and submit as your hand-in assignment for this week.

Part 1: t-Tests

Parametric t-Tests (Related and Unrelated)

A t-test determines whether the means of two sets of scores are significantly different from each other.

Related (aka Dependent) Samples t-Test

This is the parametric comparison of two related sets of scores: for example, comparing subjects’ mean scores on some task before and after taking a drug.

First, open the family.sav file (provided in this week’s Learning Resources). Then, add a variable to the data file so that you can run the related t-test. In this case, the comparison will be between each subject’s height/weight ratio before and after a 4-week diet/exercise plan. The variable HWRATIO, already in the data set, is the ‘before’ measure. At the end of the data file, add the variable HWRATIO2 to represent each individual’s measurement after the plan. Using what you learned in Week 4 about entering data, create the new variable using the information below:

Variable Name: HWRATIO2

Variable Label: Height/Weight Ratio after plan

Measure: Scale

Data: see table 5.1 below

To run the procedure, select ANALYZE > COMPARE MEANS > PAIRED-SAMPLES T TEST.

A dialogue box will appear. The dialogue box has a two-column format. SPSS-PASW will analyse each pair to determine if their means are statistically significantly different. In this case, under PAIR 1, select the variables using the arrows—HWRATIO as VARIABLE 1 and HWRATIO2 as VARIABLE 2—then click the ‘OK’ button.

Table 5.1: Data for Height/Weight Ratio after a 4-week diet/exercise plan
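If you want to cross-check the paired-samples result outside SPSS, the same test can be sketched in Python with SciPy. This is a minimal illustration only: the before/after values below are made-up stand-ins, not the actual family.sav data.

```python
# Hedged sketch of a paired-samples (related) t-test with SciPy.
# The values below are hypothetical illustration data, not family.sav.
from scipy import stats

before = [0.50, 0.47, 0.52, 0.44, 0.49, 0.51]   # HWRATIO (hypothetical)
after  = [0.53, 0.50, 0.55, 0.48, 0.52, 0.54]   # HWRATIO2 (hypothetical)

# ttest_rel pairs the scores subject by subject, like PAIRED-SAMPLES T TEST
t, p = stats.ttest_rel(before, after)
print(f"t = {t:.2f}, two-tailed p = {p:.4f}")
```

Because every hypothetical subject's ratio rises after the plan, the t-value comes out negative (first sample minus second), mirroring the negative t reported by SPSS in this example.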


Output

The results appear in three sections:

• The first section gives you a table called Paired Samples Statistics, with the mean scores, standard deviations, and standard error of the mean for the two variables.

• The second section is a table called Paired Samples Correlations, showing the correlation between the two variables and the level of significance.

• The third section is a table called Paired Samples Test, which indicates the significance of the results. This includes the t-value, degrees of freedom (df), and the two-tailed significance level.

• What is the t-value for the comparison between the height-to-weight ratio scores?

The t-value for the comparison between the two height-to-weight ratio scores is -5.30.


• Is there a significant difference between the scores before and after the diet/exercise plan? If so, which is the greater height/weight ratio?

The results of this paired-samples t-test indicate that there is a significant difference between the scores before and after the diet/exercise plan: the ‘Sig. (2-tailed)’ value is less than .001 (p < .001). The difference reflects a greater height/weight ratio after the diet/exercise plan (M = .52) than before the plan (M = .48).

t-Test for Independent Samples

This is the parametric t-test for two independent samples—a between-subjects design where, for example, subjects are randomly assigned to two separate test conditions (e.g. drug and control) and the mean scores (e.g. reaction time) are compared to determine if they are significantly different from each other.

In this example, you test whether there is a statistical difference in weight-to-height ratios between the male and female subjects. The format for variables to be used in the independent t-test is different from that used in the related. Instead of the scores being placed in two separate columns (variables), all of the scores are placed in a single column (variable). A second variable identifies for SPSS-PASW which of the two groups each score belongs to. So, in this example, there is the variable HWRATIO2 as the dependent variable and NSEX as the independent variable.

To run the analysis, go to ANALYZE > COMPARE MEANS > INDEPENDENT-SAMPLES T TEST. As usual, the left column lists all the variables in your data file. On the right, there are two boxes:

• The ‘Test Variable(s)’ box is where you move the dependent variable(s). (e.g., HWRATIO2)

• The ‘Grouping Variable’ box is where you move the variable that distinguishes between the two independent groups (e.g. NSEX)

First, select the dependent variable HWRATIO2 and move it over to the ‘Test Variable(s)’ box. Next, move NSEX over into the ‘Grouping Variable’ box and click the DEFINE GROUPS button. Values from the grouping variable must be entered into the two boxes. For the variable NSEX, where only two levels are recorded, enter ‘1’ in the ‘Group 1’ box for male subjects, and ‘2’ in the ‘Group 2’ box for female subjects. Then, click ‘Continue’ and ‘OK.’

Note: There may be times where you have a larger range of values, such as five different education levels, but only want to look at the difference between two of them. You would enter the two values you wish to compare.
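The decision rule described in the output section below (check Levene's test first, then read the matching t-test line) can also be sketched outside SPSS with SciPy. The two groups below are hypothetical values, not the family.sav data.

```python
# Hedged sketch: Levene's test for equality of variance, then an
# independent-samples t-test with equal_var chosen accordingly.
# Data are hypothetical, not the family.sav values.
from scipy import stats

male   = [0.50, 0.53, 0.49, 0.55, 0.52]
female = [0.48, 0.51, 0.47, 0.50, 0.49]

lev_stat, lev_p = stats.levene(male, female)   # equality-of-variance check
equal_var = lev_p > 0.05                       # True -> 'Equal variances assumed'
t, p = stats.ttest_ind(male, female, equal_var=equal_var)
print(f"Levene p = {lev_p:.3f}; t = {t:.2f}, two-tailed p = {p:.3f}")
```

Note that `equal_var=False` selects Welch's version of the test, which corresponds to reading the ‘Equal variances not assumed’ line in the SPSS table.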


Output

There are two sections:

• The first section of the output gives you a table called Group Statistics, which indicates the number of cases and the mean scores for each condition.

• The second section provides a table called Independent Samples Test, which starts with Levene’s Test for Equality of Variance. If the variances are unequal, indicated by a significant result (p < .05), then look at the line ‘Equal variances not assumed’ and examine the two-tailed significance level. On the other hand, if the result is not significant (p > .05), look at the line ‘Equal variances assumed’ and examine the two-tailed significance level. The ‘t-test for Equality of Means’ area of the table gives you t-values, degrees of freedom, and the two-tailed significance levels.

In this example, Levene’s test is not significant (p = .137), so you should look at the ‘Equal variances assumed’ line of the output table. The t-test for Equality of Means is not significant (two-tailed significance of .478), so you should reject the hypothesis that there is a difference between males and females in their height-to-weight ratios.

Non-Parametric t-Tests—Mann-Whitney, Unrelated and Wilcoxon, Related

Mann-Whitney – Unrelated

Go back to the family.sav file.

This is the non-parametric t-test for two independent samples—a between-subjects design. To run the analysis, select ANALYZE > NONPARAMETRIC TESTS > LEGACY DIALOGS > 2 INDEPENDENT SAMPLES.

The left column lists all the variables in your data file. On the right, there are two boxes:

• The ‘Test Variable List’ box is where you move the dependent variable(s).

• The ‘Grouping Variable’ box is where you move the variable that distinguishes between the two independent groups.

Move HWRATIO2 into the ‘Test Variable List’ box, and move NSEX into the ‘Grouping Variable’ box. Then, click the ‘Define Groups’ button. Values from the grouping variable must be entered into the two boxes. In the case of the variable


NSEX, enter ‘1’ in the ‘Group 1’ box for male subjects, and ‘2’ in the ‘Group 2’ box for female subjects. Click ‘Continue’ and ‘OK.’
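For comparison outside SPSS, the Mann-Whitney procedure can be sketched with SciPy. The groups below are hypothetical values, not the family.sav data.

```python
# Hedged sketch: Mann-Whitney U test, the non-parametric counterpart of
# the independent t-test. Data are hypothetical, not family.sav.
from scipy import stats

male   = [0.50, 0.53, 0.49, 0.55, 0.52]
female = [0.48, 0.51, 0.47, 0.50, 0.49]

# The test ranks all scores together, then compares the rank sums
u, p = stats.mannwhitneyu(male, female, alternative='two-sided')
print(f"U = {u}, two-tailed p = {p:.3f}")
```

The ranking step is what SPSS summarises in its ‘Ranks’ table; the U statistic and p value correspond to the ‘Test Statistics’ table.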

Output

SPSS-PASW divides the entire set of subjects into three groups:

? those with a score of 1 (male)

? those with a score of 2 (female)

? those with missing data, which are excluded from the analysis

The first table, ‘Ranks,’ gives the mean ranks for the two conditions that are included (Male and Female), as well as the sums of the ranks and the numbers of cases.

The second table, ‘Test Statistics,’ gives the Z score and p-value for the test.

• Is there a difference between males and females? How do the results compare to the previous parametric analysis?

The results of this Mann-Whitney test indicate that there is not a difference between males and females in terms of height-to-weight ratio, since p is not significant (p > .05). Comparing these results to those of the previous parametric analysis, note that they produce the same conclusion: the hypothesis (H1) that there is a difference between male and female height-to-weight ratios should be rejected.

Wilcoxon – Related

This is the non-parametric repeated-measures t-test, used in a within-subjects design. Like the parametric equivalent, you will complete a comparison of height-to-weight ratios for the sample before and after a four-week exercise/diet program. To run the analysis, continue in the family.sav file and select ANALYZE > NONPARAMETRIC TESTS > LEGACY DIALOGS > 2 RELATED SAMPLES.

The dialogue box that appears has a two-column format. SPSS-PASW will analyse each pair to determine if their means are statistically significantly different. In this case, under PAIR 1, select HWRATIO as VARIABLE 1 and HWRATIO2 as VARIABLE 2. Then, click ‘OK.’

Output

The output for this procedure is quite different from the parametric test. The first table, ‘Ranks,’ gives you information about how many rank scores for one


condition are less than (Negative Ranks), greater than (Positive Ranks), or equal to (Ties) the rank scores for the other condition. The mean ranks for each of these three levels are given, as well as the sums of the ranks for each and the number of cases that fall under each level.

The main results are in the second table, ‘Test Statistics,’ where the Z value and the p value are given. The usual standard for levels of significance is used (p < .05).
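As a cross-check outside SPSS, the Wilcoxon signed-rank procedure can be sketched with SciPy. The before/after values below are hypothetical illustration data, not the family.sav values.

```python
# Hedged sketch: Wilcoxon signed-rank test, the non-parametric counterpart
# of the paired t-test. Data are hypothetical, not family.sav.
from scipy import stats

before = [0.50, 0.47, 0.52, 0.44, 0.49, 0.51]
after  = [0.53, 0.50, 0.55, 0.48, 0.52, 0.54]

# The test ranks the paired differences and compares positive vs negative rank sums
w, p = stats.wilcoxon(before, after)
print(f"W = {w}, two-tailed p = {p:.3f}")
```

Because every hypothetical difference runs in the same direction, the smaller rank sum is zero, which is why the W statistic here is 0 and the result is significant.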

• How many cases are there where HWRATIO is greater than HWRATIO2?

There is only one case in which HWRATIO is greater than HWRATIO2.

• Is there a significant difference between ranked height/weight ratios before and after the exercise/diet program? How do the results compare to the previous parametric analysis?

The results of this Wilcoxon test indicate that there is a difference between height/weight ratios before and after the exercise/diet program, since p is significant (p < .05). Comparing these results to those of the previous parametric analysis, note that they produce the same conclusion: the hypothesis (H1) that there is a difference between height/weight ratios before and after the exercise/diet program should be accepted.


Part 2: ANOVAs

The ANOVAs used in this practical are useful when you want to determine whether there is a significant difference between three or more groups when you have only a single independent variable.

One-way ANOVA for Independent Samples

In this case, we will determine if there is a significant difference in the height-to-weight ratio between the three age groups in the sample in the family.sav file—children, adults, and elderly. We also want to carry out a Tukey’s post-hoc test to identify where those differences lie, if there are any. The procedure is similar to carrying out an unrelated samples t-test. Select ANALYZE > COMPARE MEANS > ONE-WAY ANOVA.

The layout of the dialogue box is the same as the one for unrelated t-tests. First select your dependent variable(s). In this case, move the variable HWRATIO into the ‘Dependent List’ box. Your factor (independent variable) is the variable AGEGRP—move this variable into the ‘Factor’ box.

Before running the analysis, click the Post-hoc button and turn on the ‘Tukey’s-b’ test and ‘Continue.’ Then, click ‘Options’ and tick ‘Descriptive.’ Finally, click ‘Continue’ and ‘OK’ to run the analysis.
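The omnibus part of this analysis can be sketched outside SPSS with SciPy. The three groups below are hypothetical values, not the family.sav age groups; a Tukey-style post-hoc would need an extra step (e.g., `pairwise_tukeyhsd` in statsmodels), which is only named here, not shown.

```python
# Hedged sketch: one-way independent ANOVA across three hypothetical groups.
# Values are illustration data, not family.sav.
from scipy import stats

children = [0.60, 0.62, 0.58, 0.61, 0.59]
adults   = [0.50, 0.52, 0.49, 0.51, 0.48]
elderly  = [0.47, 0.49, 0.46, 0.48, 0.50]

# f_oneway partitions variance into between-groups and within-groups parts
f, p = stats.f_oneway(children, adults, elderly)
print(f"F(2, 12) = {f:.2f}, p = {p:.6f}")
```

With three groups of five, the degrees of freedom are k − 1 = 2 between groups and N − k = 12 within groups, matching the layout of the SPSS ‘ANOVA’ table.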

Output

There are two sections to the results for the one-way ANOVA:

• The second table, ‘ANOVA,’ indicates whether any significant differences exist between the different levels of the independent variable. The between-groups and within-groups sums of squares are listed, as well as the degrees of freedom, the F-ratio, and the F-probability score (significance level). If the F-probability is less than 0.05, then a significant difference exists. In this case, the F-prob. (Sig.) is .000 (i.e., p < .001), so there is a statistically significant difference in height-to-weight ratios between the three age groups.

• The post-hoc test identifies where exactly those differences lie. The last section, ‘Homogeneous Subsets,’ includes a small table with the levels of the independent variable listed down the side. In comparing these levels, it can be seen that children have a significantly higher mean height-to-weight ratio than adults and the elderly.


One-way ANOVA for Related Samples

This procedure is new and different from any others you have done. Start by adding a third height-to-weight ratio variable in your family.sav file, representing the ratios for the subjects some time after they stopped the exercise/diet plan. The data are below:

Variable Name: HWRATIO3

Variable Label: Height/Weight Ratio post-plan

Measure: Scale

Data: see Table 5.2 below

Table 5.2: HWRATIO3 scores (columns: Subject Number, HWRATIO3 score; data as provided in the workbook)

Run a single factor ANOVA by selecting ANALYZE > GENERAL LINEAR MODEL > REPEATED MEASURES.

This dialogue box is different from the ones previously used. First, give a name to the factor being analysed—the thing the three variables have in common. All three variables cover height-to-weight ratios, so that is the factor being analysed. Do the following:


• In the ‘Within-Subject Factor Name’ box, type RATIO.

• In the ‘Number of Levels’ box, type 3 (representing the three variables).

• Press the ‘Add’ button, then the ‘Define’ button.

The next dialogue box is a bit more familiar. In the right-hand column, there are three question marks with a number beside each. Select each of the three variables to be included in the analysis (HWRATIO, HWRATIO2, HWRATIO3), and move them across with the arrow button. Each of the variables replaces one of the question marks, indicating to SPSS-PASW which three variables represent the three levels of the factor RATIO. Then proceed by clicking ‘OK.’

Output

Disregard the sections of the output titled ‘Multivariate Tests’ and ‘Mauchly’s Test of Sphericity.’

Instead, examine the section titled ‘Tests of Within-Subjects Effects’ (fourth table). This table indicates whether any significant differences exist between the different levels of the within-subjects variable. The degrees of freedom and sums of squares are listed, as well as the F-score and its significance level. If the significance level is less than 0.05, then a significant difference exists. In this case, it is .001 (look at the ‘Sphericity Assumed’ row), so we can say that there is a statistically significant difference in height-to-weight ratios between the three times when measurements were taken.
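Because SciPy has no built-in repeated measures ANOVA, the partitioning that SPSS performs behind this table can be sketched by hand. The data below are hypothetical three-occasion measurements for five subjects, not the family.sav values.

```python
# Hedged sketch: one-way repeated measures ANOVA computed manually.
# Rows = subjects, columns = the three measurement occasions (hypothetical).
import numpy as np
from scipy import stats

data = np.array([
    [0.50, 0.53, 0.51],
    [0.47, 0.50, 0.48],
    [0.52, 0.55, 0.53],
    [0.44, 0.48, 0.46],
    [0.49, 0.52, 0.50],
])
n, k = data.shape
grand = data.mean()

ss_cond  = n * ((data.mean(axis=0) - grand) ** 2).sum()  # between conditions
ss_subj  = k * ((data.mean(axis=1) - grand) ** 2).sum()  # between subjects
ss_total = ((data - grand) ** 2).sum()
ss_error = ss_total - ss_cond - ss_subj                  # residual term

df_cond, df_error = k - 1, (n - 1) * (k - 1)
f = (ss_cond / df_cond) / (ss_error / df_error)
p = stats.f.sf(f, df_cond, df_error)                     # upper-tail F probability
print(f"F({df_cond}, {df_error}) = {f:.2f}, p = {p:.6f}")
```

Removing the between-subjects variability from the error term is what makes the repeated measures design more sensitive than the independent-groups ANOVA on the same numbers.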

Now run a post-hoc test to identify where the differences lie, using Paired-Samples t-tests. In this case, run:

HWRATIO & HWRATIO2

HWRATIO & HWRATIO3

HWRATIO2 & HWRATIO3

From these three t-tests, you can determine which of the height-to-weight ratios are significantly different from each other.


Part 3: Hand-in Assignment Problems for Week 5

Below are four problems that you will complete for this week’s Hand-in Assignment. Read each problem and complete the necessary analyses on the data provided. Then, answer the questions for each problem and place your answers (including appropriate charts and graphs) in a Word document. Save your Word document as a .doc file and submit it to the Turnitin link for this week’s Hand-in Assignment after completing all of the problems for this week.

Note: Appendix A at the end of this document contains several sample problems that are already worked out for you. Refer to Appendix A as needed for guidance on how to present your results.

Problem #1: t-Test exercise—Does Meditating Reduce Anxiety?

Twenty teachers (50% female) aged between 23 and 50 years (M = 32.60, SD = 8.40) were referred to a 10-week meditation course with the aim of becoming less anxious. Anxiety (scored from 0 [not anxious] to 10 [highly anxious]) was measured before and after the course. The research hypothesis (H1) was that meditation training would reduce anxiety.

Note: Use the Does meditating reduce anxiety.sav file provided in this week’s Learning Resources for this exercise.

1. What is the design of the study?

2. Specify the dependent and independent variables. State the number of levels for the independent variable, and whether it is a within or between participant factor.

Prior to performing the inferential analysis, ensure that the data are normally distributed using the Kolmogorov-Smirnov test. This is accessed in SPSS-PASW by selecting ANALYZE > DESCRIPTIVE STATISTICS > EXPLORE, which opens the following dialogue box:


Move the relevant variable(s) into the Dependent List box. Click on ‘Plots’ so that the following dialogue box appears:

Tick Normality plots with tests and then click on ‘Continue’ and ‘OK.’
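A normality check can also be sketched in Python. Note that SPSS's ‘Tests of Normality’ table uses the Kolmogorov-Smirnov test with the Lilliefors correction; SciPy's plain `kstest` is not equivalent when the mean and SD are estimated from the data, so the Shapiro-Wilk test is shown below as a common stand-in. The scores are hypothetical, not the assignment data.

```python
# Hedged sketch: Shapiro-Wilk normality test as a stand-in for SPSS's
# corrected K-S test. Scores are hypothetical 0-10 anxiety ratings.
from scipy import stats

anxiety_before = [7, 6, 8, 5, 7, 6, 9, 4, 7, 8]

stat, p = stats.shapiro(anxiety_before)
normal = p > 0.05   # non-significant = no evidence against normality
print(f"W = {stat:.3f}, p = {p:.3f}; treat as normal: {normal}")
```

As with the SPSS table, a non-significant result here is the desired outcome: it means the data do not depart detectably from a normal distribution, so a parametric test is defensible.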

3. Looking at the ‘Tests of Normality’ table, do the Kolmogorov-Smirnov tests indicate that the variables are normally distributed? Document and explain your findings giving the relevant statistics.

Given these findings, perform a ‘Paired-Samples T Test’ to examine the hypothesis that anxiety would reduce following the meditation course.


4. Examine the mean scores and standard deviations. At what point in time is anxiety higher?

5. Now, examine the ‘Paired Samples Test’ table. Is there support for the hypothesis that anxiety decreases after meditation training? Document and explain your findings giving the relevant statistics.

Answer the questions above and place your answers (including appropriate charts and graphs) in a Word document. Save your Word document as a .doc file and submit it to the Turnitin link for this week’s Hand-in Assignment after completing all of the problems for this week.


Problem #2: t-Test exercise—Arachnophobia

Sixty undergraduates (50% female, M age = 21.18 years, SD = 5.11) took part in an experiment that examined whether arachnophobia (fear of spiders) was more pronounced when the individual was faced with a real spider rather than being shown photographs of one. Half of the undergraduates were shown photographs and the others were asked to hold a spider. Their fear ratings were then recorded. The experimental hypothesis (H1) was that fear ratings would be greater in the presence of a live spider.

Note: Use the Arachnophobia.sav file provided in this week’s Learning Resources for this exercise.

1. What is the design of the study?

2. Specify the dependent and independent variables. State the number of levels for the independent variable and whether it is a within or between participant factor.

3. Perform an independent t-test to examine H1. Document and explain your findings giving the relevant statistics and report the implications for the hypothesis.

Answer the questions above and place your answers (including appropriate charts and graphs) in a Word document. Save your Word document as a .doc file and submit it to the Turnitin link for this week’s Hand-in Assignment after completing all of the problems for this week.


Problem #3: ANOVA exercise—Rats in a maze

The effect of amphetamine sulphate on rats’ reaction times was examined by randomly allocating them to one of three conditions: control; 1mg amphetamine sulphate; 2mg amphetamine sulphate. Each was put in a maze and the time taken for them to exit was recorded. The experimental hypothesis (H1) was that administering amphetamine sulphate would reduce the time taken to exit the maze.

Note: Use the Rats in a maze.sav file provided in this week’s Learning Resources for this exercise.

1. What is the design of the study?

2. Specify the dependent and independent variables. State the number of levels for the independent variable, and whether it is a within or between participant factor.

Perform a one-way independent ANOVA to test H1.

3. Looking at the ‘Descriptives’ table, is there a numeric trend that is consistent with the hypothesis?

4. Looking at the ‘ANOVA’ table, is this difference statistically significant? Document and explain your findings giving the relevant statistics.

5. What does the ‘Post Hoc Test’ tell you about the difference in the group means?

6. What can you conclude about H1?

Answer the questions above and place your answers (including appropriate charts and graphs) in a Word document. Save your Word document as a .doc file and submit it to the Turnitin link for this week’s Hand-in Assignment after completing all of the problems for this week.


Problem #4: ANOVA exercise—Meditation and anxiety follow up

You previously examined the efficacy of meditation on anxiety. The data you analysed were collected from 20 teachers who underwent a 10-week meditation course. Participants were asked to indicate their anxiety on an 11-point scale (where 0 = not anxious and 10 = highly anxious) before and after the course. The alternative hypothesis (H1) was that meditation training would reduce anxiety. Consistent with H1, anxiety was significantly lower after the course (M = 3.50, SD = 1.85) than prior to it (M = 4.70, SD = 2.56; t(19) = 3.56, p < .01).

Typically, the effects of these interventions can be short-lived, so it is advisable to collect follow-up data. Ten weeks after participants completed the course, they were contacted and completed the anxiety questionnaire for a third time.

Note: Use the Meditation and anxiety follow up.sav file provided in this week’s Learning Resources for this exercise.

1. State the design of the study.

2. Specify what the dependent and independent variables are.

3. Formulate a hypothesis for the study.

4. Calculate a one-way repeated measures ANOVA. Pay attention to the sphericity assumption and report the correct F and p values. Is there a linear or non-linear trend in the data?

5. Write it up as you would the results section of a report.

Answer the questions above and place your answers (including appropriate charts and graphs) in a Word document. Save your Word document as a .doc file and submit it to the Turnitin link for this week’s Hand-in Assignment after completing all of the problems for this week.


Appendix A: Worked-Out Problems

Refer to this Appendix for guidance on how to present your results.

A1) t-Test example: Transient self-esteem

A self-esteem questionnaire (scored from 0 [low self-esteem] to 100 [high self-esteem]) and an IQ test were administered to 17 male and three female investment bankers (M age = 39.40 years, SD = 4.63). Participants were then given false feedback on their IQ test results: each was informed that their IQ was 15 points lower than it actually was. The bankers then completed the self-esteem questionnaire for a second time. The alternative hypothesis (H1) was that receiving false feedback would temporarily lower participants’ self-esteem. Note: Use the Transient self-esteem.sav file provided in this week’s Learning Resources for this exercise.

What is the design of the study?

It is a one-way repeated measures design.

Specify the dependent and independent variables. State the number of levels for the independent variable and whether it is a within or between participant factor.

The DV is self-esteem. The within participant factor is the manipulation. It has two levels: before/after receiving false feedback.

Perform the appropriate inferential technique to test H1 and report the findings using the standard notation.

Self-esteem was lower after the manipulation (M = 38.60, SD = 20.00) than before it (M = 40.15, SD = 19.86). Since the hypothesis is directional, it can be accepted, t(19) = 1.84, p < .05, one-tailed. Note that SPSS provides the two-tailed probability level (p = .081), which should be halved to provide the one-tailed probability level. This clearly demonstrates the advantage of formulating a directional hypothesis: had no direction been specified, the difference would not have been significant.
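The halving rule described above is simple arithmetic, sketched here for the reported value:

```python
# The halving rule: SPSS reports two-tailed probabilities, so a directional
# hypothesis (with the effect in the predicted direction) halves p.
two_tailed_p = 0.081            # value reported by SPSS in this example
one_tailed_p = two_tailed_p / 2
print(one_tailed_p, one_tailed_p < 0.05)   # 0.0405 True
```

The halved value (.0405) falls below .05, which is why the directional hypothesis can be accepted while a non-directional one could not.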


A2) t-Test example: Gender Differences in IQ

A student had 10 males and 10 females (M age = 21.95 years, SD = 3.55) complete the Weschler Adult Intelligence Scale (WAIS). She hypothesised (H1) that there would be a gender difference in IQ scores.

Note: Use the IQ and Gender.sav file provided in this week’s Learning Resources for this exercise.

What is the design of the study?

It is a one-way independent groups design.

Specify the dependent and independent variables. State the number of levels for the independent variable and whether it is a within or between participant factor.

The DV is IQ and the between participant factor is gender. It has two levels: male/female.

Now perform the appropriate test to examine H1 and report the results of your analysis in the standard notation.

Male IQ (M = 106.50, SD = 14.83) was higher than female IQ (M = 93.50, SD = 12.56). H1 can be accepted since male IQ was significantly higher than female IQ (t(18) = –2.12, p<.05).

A3) ANOVA example: Personality and smoking

A researcher wished to examine whether smoking influenced personality. Participants were 15 smokers, 15 non-smokers and 15 former smokers (42.2% male) aged between 18 and 53 years (M = 22.27, SD = 7.98). Each was assessed on the ‘Big Five’. The alternative hypotheses (H1–H5) were that there would be differences on the personality measures according to smoking status.

Note: Use the Personality and Smoking.sav file provided in this week’s Learning Resources for this exercise.


What is the design of the study?

It is a one-way independent groups design.

Specify the dependent and independent variables. State the number of levels for the independent variable and whether it is a within or between participant factor.

The DVs are: openness; conscientiousness; extroversion; agreeableness; and neuroticism. The between participant factor is smoking status. It has three levels: non-smoker/former smoker/smoker.

Perform the appropriate analysis to test the hypotheses. Remember to enter all five personality measures as dependent variables. Report the results of the analysis using the standard notation. Because you are performing five one-way ANOVAs, you should perform a Bonferroni correction in which you adopt a more conservative probability level of 0.01 (i.e., 0.05/5).
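The Bonferroni correction mentioned above is just a division of the family-wise alpha by the number of tests, sketched here:

```python
# Bonferroni correction: divide the family-wise alpha by the number of tests
# so the chance of a false positive across the whole family stays near alpha.
n_tests = 5                     # five one-way ANOVAs, one per personality trait
alpha = 0.05
bonferroni_alpha = alpha / n_tests
print(bonferroni_alpha)         # 0.01
```

Each individual ANOVA is then judged against the stricter .01 threshold rather than .05.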

There were no group differences on Openness (F(2, 42) = 1.26, p > .05) or Conscientiousness (F(2, 42) = 3.05, p > .05). There was a significant main effect for Extroversion, F(2, 42) = 18.09, p < .001. Non-smokers were less extroverted (M = 2.60, SD = .99) than smokers (M = 5.20, SD = 1.37) and former smokers (M = 4.73, SD = 1.39) (ps < .01), who did not differ from each other (p > .05). There was also a main effect for Agreeableness (F(2, 42) = 9.09, p < .01). Non-smokers were more agreeable (M = 4.87, SD = 1.30) than smokers (M = 3.07, SD = .80) (p < .01) and former smokers (M = 3.87, SD = 1.30) (p < .05). After applying a Bonferroni correction (i.e., adopting a probability level of .01 rather than .05 because there are five one-way ANOVAs), there was no effect for Neuroticism (F(2, 42) = 4.63, p > .01).

Do the inferential analyses support the hypotheses?

The ANOVAs provided some evidence that personality traits differed across the groups, so two of the five hypotheses (one for each personality trait) can be accepted.

A4) ANOVA example: Boredom effect

Note: The following demonstrates how to calculate a one-way repeated measures ANOVA in SPSS and then interpret the SPSS output.

Note: Use the Boredom effect.sav file provided in this week’s Learning Resources for this exercise.


Researchers were interested in the effect of fatigue on a mentally undemanding but boring task. Fifteen students were each seated in an isolated cubicle and told to ‘dot’ a white piece of paper with the tip of a ballpoint pen as many times as they could in one minute. They did this for three trials. Trial 1 took place at the beginning of the experiment. After trial 1, students watched a repetitive screen saver for 15 minutes and were then asked to complete trial 2 of the dotting task. After trial 2, students again watched the screen saver for 15 minutes before completing trial 3 of the dotting task. The number of pen marks in each trial was counted and is provided in Table 3.1 for all participants across the three trials.

Table 3.1: Raw data for the dotting task (columns: Participant, Trial 1, Trial 2, Trial 3; data as provided in the workbook)

The researchers hypothesised that participants would succumb to a boredom effect in which the number of times they dotted the paper would diminish over time (H1).

What is the design of the experiment?

It is a one-way repeated measures design.

Specify the dependent and independent variables. State the number of levels for the independent variable.

The DV is the number of occasions the students dotted the paper. The within participant factor (or IV) is trial (or time). It has three levels: trial 1/trial 2/trial 3.

Entering the data into SPSS

Entering the data in SPSS is straightforward. The data can be entered as they appear in Table 3.1 (though there is no need to enter the participant number). You should bear in mind that each participant’s data are contained in a single row.


Click on Variable View towards the bottom left part of the screen. Recall that this is where you define the variables.

On the first row, provide a Name (without spaces) for the first level of your independent variable (within participant factor). Name this trial1. Moving the cursor to the right, change the Decimals from 2 to 0 (since you only have whole numbers – the participants could not dot the paper half a time). Though trial1 would suffice, you could move the cursor to the Label cell and give it a name with spaces such as Trial 1 or First trial. There is no need to enter anything into the Values box since the data you will enter are meaningful – they relate to the number of times the students dotted the paper. Repeat the process for the two other conditions. Your screen should look something like the one below:

Click on Data View and enter the data. Your screen should look like the one overleaf.


Calculating a one-way repeated measures ANOVA in SPSS

One-way repeated measures ANOVA is accessed through Analyze > General Linear Model > Repeated Measures. This opens a dialogue box where you state what your within participant factor is (trial) and how many levels it has (three). So where factor1 appears by default, change this to Trial. Type 3 into the Number of Levels box and click on ‘Add’. Now click on ‘Define’ to open dialogue box (a):

(a)

(b)

It is important to enter the variables in the correct order. Since participants completed trial 1 first, move Trial 1 [trial1] into the Within-Subjects Variables box first using the arrow button.


Enter Trial 2 [trial2] second and then enter Trial 3 [trial3] so that the dialogue box looks like the one shown in (b).

Now click on Options to reveal dialogue box (c):

(c)

(d)

Move Trial from the Factor(s) and Factor Interactions box into the Display Means for box using the arrow button. Click on the empty square to the left of Compare main effects so that a tick appears, and under Confidence interval adjustment change LSD (none) to Bonferroni. By doing this you have requested a post hoc test, which will be necessary to interpret any significant main effect since there are more than two levels to the within participant factor. If there were only two levels, e.g., trial 1 and trial 2, you could perform a related t-test instead. Click also on the empty square to the left of Descriptive Statistics so that a tick appears. This requests that means and standard deviations are presented in the output. The dialogue box should look like the one shown in (d). Click on Continue and then click on OK.
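The analysis requested through these dialogue boxes can also be run outside SPSS. The sketch below uses statsmodels' AnovaRM with hypothetical scores for five participants (the workbook's analysis uses the 15 participants from Table 3.1):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical scores for five participants -- the workbook's analysis uses
# the 15 participants from Table 3.1.
wide = pd.DataFrame({
    "subject": [1, 2, 3, 4, 5],
    "trial1": [92, 88, 95, 85, 90],
    "trial2": [75, 70, 78, 68, 72],
    "trial3": [73, 71, 74, 66, 73],
})

# AnovaRM wants long format: one row per participant per trial.
long_df = wide.melt(id_vars="subject", var_name="trial", value_name="taps")

# One-way repeated measures ANOVA with 'trial' as the within participant factor.
res = AnovaRM(long_df, depvar="taps", subject="subject", within=["trial"]).fit()
print(res.anova_table)
```

Note that, unlike SPSS, AnovaRM reports only the sphericity-assumed degrees of freedom; it does not apply Greenhouse-Geisser corrections.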

Interpreting SPSS output

a) Within-Subjects Factors

While it’s not immediately apparent, the first table indicates that the within participant factor has three levels. These are trial 1, trial 2 and trial 3.

Within-Subjects Factors
Measure: MEASURE_1

Trial    Dependent Variable
1        trial1
2        trial2
3        trial3


b) Descriptive Statistics

The Descriptive Statistics table provides the means and standard deviations for each condition. Since ANOVA models determine whether differences in mean scores are statistically significant, you should scrutinise them closely. Providing the sphericity assumption is met, the larger the difference in the means, the more likely you are to obtain a significant effect.

It is clear that participants dotted the paper more times in the first trial than on trials 2 or 3 but that performance was fairly consistent on trials 2 and 3 (yet still in the predicted direction). You should create a table in Word to present this information. Never cut and paste SPSS output into your results section.

Descriptive Statistics

           Mean     Std. Deviation   N
Trial 1    90.20     5.990           15
Trial 2    72.40    11.102           15
Trial 3    70.80    11.639           15
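The condition means and standard deviations can be reproduced outside SPSS, which is handy when building the Word table. A minimal pandas sketch with hypothetical scores (the workbook's real data are in Table 3.1):

```python
import pandas as pd

# Hypothetical (participant x trial) scores for illustration only.
data = pd.DataFrame({
    "Trial 1": [92, 88, 95, 85, 90],
    "Trial 2": [75, 70, 78, 68, 72],
    "Trial 3": [73, 71, 74, 66, 73],
})

# Mean, sample standard deviation (what SPSS reports) and N per condition.
summary = data.agg(["mean", "std", "count"]).T
summary.columns = ["Mean", "Std. Deviation", "N"]
print(summary.round(3))
```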

You can ignore the Multivariate Tests table.

c) Mauchly’s Test of Sphericity

This informs you whether the assumption of sphericity has been violated. If ‘Sig.’ is less than .05 (i.e., p < .05) you should use the Greenhouse-Geisser correction to the degrees of freedom. This could be reported along the lines of, “the data failed to satisfy the assumption of sphericity (χ²(2) = 16.19, p < .001), so the values given follow Greenhouse-Geisser corrections (ε = .58).”

Mauchly’s Test of Sphericity (b)
Measure: MEASURE_1

Within Subjects Effect: Trial
Mauchly’s W = .288   Approx. Chi-Square = 16.187   df = 2   Sig. = .000
Epsilon (a): Greenhouse-Geisser = .584   Huynh-Feldt = .605   Lower-bound = .500

Tests the null hypothesis that the error covariance matrix of the orthonormalised transformed dependent variables is proportional to an identity matrix.
a. May be used to adjust the degrees of freedom for the averaged tests of significance. Corrected tests are displayed in the Tests of Within-Subjects Effects table.
b. Design: Intercept. Within Subjects Design: Trial.
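For the curious, the Greenhouse-Geisser epsilon SPSS reports is a function of the covariance matrix of the repeated measures. Below is a rough NumPy sketch of that calculation (the function name gg_epsilon is mine, and this is an illustration rather than a substitute for SPSS output):

```python
import numpy as np

def gg_epsilon(data):
    """Greenhouse-Geisser epsilon for an (n participants x k conditions) array.

    Epsilon ranges from 1/(k - 1) (maximal sphericity violation, the SPSS
    'Lower-bound') up to 1 (sphericity holds perfectly).
    """
    n, k = data.shape
    S = np.cov(data, rowvar=False)          # k x k covariance of the conditions
    C = np.eye(k) - np.ones((k, k)) / k     # centring matrix
    Sc = C @ S @ C                          # double-centred covariance
    return np.trace(Sc) ** 2 / ((k - 1) * np.sum(Sc * Sc))

# Simulated example: independent (roughly spherical) data gives an epsilon
# close to 1, whereas strongly violated sphericity pushes it towards 1/(k-1).
rng = np.random.default_rng(42)
print(gg_epsilon(rng.normal(size=(15, 3))))
```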


d) Tests of Within-Subjects Effects

This table tells you whether you have a significant main effect for trial. Since the assumption of sphericity was violated, you should report the Greenhouse-Geisser values. The relevant pieces of information are in bold. This could be reported as, “there was a significant effect of trial (F(1.17, 16.35) = 26.64, p<.001).”

Tests of Within-Subjects Effects
Measure: MEASURE_1

Source                              Type III SS   df       Mean Square   F        Sig.
Trial         Sphericity Assumed    3478.800      2        1739.400      26.635   .000
              Greenhouse-Geisser    3478.800      1.168    2978.021      26.635   .000
              Huynh-Feldt           3478.800      1.210    2875.810      26.635   .000
              Lower-bound           3478.800      1.000    3478.800      26.635   .000
Error(Trial)  Sphericity Assumed    1828.533      28         65.305
              Greenhouse-Geisser    1828.533      16.354    111.808
              Huynh-Feldt           1828.533      16.935    107.971
              Lower-bound           1828.533      14.000    130.610

e) Tests of Within-Subjects Contrasts

This table indicates whether the trend in the condition means can be summarised by a straight (linear) line or a curved (quadratic) line. As with all inferential tests, if ‘Sig.’ is less than .05 you have statistical support for an effect. In the present case there is evidence for both (since ‘Sig.’ is less than .05 for both).

Tests of Within-Subjects Contrasts
Measure: MEASURE_1

Source        Trial       Type III SS   df    Mean Square   F        Sig.
Trial         Linear      2822.700      1     2822.700      29.104   .000
              Quadratic    656.100      1      656.100      19.513   .001
Error(Trial)  Linear      1357.800      14      96.986
              Quadratic    470.733      14      33.624

To understand this, I have plotted the mean scores over the three trials. This is shown in Figure 3.1. The mean tapping frequency decreases over the trials, but the decreases are not uniform (they cannot be represented perfectly by a straight line): the mean difference between trials 1 and 2 (17.80 taps) is much greater than the mean difference between trials 2 and 3 (1.60 taps). If the differences were uniform (e.g., 17.80 taps between trials 1 and 2 and 17.80 taps between trials 2 and 3), there would only be evidence for a linear trend. You could report this as, “there was evidence for linear (F(1, 14) = 29.10, p < .001) and non-linear (F(1, 14) = 19.51, p < .001) trends in the data”. Note that since the F ratio is larger for the linear term, you have more support for this interpretation.
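These trend tests can be computed by hand: each participant gets a contrast score (their trial scores weighted by the polynomial coefficients), and that score is tested against zero, which reproduces an F(1, n − 1) test. A sketch with hypothetical scores (not the workbook's data):

```python
import numpy as np
from scipy import stats

# Hypothetical (participants x trials) scores for illustration only.
scores = np.array([[92, 75, 73],
                   [88, 70, 71],
                   [95, 78, 74],
                   [85, 68, 66],
                   [90, 72, 73]])

contrasts = {
    "linear":    np.array([-1, 0, 1]),   # straight-line trend weights
    "quadratic": np.array([1, -2, 1]),   # curvature weights
}

for name, c in contrasts.items():
    L = scores @ c                        # one contrast score per participant
    t, p = stats.ttest_1samp(L, 0.0)      # test the contrast against zero
    print(f"{name}: F(1, {len(L) - 1}) = {t ** 2:.2f}, p = {p:.4g}")
```

Squaring the one-sample t statistic gives the contrast F ratio, which is why the printed values have 1 numerator degree of freedom.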


Figure 3.1: Mean ‘dotting’ by trial. [Line graph: dots per minute (70 to 95) on the y-axis against trial (1 to 3) on the x-axis.]

The Tests of Between-Subjects Effects table can be ignored since there is no between participant factor. This will be considered next week when we cover mixed designs.

f) Estimated Marginal Means

Two tables appear under Estimated Marginal Means. The first repeats the mean scores (with standard errors this time) and the second shows which (if any) group differences are significant. If you did not find a statistically significant main effect, then there is no point looking at the pairwise comparisons.

Estimates
Measure: MEASURE_1

                                   95% Confidence Interval
Trial   Mean     Std. Error    Lower Bound    Upper Bound
1       90.200   1.547         86.883         93.517
2       72.400   2.867         66.252         78.548
3       70.800   3.005         64.355         77.245

The Bonferroni adjustment for multiple comparisons controls the Type I error rate by dividing the usual significance level (α = .05) by the number of comparisons made (three in this case); equivalently, each reported p value is multiplied by the number of comparisons.
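The same logic can be sketched with paired t-tests: run each pairwise comparison, then multiply its p value by the number of comparisons (capping at 1) to form the Bonferroni-adjusted p. Hypothetical scores again, not the workbook's data:

```python
from itertools import combinations

import numpy as np
from scipy import stats

# Hypothetical (participants x trials) scores for illustration only.
scores = np.array([[92, 75, 73],
                   [88, 70, 71],
                   [95, 78, 74],
                   [85, 68, 66],
                   [90, 72, 73]])

pairs = list(combinations(range(scores.shape[1]), 2))
for i, j in pairs:
    t, p = stats.ttest_rel(scores[:, i], scores[:, j])  # related t-test
    p_adj = min(p * len(pairs), 1.0)   # Bonferroni: multiply p by no. of tests
    print(f"trial {i + 1} vs trial {j + 1}: t = {t:.2f}, adjusted p = {p_adj:.4g}")
```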

The first two rows show that the difference between trial 1 and trial 2 is significant (since ‘Sig.’ = .000), as is the difference between trial 1 and trial 3 (since ‘Sig.’ = .000). There was no significant difference between trials 2 and 3 (since ‘Sig.’ = .588). This makes sense since the mean difference between trials 1 and 2 is 17.80 taps and the mean difference between trials 1 and 3 is 19.40 taps. In contrast, the difference between trials 2 and 3 is only 1.60 taps. This could be reported as, “pairwise comparisons with Bonferroni adjustment showed that trial 1 produced significantly more taps than trial 2 (p < .001) or trial 3 (p < .001). There was no difference between trials 2 and 3 (p > .05).”

Pairwise Comparisons
Measure: MEASURE_1

                                                              95% Confidence Interval
                                                              for Difference (a)
(I) Trial  (J) Trial  Mean Difference (I-J)  Std. Error  Sig.(a)  Lower Bound  Upper Bound
1          2           17.800*               3.435       .000       8.463       27.137
1          3           19.400*               3.596       .000       9.627       29.173
2          1          -17.800*               3.435       .000     -27.137       -8.463
2          3            1.600                1.178       .588      -1.603        4.803
3          1          -19.400*               3.596       .000     -29.173       -9.627
3          2           -1.600                1.178       .588      -4.803        1.603

Based on estimated marginal means.
*. The mean difference is significant at the .05 level.
a. Adjustment for multiple comparisons: Bonferroni.
