Quantitative Methods
Part 2: Quantitative Methods
Quantitative Methods: An Overview
We now move from research design to more specific methodological decisions. We first look at some early methodological decisions, namely the research site and
selecting participants, and then discuss additional sources of bias. As with the selection of a research design, choosing the most appropriate methods for the study
will determine whether or not the research question is answered. It is critical that the research design and data collection procedures are appropriate given the
research question under investigation.
Quantitative Methods: Choosing Data Sources
There are a great number of quantitative data collection instruments. What they all have in common is that they generate numerical data. Although this may initially appear restrictive, almost anything can be quantified. We can quantify physical data (e.g., sitting time, heart rate, vigour of activity, goals, steps, speed, distance). We can quantify cognitive ability and activity (e.g., IQ tests, standardised ability tests, fMRI brain activation, executive function tasks). We can quantify attitudes and opinions (e.g., rate the extent to which you agree with the following statement…). We can even quantify pre-verbal infants’ thoughts (true story: researchers infer whether infants can comprehend something by measuring how long they look at it, with longer looking times indicating that infants deem the situation unexpected or novel).
Choosing Data Collection Instruments
Data collection instruments refer to the instruments, devices, tools or machines that are used to collect data. In quantitative research these might be things like
questionnaires or surveys (to measure attitudes, frequencies, etc.), tape measures (to measure height or distance), standardised tests (to measure literacy and
numeracy ability or IQ) or exams (to measure students’ competencies).
The instruments used in quantitative research are usually very different from those used in qualitative research. This is primarily for two reasons:
1. Quantitative researchers aim to distill data down to numbers, in order to perform statistical analyses that indicate what was found, how confident we can be in
those results and whether we can generalise these findings to the larger population.
2. Quantitative research typically involves large sample sizes to enhance generalisability (the ability to extend our conclusions to the broader population). To accomplish this, researchers summarise the data statistically. Just imagine trying to identify themes around NAPLAN testing in the 1-hour interview transcripts of 100 participants! Much easier (albeit less informative in terms of depth of understanding) would be to say that, as a group, teachers had generally positive opinions of NAPLAN testing.
In quantitative research it is also vitally important that the data collection instruments produce valid and reliable data. What does valid and reliable mean? These
are important research words that you need to understand.
Validity (accuracy) indicates the accuracy and authenticity of the data. That is, validity involves asking whether we are measuring what we said we were going to measure. Validity is essential for interpreting and generalising research. As an analogy, think of the bulls-eye of a dartboard as what you aim to measure (e.g., students’ literacy competencies). If your adopted measure of students’ literacy competencies is the dart, a valid measure would hit the bulls-eye (e.g., NAPLAN?). A less-valid measure, such as using students’ spelling abilities as a proxy for their overall literacy competencies, would miss the mark.
An example of a valid instrument is the Children’s Leisure Activities Study Questionnaire. For this questionnaire, parents record the amount of time (in hours) their child spends in a normal week participating in specified activities, for example dance, cricket, TV watching or playing video games. The listed activities are then categorised into different intensities (i.e. light, moderate and vigorous) and, from the numbers provided by the parents, the amount of time spent in activities of each intensity can be determined. This is a subjective measure of physical activity. Subjective measures are prone to recall bias; that is, parents are likely to subconsciously overestimate the time their child is active and underestimate the time their child is inactive.
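The categorisation step described above can be sketched in a few lines of code. Note that the activity-to-intensity mapping and the reported hours below are invented for illustration; they are not the questionnaire's actual coding scheme.

```python
# Sketch: summarising parent-reported weekly activity time into
# intensity categories, as a questionnaire like the CLASQ does.

INTENSITY = {                 # hypothetical mapping, for illustration only
    "dance": "moderate",
    "cricket": "moderate",
    "TV watching": "light",
    "video games": "light",
}

def hours_by_intensity(reported_hours):
    """Sum weekly hours into light/moderate/vigorous totals."""
    totals = {"light": 0.0, "moderate": 0.0, "vigorous": 0.0}
    for activity, hours in reported_hours.items():
        totals[INTENSITY[activity]] += hours
    return totals

weekly = {"dance": 2.0, "TV watching": 10.5, "video games": 4.0, "cricket": 3.0}
print(hours_by_intensity(weekly))
# → {'light': 14.5, 'moderate': 5.0, 'vigorous': 0.0}
```

The totals per intensity category are what would then be compared against an objective measure.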
To determine whether the children are actually participating in the activities indicated by their parents, we need to use an objective measure – something like an activity monitor. An activity monitor is a device similar to a pedometer, but it records the intensity of movement as well as the movement itself (whereas a pedometer only records movement). It is normally worn on the right hip for several consecutive days and provides an objective measure of physical activity. There is no way that the monitor can record movement that is not there!
To determine whether the questionnaire is valid, its results need to be compared with the results of the monitor. If the results are similar between the two instruments, then the questionnaire is said to be valid. For example, if the questionnaire showed that a child spent 120 minutes a day in moderate-intensity activity and the monitor showed that the same child spent 125 minutes, the questionnaire is valid. However, if the questionnaire showed that the child spent 260 minutes in moderate-intensity activity each day but the monitor still only showed 125 minutes, then the questionnaire would not be considered valid. In quantitative research it is very important that the research instruments give valid data.
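One common way to quantify this comparison is to correlate the two sets of scores: a correlation near 1 suggests the questionnaire tracks the objective measure well. The minute values below are invented for illustration.

```python
# Sketch: checking a questionnaire against an objective measure (an
# activity monitor) by correlating the two sets of daily minutes.
import statistics

questionnaire = [120, 95, 140, 60, 110]   # parent-reported minutes per day
monitor       = [125, 90, 150, 55, 105]   # activity-monitor minutes per day

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(questionnaire, monitor)
print(f"r = {r:.2f}")  # values near 1 suggest the questionnaire is valid
```

In practice, researchers use more sophisticated agreement statistics than a simple correlation (which ignores systematic over-reporting), but the underlying logic of comparing the two instruments is the same.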
Reliability (consistency) refers to the capacity of a measure to produce similar results over repeated measurements. To continue our dartboard analogy, regardless of whether our measure is valid, a reliable instrument will produce similar results in similar circumstances. That is, whether or not the dart (our adopted measure) hits the bulls-eye, each time you throw that dart it lands in the same spot.
An example of this is a large state-wide exam that is marked by several teachers. We need to know that each teacher will mark the exam using the same criteria and at the same level of strictness (neither harder nor easier than the others). The teachers need to undertake a reliability exercise before they start marking so that every teacher knows the difference between a mark of 7 and a mark of 9. Kervin et al. (2006) describe this well.
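A very simple version of such a reliability check is to have two markers score the same practice scripts and compute how often they agree. The scores below are invented for illustration; real studies use statistics such as Cohen's kappa or intraclass correlation rather than raw agreement.

```python
# Sketch: inter-rater reliability check before marking begins.
# Two teachers mark the same ten practice scripts (scores out of 10).

teacher_a = [7, 9, 6, 8, 5, 9, 7, 6, 8, 7]
teacher_b = [7, 8, 6, 8, 5, 9, 7, 7, 8, 7]

def percent_agreement(a, b, tolerance=0):
    """Proportion of scripts where the two marks differ by <= tolerance."""
    agree = sum(1 for x, y in zip(a, b) if abs(x - y) <= tolerance)
    return agree / len(a)

print(f"Exact agreement:    {percent_agreement(teacher_a, teacher_b):.0%}")   # 80%
print(f"Within-1 agreement: {percent_agreement(teacher_a, teacher_b, 1):.0%}")  # 100%
```

If agreement were poor, the markers would discuss the discrepant scripts and re-calibrate before marking the real exams.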
Quantitative Methods: Choosing Data Sources Revisited
Instruments in more detail
If no suitable research instrument (e.g., questionnaire) exists, how might you start to develop an instrument for data collection? What might be some things that you
would need to consider when you are developing this instrument? Will all participants be able to read? Will all participants understand the questions? How do you make
sure that the questions will be answered as intended? The use of poor instruments (i.e. ones that have not been well designed) in research can be detrimental.
Questionnaires are probably the most popular instruments used in quantitative research, perhaps because they make it easy to collect a large amount of data from a large number of people. However, there are also many limitations associated with questionnaires. For example, the cognitive ability of the participant is usually not known, which may affect the results. Further, it is difficult to know whether participants answered the questions honestly or accurately. These limitations can be overcome if questionnaires are designed and developed correctly. Poorly designed questionnaires are often plagued by additional factors that limit their validity and reliability (e.g., combining two questions into one, leading questions, loaded questions, vague response categories, ambiguous questions), and thus the conclusions that can be drawn from them.
When developing a questionnaire four main questions should be considered:
1. What is the purpose of the questionnaire?
2. What are the specific objectives of the questionnaire?
3. What type of information is needed?
4. What design will be used?
Below are two examples of very different questionnaires.
Example 1: Think about a normal school week and write down how long you spend doing the following activities before and after school each day. Please write your
answers in minutes.
Activity                                Mon   Tues  Wed   Thurs  Fri
Watching TV                             ___   ___   ___   ___    ___
Watching DVDs                           ___   ___   ___   ___    ___
Using the computer for fun              ___   ___   ___   ___    ___
Using the computer for doing homework   ___   ___   ___   ___    ___
Doing homework not on the computer      ___   ___   ___   ___    ___
Reading for fun                         ___   ___   ___   ___    ___
Being tutored                           ___   ___   ___   ___    ___
Doing crafts and hobbies                ___   ___   ___   ___    ___
Example 2: For each section of the review form, please rate the extent to which you disagree or agree with each statement. At the end of this section please detail any comments that you believe explain your ratings, or provide us with suggestions for improvement.

(1 = Strongly Disagree, 2 = Disagree, 3 = Neither Agree nor Disagree, 4 = Agree, 5 = Strongly Agree)

The structure of the website is clear        1  2  3  4  5
The structure of the website is consistent   1  2  3  4  5
It is easy to navigate around the site       1  2  3  4  5
The font is easily read                      1  2  3  4  5
The forum structure is easy to follow        1  2  3  4  5
The weekly planner is easy to use            1  2  3  4  5
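Likert-scale responses like those in Example 2 are typically summarised as a mean rating per statement. The statement labels are abbreviated from Example 2 and the ratings below are invented for illustration.

```python
# Sketch: summarising Likert-scale (1-5) responses per statement.
import statistics

statements = [
    "Structure is clear", "Structure is consistent", "Easy to navigate",
    "Font easily read", "Forum easy to follow", "Planner easy to use",
]
# Each inner list is one respondent's ratings for the six statements.
responses = [
    [4, 5, 4, 3, 4, 5],
    [5, 4, 3, 4, 4, 4],
    [3, 4, 4, 4, 5, 4],
]

for i, statement in enumerate(statements):
    ratings = [r[i] for r in responses]
    print(f"{statement}: mean = {statistics.mean(ratings):.2f}")
```

Whether it is statistically appropriate to average Likert ratings (which are strictly ordinal, not interval, data) is itself a methodological debate; many researchers report medians or frequency counts instead.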
Once these questions have been considered, it is also important to look at good-practice guidelines for survey design. Although a full discussion of these principles is beyond the scope of this subject, Creswell’s (2009) Educational Research text provides a good overview.
Quantitative Methods: Choosing a Research Site
One of the early decisions researchers make when determining their data collection procedures is the site where the data will be collected. This is known as the research site. Innumerable sites are used in educational research, but some of the more common ones include the classroom (or multiple classrooms), a staffroom, a lecture theatre or a sports field. Can you think of others?
Selection of an inappropriate site can significantly affect the data collected. For example, students completing an IQ test could do this in their classroom (before,
during or after class), in the library (which may or may not be quiet and isolated), at home (with or without parents observing) or in the staff room. What impact
might these different settings have on the data collection (that is, students’ resultant IQ scores)? Consider not only noise levels, but also other distractions,
confidentiality, size and appropriateness for all participants to determine the effect of each research site on the data generated.
Quantitative Methods: Choosing Research Participants
A variety of terms are used to describe research participants (for example, students, children, clients and subjects). The different terms relate to the setting where
the research is conducted. In educational research, the word participant is often used.
Research participants can be as varied as the research site. Like the research site, however, appropriate participants must be chosen to ensure that the research
question can be answered. For instance, there is no use choosing students from Year 6 just because they are available if you really want to know about Year 4 students.
The implementation of specific inclusion and exclusion criteria is one way to ensure that the right participants are chosen for the research.
Perhaps the best way to understand inclusion and exclusion criteria is by looking at the following example. We previously spoke about the HIKCUPS study (see
experimental designs). The research question for this study was: What is the effect of the HIKCUPS program on weight status in children? Consider:
• Who would be recruited into this study? Children? Children capable of participating in the HIKCUPS program? Does this mean children of any age? Does this include children with special needs? What about children who have already gone through puberty – can they be included? Does this mean children in any weight category? Or should it involve only overweight and obese children? Do all children have to speak English?
• Do their parents need to be available as well? Does it matter if they can’t participate in the afternoon sessions?
As you can see from these questions above, without specific inclusion and exclusion criteria a number of different participants could be recruited. These decisions
significantly impact the data collected and the outcomes of the research.
Below is an example of appropriate inclusion and exclusion criteria for the HIKCUPS study.
• Inclusion Criteria: Children, 5-9 yrs old, overweight and obese, pre-pubertal, able to participate in an after-school activity program, English speaking,
generally fit and healthy
• Exclusion Criteria: Extreme obesity, medication related to weight, significant dietary restrictions
Sampling Participants
In addition to inclusion and exclusion criteria, it is just as important to have the right number of participants (i.e. not so many as to waste valuable resources, but not so few as to make it impossible to find statistically significant results) and also to know how you will select them. Qualitative and quantitative studies usually involve very different numbers of participants. As a very general rule, most qualitative studies have fewer than 15 participants, whereas most quantitative studies have many more than 15. Why might this be the case? See our discussion of sampling below for a clue.
Depending on the research design, different numbers of participants are needed. As a general rule, descriptive designs usually involve between 20 and 100 participants, while correlational studies need a minimum of 30 participants. In experimental studies this number can be as low as 15 in each group (i.e. 15 in the control group and 15 in the intervention group).
The process of selecting participants is very important. There are many different methods used to choose participants for quantitative studies. Again it is very
important to use the correct method; otherwise you may have participants that are not appropriate for the research study. The four main methods of sampling
participants used in quantitative research are: (1) random sampling; (2) stratified sampling; (3) convenience sampling; and (4) cluster sampling. These different
sampling methods will be discussed through the example below.
Example
• Research question: What are the physical activity levels of preschoolers in the Illawarra region?
The problem with this question is that there are approximately 3500 preschool children in the Illawarra. It would be very difficult to take measurements on all of these children, which means we need to select a representative sample of approximately 100 children. A representative sample is one that reflects the characteristics (e.g., age, gender, socioeconomic status) of the larger population. In this example, let’s say that we could recruit 100 children in a way that is representative of the whole population. In reality this is probably too small, but we will stick with 100 children for this example. How 100 children could be selected using each of the sampling methods is described below.
• Random sampling – random sampling is like pulling names out of a hat. This is the best form of sampling, but also the most difficult to perform. How do you think you could randomly select 100 children from the 3500? We would need a list of every preschool child in the Illawarra and then, using a table of random numbers or a computer’s random number generator, randomly select 100 children. Although it is the best sampling method, it is rarely used in educational research.
• Stratified sampling – we would use this type of sampling if we wanted to look at the differences between groups, for example to answer sub-questions such as: What is the difference between girls and boys? Or what is the difference between 3-year-olds and 4-year-olds? To do this we would need to stratify (divide) the sample so that, in the final analysis, there would be enough participants in each of the groups. This means that we would need to inflate the sample so that there are 100 children in each of the groups (i.e. 100 3-year-old girls, 100 3-year-old boys, 100 4-year-old girls and 100 4-year-old boys).
• Convenience sampling – this is the easiest type of sampling. Using this method you would simply pick the first 100 children that were interested. It would not
matter if they were all from the same area or background, or if they were all boys. This sort of sampling, while common, provides the least representative sample.
• Cluster sampling – in this example, cluster sampling involves choosing a handful of preschools (e.g. 5 preschools) and then taking measurements on 20 children selected at random from each of these preschools. A limitation of this method, however, is that children at any given preschool may be more similar to each other than to children at another preschool. For example, they may all participate in regular structured physical activity, which may bias the results. This needs to be considered in the analysis.
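The four sampling methods above can be sketched in code against an invented population of 3500 preschoolers. Everything here (preschool names, attributes, group sizes) is illustrative only.

```python
# Sketch: four sampling methods applied to an invented population.
import random

random.seed(42)  # fixed seed so the illustration is reproducible

preschools = [f"Preschool {c}" for c in "ABCDEFGHIJ"]
population = [
    {"id": i,
     "age": random.choice([3, 4]),
     "sex": random.choice(["girl", "boy"]),
     "preschool": random.choice(preschools)}
    for i in range(3500)
]

# 1. Random sampling: every child has an equal chance of selection.
random_sample = random.sample(population, 100)

# 2. Stratified sampling: sample 100 within each age-by-sex stratum
#    (so the total sample is inflated to 400).
stratified = []
for age in (3, 4):
    for sex in ("girl", "boy"):
        stratum = [c for c in population if c["age"] == age and c["sex"] == sex]
        stratified.extend(random.sample(stratum, 100))

# 3. Convenience sampling: simply take the first 100 who turn up.
convenience = population[:100]

# 4. Cluster sampling: pick 5 preschools at random, then 20 children
#    at random from each of those preschools.
clusters = random.sample(preschools, 5)
cluster_sample = []
for school in clusters:
    enrolled = [c for c in population if c["preschool"] == school]
    cluster_sample.extend(random.sample(enrolled, 20))

print(len(random_sample), len(stratified), len(convenience), len(cluster_sample))
```

Note how the code makes the practical differences concrete: random and stratified sampling both require a complete list of the population (which is exactly why they are hard to do in practice), whereas convenience and cluster sampling do not.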
Why is a representative sample so important? It is important because having a representative sample allows us (when our research is well designed) to extend our conclusions to the broader population under investigation. To illustrate, say we wanted to know who is going to win the next election, and the means we use to determine this is asking two people on a street corner in Cronulla. Once you know their voting preferences, how confident would you be that you know the outcome of the next election? What if you were to ask 100 people in Cronulla? Still not confident?
What if you surveyed thousands of people, selected at random from all Australian states and territories? Even more, what if you knew that your sample reflected the key characteristics of the larger Australian population (e.g., age, gender, cultural background, religion)? Having a large representative sample such as this allows us to draw conclusions that extend beyond the participants in our study and to have confidence that our results are a true reflection of reality. So although random sampling is by far the most difficult and resource-intensive method of sampling, it remains the gold standard that researchers should strive for.
Quantitative Methods: Other Considerations
Most educational research is conducted in real-life settings. This means that there are often many external variables and potential biases that cannot be controlled, yet need to be considered. These external biases are particularly important in quantitative research, especially when experimental designs are employed. Where these sources of bias cannot be avoided, they should at least be acknowledged when communicating the study’s results. Some of these potential biases include:
• History – Some participants may have previously been trained in the skill or ‘thing’ that you are trying to measure. For example if you are trying to determine
if a new swimming program improves swimming in Year 6 children but half of the children actually attend swimming training outside of school, your results may be
affected.
• Maturation – If some participants have gone through puberty but others have not – this may affect the results.
• Compensatory Rivalry – Participants in one group may try to improve more than the other group (e.g. participants in the control group may try to improve more than the intervention group simply because they have been randomised to the control group).
• Resentful Demoralisation – This is the opposite of compensatory rivalry: participants may simply not try because they were allocated to the control group and therefore think it is not worth trying.
• Evaluation Apprehension – Participants may not answer questions accurately if they are apprehensive about what will happen with the results or who will see the
results.
What is most important to remember is that anything that could cause a change in what a researcher is measuring, independent of the program or intervention, could be a
threat to the internal validity of the study.
Capstone activity
Weekly ‘Capstone’ Activity and Discussion:
We have now discussed a range of designs and characteristics associated with quantitative research. Although we have not exhausted the list of quantitative designs (as an example, at least eight different experimental designs have been proposed), you should now be familiar with at least the basics of some common quantitative designs, including their unique aims and the sorts of research questions they address.
After working through the Moodle content for this week and reading Chapter 5 (the quantitative component) of the text, revisit the area of interest you identified in
the Week 3 capstone activity. Within this area, propose four different research areas/topics that would lend themselves well to different quantitative designs (propose
one topic/area for each of experiment, causal-comparative, correlational and descriptive). Post your responses in the forum.
PLACE THIS ORDER OR A SIMILAR ORDER WITH US TODAY AND GET AN AMAZING DISCOUNT 🙂
6.
Part 2: Quantitative Methods
Quantitative Methods: An Overview
We now move from research design to more specific methodological decisions. We first look at some early methodological decisions, namely the research site and
selecting participants, and then discuss additional sources of bias. As with the selection of a research design, choosing the most appropriate methods for the study
will determine whether or not the research question is answered. It is critical that the research design and data collection procedures are appropriate given the
research question under investigation.
Part 2: Quantitative Methods
Quantitative Methods: Choosing Data Sources
There are a great number of quantitative data collection instruments. What they all have in common is their generation of numerical data. Although this may initially
appear restricting, most anything can be quantified. We can quantify physical data (e.g., sitting time, heart rate, vigour of activity, goals, steps, speed, distance,
etc.) We can quantify cognitive ability and activity (e.g., IQ tests, standardised ability tests, fMRI brain activation, executive function tasks, etc.). We can
quantify attitudes and opinions (e.g., rate the extent to which you agree with the following statement….) We can even quantify pre-verbal infants’ thoughts (true
story: researchers infer whether infants can comprehend something by determining the amount of time they look at it – with longer looking times indicating that infants
deem the situation to be unexpected or novel).
Choosing Data Collection Instruments
Data collection instruments refer to the instruments, devices, tools or machines that are used to collect data. In quantitative research these might be things like
questionnaires or surveys (to measure attitudes, frequencies, etc.), tape measures (to measure height or distance), standardised tests (to measure literacy and
numeracy ability or IQ) or exams (to measure students’ competencies).
The instruments used in quantitative research are usually very different from those used in qualitative research. This is primarily for two reasons:
1. Quantitative researchers aim to distill data down to numbers, in order to perform statistical analyses that indicate what was found, how confident we can be in
those results and whether we can generalise these findings to the larger population.
2. Quantitative research typically involves large sample sizes to enhance generalisability (the ability to extend our conclusions to the broader population). To
accomplish this, they attempt to summarise the data statistically. Just imagine trying to identify themes around NAPLAN testing in the 1-hour interview transcripts of
100 participants! Much easier (albeit less informative in terms of depth of understanding) would be to say that, as a group, teachers had generally positive opinions
of NAPLAN testing.
In quantitative research it is also vitally important that the data collection instruments produce valid and reliable data. What does valid and reliable mean? These
are important research words that you need to understand.
Validity (accuracy) is a term that indicated the accuracy and authenticity of the data. That is, validity involves asking whether we are measuring what we said we are
going to measure. Validity is essential for interpreting and generalising research. As an analogy, think of the bulls-eye of a dartboard as what you aim to measure
(e.g., students’ literacy competencies). If your adopted measure of students’ literacy competencies is the dart, a valid measure would hit the bulls-eye (e.g.,
NAPLAN?) A less-valid measure, such as using students’ spelling abilities as a proxy for their overall literacy competencies, would miss the mark.
An example of a valid instrument is the Children’s Leisure Activities Study Questionnaire. For this questionnaire parents record the amount time (in hours) their child
spends in a normal week participating in specified activities, for example dance, cricket, TV watching, playing video games. The list of activities are then
characterised into different intensities (i.e. light activities, moderate and vigorous) and from the numbers provided by the parents the amount of time spent in the
different intensity activities is able to be determined. This is what you call a subjective measure of physical activity. Subjective measures are prone to recall bias,
that is, it is likely that parent will subconsciously over inflate time their child is active and under represent the time that their child is inactive.
To determine if the children are participating in the different activities as indicated by their parents, we need to use an objective measure – something like an
activity monitor. An activity monitor is a similar device to a pedometer but records movement as well as intensities of movement (where as pedometer only records
movement). It is normally worn on the right hip for several consecutive days. It provides an objective measure of physical activity. There is no way that the monitor
can record movement that is not there!
To determine if the questionnaire is valid, the results from the questionnaire need to be compared with the results of the monitor. If the results are similar between
the two instruments, then the questionnaire is said to be valid. For example, if the questionnaire showed that a child spent 120 minutes a day in moderate intensity
activity and the monitor showed that the same child spent 125 minutes in moderate intensity activity, the questionnaire is valid. However if the questionnaire showed
that the child spent 260 minutes in moderate intensity activity each day but the monitor still only showed 125 minutes, then the questionnaire would not be considered
valid. In quantitative research it is very very important that the research instruments give valid data.
Reliability (consistency) refers to the consistency of the measures to produce similar results over repeated measures. To continue our dartboard analogy, regardless of
whether or not our measure is valid, a reliable instrument will produce similar results in similar circumstances. That is, regardless of whether or not the dart (our
adopted measure) hits the bulls-eye or not, each time you throw that dart it lands in the same spot.
An example of this is a large state-wide exam which is marked by several teachers. We need to know that each teacher will be marking the exam using the same criteria
and making at the same level (either hard or easy). The teachers need to undertake a reliability test before they start marking so that every teacher knows the
difference between a mark of 7 and a mark of 9. Kervin et al. (2006) describes this well.
Part 2: Quantitative Methods
Quantitative Methods: Choosing Data Sources Revisited
Instruments in more detail
If no suitable research instrument (e.g., questionnaire) exists, how might you start to develop an instrument for data collection? What might be some things that you
would need to consider when you are developing this instrument? Will all participants be able to read? Will all participants understand the questions? How do you make
sure that the questions will be answered as intended? The use of poor instruments (i.e. ones that have not been well designed) in research can be detrimental.
Questionnaires are probably the most popular instruments used in quantitative research, perhaps because they make it easy to collect a large amount of data from a
large number of people. However, there are also many limitations associated with questionnaires. For example, the cognitive ability of the participant is usually not
known, which may potentially affect the results. Further, it is difficult to know if participants answered the questions honestly or accurately. Limitations can be
overcome if questionnaires are designed and developed correctly. Poorly designed questionnaires are often plagued by additional factors that limit their validity and
reliability (e.g., combining two questions into one, leading questions, loaded questions, vague response categories, ambiguous questions), and thus the conclusions
that can be drawn from them.
When developing a questionnaire four main questions should be considered:
1. What is the purpose of the questionnaire?
2. What are the specific objectives of the questionnaire?
3. What type of information is needed?
4. What design will be used?
Below are two examples of different questionnaires. They are very different in all aspects.
Example 1: Think about a normal school week and write down how long you spend doing the following activities before and after school each day. Please write your
answers in minutes.
Mon Tues Wed Thurs Fri
Watching TV
Watching DVDs
Using the computer for fun
Using the computer for doing homework
Doing homework not on the computer
Reading for fun
Being tutored
Doing crafts and hobbies
Example 2: For each section of the review form, please rate the extent to which you disagree or agree with each statement. At the end this section please detail any
comments that you believe explain your ratings or provide us with suggestions for improvement.
Strongly Disagree Disagree Neither Agree/ Disagree Agree Strongly Agree
The structure of the website is clear 1 2 3 4 5
The structure of the website is consistent 1 2 3 4 5
It is easy to navigate around the site 1 2 3 4 5
The font is easily read 1 2 3 4 5
The forum structure is easy to follow 1 2 3 4 5
The weekly planner is easy to use 1 2 3 4 5
Once these questions have been considered, it is also important to look at good practice guidelines for survey design. Although a full discussion of these principles
is beyond the scope of this subject, Cresswell’s (2009) Educational Research text provides a good overview.
Part 2: Quantitative Methods
Quantitative Methods: Choosing a Research Site
One of the early things researchers will choose when determining their data collection procedures is the site where the data will be collected. This is known as the
research site. There are an innumerable number of sites that are used in educational research, but some that are more common include the classroom (or multiple
classrooms), a staffroom, lecture theatre or sports field. Can you think of others?
Selection of an inappropriate site can significantly affect the data collected. For example, students completing an IQ test could do this in their classroom (before,
during or after class), in the library (which may or may not be quiet and isolated), at home (with or without parents observing) or in the staff room. What impact
might these different settings have on the data collection (that is, students’ resultant IQ scores)? Consider not only noise levels, but also other distractions,
confidentiality, size and appropriateness for all participants to determine the effect of each research site on the data generated.
Quantitative Methods: Choosing Research Participants
A variety of terms are used to describe research participants (for example, students, children, clients and subjects). The different terms relate to the setting where
the research is conducted. In educational research, the word participant is often used.
Research participants can be as varied as the research site. Like the research site, however, appropriate participants must be chosen to ensure that the research
question can be answered. For instance, there is no point choosing students from Year 6 just because they are available if you really want to know about Year 4 students.
The implementation of specific inclusion and exclusion criteria is one way to ensure that the right participants are chosen for the research.
Perhaps the best way to understand inclusion and exclusion criteria is by looking at the following example. We previously spoke about the HIKCUPS study (see
experimental designs). The research question for this study was: What is the effect of the HIKCUPS program on weight status in children? Consider:
• Who would be recruited into this study? Children? Children capable of participating in the HIKCUPS program? Does this mean children of any age? Does this include children with special needs? What about children who have already gone through puberty: can they be included? Does this mean children in any weight category, or should it involve only overweight and obese children? Do all children have to speak English?
• Do their parents need to be available as well? Does it matter if they can’t participate in the afternoon sessions?
As these questions show, without specific inclusion and exclusion criteria a range of different participants could be recruited, and these decisions significantly affect the data collected and the outcomes of the research.
Below is an example of appropriate inclusion and exclusion criteria for the HIKCUPS study.
• Inclusion Criteria: Children, 5-9 yrs old, overweight and obese, pre-pubertal, able to participate in an after-school activity program, English speaking,
generally fit and healthy
• Exclusion Criteria: Extreme obesity, medication related to weight, significant dietary restrictions
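To see how criteria like these operate in practice, the screening step can be sketched as a simple function that checks each candidate against every inclusion and exclusion criterion. This is an illustrative sketch only: the field names and the dictionary representation are assumptions made for the example, not part of the actual HIKCUPS protocol.

```python
def is_eligible(child):
    """Return True only if every inclusion criterion is met and no
    exclusion criterion applies. (Field names are hypothetical.)"""
    meets_inclusion = (
        5 <= child["age"] <= 9
        and child["weight_category"] in {"overweight", "obese"}
        and not child["pubertal"]
        and child["can_attend_afternoons"]
        and child["english_speaking"]
        and child["generally_healthy"]
    )
    triggers_exclusion = (
        child["extreme_obesity"]
        or child["on_weight_medication"]
        or child["significant_dietary_restrictions"]
    )
    return meets_inclusion and not triggers_exclusion
```

A recruiter could then filter a list of candidate records with `[c for c in candidates if is_eligible(c)]`, keeping all the screening rules in one auditable place.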
Sampling Participants
In addition to inclusion and exclusion criteria, it is just as important to have the right number of participants (not so many that valuable resources are wasted, but not so few that statistically significant results become impossible to detect) and to know how you will select them. Qualitative and quantitative studies usually involve very different numbers of participants. As a very general rule, most qualitative studies have fewer than 15 participants, whereas most quantitative studies have many more than 15. Why might this be the case? See our discussion of sampling below for a clue.
Depending on the research design, different numbers of participants are needed. As a general rule, descriptive designs usually involve between 20 and 100 participants, while correlational studies need a minimum of 30 participants. In experimental studies this number can be as low as 15 in each group (i.e. 15 in the control group and 15 in the intervention group).
The process of selecting participants is also very important. Many different methods are used to choose participants for quantitative studies, and again it is important to use the correct one; otherwise you may recruit participants who are not appropriate for the study. The four main sampling methods used in quantitative research are: (1) random sampling; (2) stratified sampling; (3) convenience sampling; and (4) cluster sampling. Each is discussed through the example below.
Example
• Research question: What are the physical activity levels of preschoolers in the Illawarra region?
The problem with this question is that there are approximately 3500 preschool children in the Illawarra. It would be very difficult to take measurements on all of
these children. This means that we need to select a representative sample of approximately 100 children. A representative sample is one that reflects the
characteristics (e.g., age, gender, socioeconomic status, etc.) of the larger population. In this example, let’s say that we could recruit 100 children in a way that
is representative of the whole population. In reality this is probably too small, but we will stick with 100 children for this example. How 100 children could be
selected using each of the sampling methods is described below.
• Random sampling – random sampling is like pulling numbers out of a hat. It is the best form of sampling but also the most difficult to perform. How do you think you could randomly select 100 children from the 3500? You would need a list of every preschool child in the Illawarra and then, using a table of random numbers or a computer's random number generator, randomly select 100 children from that list. Although it is the best sampling method, it is rarely used in educational research.
• Stratified sampling – we would use this type of sampling if we wanted to look at differences between groups, for example to answer sub-questions such as: What is the difference between girls and boys? Or between 3 year olds and 4 year olds? To do this we would need to stratify (divide) the sample so that, in the final analysis, there are enough participants in each group. This means inflating the sample so that there are 100 children in each group (i.e. 100 3-year-old girls, 100 3-year-old boys, 100 4-year-old girls and 100 4-year-old boys).
• Convenience sampling – this is the easiest type of sampling. Using this method you would simply pick the first 100 children who were interested, regardless of whether they were all from the same area or background, or all boys. This sort of sampling, while common, provides the least representative sample.
• Cluster sampling – in this example, cluster sampling involves choosing a handful of preschools (e.g. 5 preschools) and then taking measurements on 20 children selected at random from each. A limitation of this method is that children at any given preschool may be more similar to one another than to children at another preschool; for example, they may all participate in regular structured physical activity, which may bias the results. This needs to be considered in the analysis.
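The four sampling methods above can be sketched in a few lines of Python. The population below is invented purely for illustration (the attribute names and the even split across 35 preschools are assumptions), but the selection logic mirrors each method as described:

```python
import random

random.seed(42)  # fixed seed so this sketch is reproducible

# Invented population of 3500 preschoolers; attribute values are
# assumptions for illustration, not real data.
population = [
    {"id": i,
     "preschool": f"preschool_{i % 35}",   # 35 preschools of 100 children
     "sex": "girl" if i % 2 == 0 else "boy",
     "age": 3 if i % 5 < 2 else 4}
    for i in range(3500)
]

# 1. Random sampling: draw 100 children from the full list at random.
random_sample = random.sample(population, 100)

# 2. Stratified sampling: 100 children from each sex-by-age group,
#    inflating the total sample to 400.
def stratified_sample(pop, n_per_group):
    groups = {}
    for child in pop:
        groups.setdefault((child["sex"], child["age"]), []).append(child)
    return {key: random.sample(members, n_per_group)
            for key, members in groups.items()}

strata = stratified_sample(population, 100)

# 3. Convenience sampling: simply take the first 100 who come along.
convenience_sample = population[:100]

# 4. Cluster sampling: choose 5 preschools at random, then measure
#    20 randomly chosen children within each chosen preschool.
chosen_schools = random.sample(sorted({c["preschool"] for c in population}), 5)
cluster_sample = [child
                  for school in chosen_schools
                  for child in random.sample(
                      [c for c in population if c["preschool"] == school], 20)]
```

Note how stratified sampling inflates the total sample to 400 (100 per sex-by-age group), while cluster sampling reaches 100 children by visiting only 5 sites.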
Why is a representative sample so important? Because (when our research is well designed) it allows us to extend our conclusions to the broader population under investigation. To illustrate, say we wanted to know who is going to win the next election, and our method is to ask two people on a street corner in Cronulla. Knowing their voting preferences, how confident would you be about the outcome of the next election? What if you were to ask 100 people in Cronulla? Still no?
What if you surveyed thousands of people, selected at random from all Australian states and territories? Even better, what if you knew that your sample reflected the key characteristics of the larger Australian population (e.g., age, gender, cultural background, religion, etc.)? Having a large representative sample such as this allows us to draw conclusions that extend beyond the participants in our study, and to be confident that our results are a true reflection of reality. So although random sampling is by far the most difficult and resource-intensive method of sampling, it remains the gold standard that researchers should strive for.
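The intuition in the election example can be checked with a toy simulation. Here we invent a population of 100,000 voters, 52% of whom favour party A (a made-up figure), and watch how a random sample's estimate behaves as the sample grows:

```python
import random

random.seed(7)  # fixed seed so this sketch is reproducible

# Invented population: 100,000 voters, 52% favouring party "A".
population = ["A"] * 52_000 + ["B"] * 48_000

def estimate_support(sample_size):
    """Estimate the proportion favouring A from a simple random sample."""
    sample = random.sample(population, sample_size)
    return sample.count("A") / sample_size

# Two people on a street corner, 100 locals, then a large random sample.
for n in (2, 100, 5_000):
    print(f"n = {n:>5}: estimated support = {estimate_support(n):.3f}")
```

With n = 2 the estimate can only be 0, 0.5 or 1; by n = 5,000 it reliably lands within about a percentage point of the true 52%. A representative sample adds the further requirement that the sampling frame actually covers the whole population, not just one street corner in Cronulla.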
Quantitative Methods: Other Considerations
Most educational research is conducted in real-life settings. This means that there are often many external variables and potential biases that cannot be controlled yet need to be considered. These biases are particularly important in quantitative research, especially when experimental designs are employed. Where sources of bias cannot be avoided, they should at least be acknowledged when communicating the study's results. Some of these potential biases include:
• History – Some participants may have previously been trained in the skill or 'thing' that you are trying to measure. For example, if you are trying to determine whether a new swimming program improves swimming in Year 6 children but half of the children already attend swimming training outside school, your results may be affected.
• Maturation – If some participants have gone through puberty but others have not – this may affect the results.
• Compensatory Rivalry – Participants in one group may try to improve more than those in the other group (e.g. participants in the control group may try to improve more than the intervention group simply because they have been randomised to the control group).
• Resentful Demoralisation – This is the opposite of compensatory rivalry: participants may simply not try because they were allocated to the control group and therefore think it is not worth trying.
• Evaluation Apprehension – Participants may not answer questions accurately if they are apprehensive about what will happen with the results or who will see the
results.
What is most important to remember is that anything that could cause a change in what a researcher is measuring, independent of the program or intervention, could be a
threat to the internal validity of the study.
Capstone activity
Weekly ‘Capstone’ Activity and Discussion:
We have now discussed a range of designs and characteristics associated with quantitative research. Although we have not exhausted the list of quantitative designs (at least eight different experimental designs have been proposed, for example), you should now be familiar with the basics of some common quantitative designs, including their unique aims and the sorts of research questions they address.
After working through the Moodle content for this week and reading Chapter 5 (the quantitative component) of the text, revisit the area of interest you identified in
the Week 3 capstone activity. Within this area, propose four different research areas/topics that would lend themselves well to different quantitative designs (propose
one topic/area for each of experiment, causal-comparative, correlational and descriptive). Post your responses in the forum.