PART 1 Foundations
1-1 The Language of Research, 5
1-1a Types of Studies, 5
1-1b Time in Research, 6
1-1c Types of Relationships, 6
1-1d Variables, 8
1-1e Hypotheses, 9
1-1f Types of Data, 11
1-1g The Unit of Analysis, 12
1-1h Research Fallacies, 13
1-2 Philosophy of Research, 13
1-2a Structure of Research, 14
1-2b Deduction and Induction, 16
1-2c Positivism and Post-Positivism, 18
1-2d Introduction to Validity, 20
1-3 Ethics in Research, 23
1-3a The Language of Ethics, 24
1-4 Conceptualizing, 24
1-4a Problem Formulation, 25
1-4b Concept Mapping, 27
1-4c Logic Models, 29
You have to begin somewhere. Unfortunately, you can only be in one place at a time and, even less fortunately for you, you happen to be right here right now, so you may as well consider this a place to begin, and what better place to begin than an introduction? Here’s where I cover all the stuff you think you already know, and probably should already know, but most likely don’t know as well as you think you do.
Let’s begin with the big historical picture before you get started in your study of how things are currently done. For now, we might suggest that the beginning lies in the struggle of our species to survive and that our current point of view results from the evolution of learning to survive via that most basic of research strategies: trial and error. It may be coincidental, but at present one of the most important research designs is called the clinical trial, in which clinical researchers attempt to establish control over sources of error in their methods and results. Philosophy is another long-term influence on how research is done. Philosophy has always been a critical aspect of the sorting and rating of “truth,” whether in science, politics, religion, or other domains. For example, if you have heard any discussion of whether “intelligent design” should be included in school curricula, you know that philosophy remains very much a part of the public discussion of what we might call our cultural knowledge base.
Fast-forwarding to the here and now, it is clear that modern research involves an eclectic blending of an enormous range of skills and activities. To be a good social researcher, you must be able to work well with a variety of people, understand the specific methods used to conduct research, understand the core of the subject that you are studying as well as its boundaries, convince someone to give you the funds to study it, stay on track and on schedule, speak and write persuasively, and on and on. You’ll be challenged to learn and apply language, concepts, and skills in the context of complex environments that include your study participants, fellow students, research advisors, ethical and scientific review boards, funding agencies, and the general public. Few paths are more difficult, interesting, or potentially worthwhile.
Perhaps language acquisition is the most fundamental step needed to get your foot in the door of modern research methods. This chapter begins with the basic language of research, the introductory vocabulary you need to read the rest of the text. With the basic terminology under your belt, I’ll show you some of the underlying philosophical issues that drive the research endeavor. Social research always occurs in a social context. It is a human endeavor. I’ll point out the critical ethical issues that affect the researcher, research participants, and the research effort generally. For instance, you’ll consider how much risk your research participants can be placed under and how you must ensure their privacy. In the section on conceptualization, I answer questions such as where do research problems come from and how do I develop a research question?
That ought to be enough to get you started. At least it ought to be enough to get you thoroughly confused. But don’t worry, there’s stuff that’s far more confusing than this yet to come.

1-1 The Language of Research

Learning about research is a lot like learning about anything else. To start, you need to learn the jargon people use, the big controversies they fight over, and the different factions that define the major players. For example, there has been a tendency for researchers to classify methods as either qualitative or quantitative (basically, words versus numbers). Recently, more attention has been paid to the “debate” about the relative merits of various approaches under these headings due to the emergence of qualitative methods into the mainstream of social science methodology. Yet, as we will see, qualitative versus quantitative is a very limited dichotomy, whether we are discussing types of data or types of research strategies. There is now recognition of the value of using mixed methods, depending on the nature of the study goals or research questions. The term mixed methods means that more than one kind of method, most often a combination of qualitative and quantitative methods, is used in the study.
1-1a Types of Studies
Research projects take three basic forms:
1. Descriptive studies are designed primarily to describe what is going on or what exists. Public opinion polls that seek only to describe the proportion of people who hold various opinions are primarily descriptive in nature. For instance, if you want to know what percent of the population would vote for a Democrat or a Republican in the next presidential election, you are simply interested in describing something.
2. Relational studies look at the relationships between two or more variables. A public opinion poll that compares what proportion of males and females say they would vote for a Democratic or a Republican candidate in the next presidential election is essentially studying the relationship between gender and voting preference.
3. Causal studies are designed to determine whether one or more variables (for example, a program or treatment variable) causes or affects one or more outcome variables. If you performed a public opinion poll to try to determine whether a recent political advertising campaign changed voter preferences, you would essentially be studying whether the campaign (cause) changed the proportion of voters who would vote Democratic or Republican (effect).
The three study types can be viewed as cumulative. That is, a relational study assumes that you can first describe (by measuring or observing) each of the variables you are trying to relate. A causal study assumes that you can describe both the cause and effect variables and that you can show that they are related to each other. Causal studies are probably the most demanding of the three types of studies to perform.
Probably the vast majority of applied social research consists of descriptive and relational studies. So why should we worry about the more difficult studies? Because for most social sciences, it is important to go beyond simply looking at the world or looking at relationships. Instead, you might like to be able to change the world, to improve it, and eliminate some of its major problems. If you want to change the world (especially if you want to do this in an organized, scientific way), you are automatically interested in causal relationships—ones that tell how causes (for example, programs and treatments) affect the outcomes of interest. In fact, the era of evidence-based practice, described fully in the final chapter, has elevated the status of causal studies in every field and in every part of the world. Evidence-based practice means that what we do to intervene in the lives of others is a result of studies that have given us a strong empirical base for predicting that a program or treatment will cause a specific kind of change in the lives of participants, clients, or patients.


1-1b Time in Research
Time is an important element of any research design, and here I want to introduce one of the most fundamental distinctions in research design nomenclature: cross-sectional versus longitudinal studies. A cross-sectional study is one that takes place at a single point in time. In effect, you are taking a slice or cross-section of whatever it is you’re observing or measuring. A longitudinal study is one that takes place over time—you have at least two (and often more) waves (distinct times when observations are made) of measurement in a longitudinal design.
A further distinction is made between two types of longitudinal designs: repeated measures and time series. There is no universally agreed-upon rule for distinguishing between these two terms; but in general, if you have two or a few waves of measurement, you are using a repeated measures design. If you have many waves of measurement over time, you have a time series. How many is many? Usually, you wouldn’t use the term time series unless you had at least twenty waves of measurement, and often far more. Sometimes the way you distinguish between these is with the analysis methods you would use. Time series analysis requires that you have at least twenty or so observations over time. Repeated measures analyses aren’t often used with as many as twenty waves of measurement.
1-1c Types of Relationships
A relationship refers to the correspondence between two variables (see the section on variables later in this chapter). When you talk about types of relationships, you can mean that in at least two ways: the nature of the relationship or the pattern of it.
The Nature of a Relationship Although all relationships tell about the correspondence between two variables, one special type of relationship holds that the two variables are not only in correspondence, but that one causes the other. This is the key distinction between a simple correlational relationship and a causal relationship. A correlational relationship simply says that two things perform in a synchronized manner. For instance, economists often talk of a correlation between inflation and unemployment. When inflation is high, unemployment also tends to be high. When inflation is low, unemployment also tends to be low. The two variables are correlated; but knowing that two variables are correlated does not tell whether one causes the other. It is documented, for instance, that there is a correlation between the number of roads built in Europe and the number of children born in the United States. Does that mean that if fewer children are desired in the United States there should be a cessation of road building in Europe? Or, does it mean that if there aren’t enough roads in Europe, U.S. citizens should be encouraged to have more babies? Of course not. (At least, I hope not.) While there is a relationship between the number of roads built and the number of babies, it’s not likely that the relationship is a causal one.
This leads to consideration of what is often termed the third-variable problem. In this example, it may be that a third variable is causing both the building of roads and the birthrate and causing the correlation that is observed. For instance, perhaps the general world economy is responsible for both. When the economy is good, more roads are built in Europe and more children are born in the United States. The key lesson here is that you have to be careful when you interpret correlations. If you observe a correlation between the number of hours students use the computer to study and their grade point averages (with high computer users getting higher grades), you cannot assume that the relationship is causal—that computer use improves grades. In this case, the third variable might be socioeconomic status—richer students, who have greater resources at their disposal, tend to both use computers and make better grades. Resources drive both use and grades; computer use doesn’t cause the change in the grade point averages.

FIGURE 1-1d A curvilinear relationship (severity of illness on the vertical axis, drug dosage level on the horizontal axis)
Patterns of Relationships Several terms describe the major different types of patterns one might find in a relationship. First, there is the case of no relationship at all. When there is no relationship between two variables, if you know the values on one variable, you don’t know anything about the values on the other. For instance, I suspect that there is no relationship between the length of the lifeline on your hand and your grade point average. If I know your GPA, I don’t have any idea how long your lifeline is. Figure 1-1a shows the case where there is no relationship.
Then, there is the positive relationship. In a positive relationship, high values on one variable are associated with high values on the other, and low values on one are associated with low values on the other. Figure 1-1b shows an idealized positive relationship between years of education and the salary one might expect to be making.
On the other hand, a negative relationship implies that high values on one variable are associated with low values on the other. This is also sometimes termed an inverse relationship. Figure 1-1c shows an idealized negative relationship between a measure of self-esteem and a measure of paranoia in psychiatric patients.
These are the simplest types of relationships that might typically be estimated in research. However, the pattern of a relationship can be more complex than these. For instance, Figure 1-1d shows a relationship that changes over the range of both variables, a curvilinear relationship. In this example, the horizontal axis represents dosage of a drug for an illness and the vertical axis represents a severity-of-illness measure. As the dosage rises, the severity of illness goes down; but at some point, the patient begins to experience negative side effects associated with too high a dosage, and the severity of illness begins to increase again.


1-1d Variables
You won’t be able to do much in research unless you know how to talk about variables. A variable is any entity that can take on different values. Okay, so what does that mean? Anything that can vary can be considered a variable. For instance, age can be considered a variable because age can take different values for different people or for the same person at different times. Similarly, country can be considered a variable because a person’s country can be assigned a value.
Variables aren’t always quantitative or numerical. The variable gender consists of two text values: male and female, which we would naturally think of as a qualitative variable because we are distinguishing between qualities of a variable rather than quantities. If it is useful, quantitative values can be assigned in place of the text values, but it’s not necessary to assign numbers for something to be a variable. It’s also important to realize that variables aren’t the only things measured in the traditional sense. For instance, in much social research and in program evaluation, the treatment or program is considered to consist of one or more variables. (That is, the cause can be considered a variable.) An educational program can have varying amounts of time on task, classroom settings, student-teacher ratios, and so on. Therefore, even the program can be considered a variable, which can be made up of a number of subvariables.
An attribute is a specific value on a variable. For instance, the variable sex or gender has two attributes: male and female. Or, the variable agreement might be defined as having five attributes:
1 = strongly disagree
2 = disagree
3 = neutral
4 = agree
5 = strongly agree
Another important distinction having to do with the term variable is the distinction between an independent and dependent variable. This distinction is particularly relevant when you are investigating cause-effect relationships. It took me the longest time to learn this distinction. (Of course, I’m someone who gets confused about the signs for arrivals and departures at airports—do I go to arrivals because I’m arriving at the airport, or does the person I’m picking up go to arrivals because they’re arriving on the plane?) I originally thought that an independent variable was one that would be free to vary or respond to some program or treatment, and that a dependent variable must be one that depends on my efforts (that is, it’s the treatment). However, this is entirely backward! In fact, the independent variable is what you (or nature) manipulates—a treatment or program or cause. The dependent variable is what you presume to be affected by the independent variable—your effects or outcomes. For example, if you are studying the effects of a new educational program on student achievement, the program is the independent variable and your measures of achievement are the dependent ones.
Finally, there are two traits of variables that should always be achieved. Each variable should be exhaustive, meaning that it should include all possible answerable responses. For instance, if the variable is religion and the only options are Protestant, Jewish, and Muslim, there are quite a few religions that haven’t been included. The list does not exhaust all possibilities. On the other hand, if you exhaust all the possibilities with some variables—religion being one of them—you would simply have too many responses. The way to deal with this is to explicitly list the most common attributes and then use a general category like Other to account for all remaining ones. In addition to being exhaustive, the attributes of a variable should be mutually exclusive, meaning that no respondent should be able to have two attributes simultaneously. While this might seem obvious, it is often rather tricky in practice. For instance, you might be tempted to represent the variable Employment Status with the two attributes employed and unemployed. However, these attributes are not necessarily mutually exclusive—a person who is looking for a second job while employed might legitimately be able to check both attributes! But don’t researchers often use questions on surveys that ask the respondent to check all that apply and then list a series of categories? Yes, but technically speaking, each of the categories in a question like that is its own variable and is treated dichotomously as either checked or unchecked—as attributes that are mutually exclusive.
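The “check all that apply” point can be made concrete with a short sketch. Here each category becomes its own dichotomous variable whose two attributes (checked = 1, unchecked = 0) are mutually exclusive; the category names are hypothetical, and in a real survey anything outside the list would be routed to an Other category rather than raising an error.

```python
# Hypothetical employment-status categories, one variable each:
CATEGORIES = ["employed_full_time", "employed_part_time",
              "looking_for_work", "student", "retired"]

def encode_response(checked):
    """Turn a respondent's set of checked boxes into one 0/1
    variable per category (checked / unchecked are mutually
    exclusive for each variable)."""
    unknown = set(checked) - set(CATEGORIES)
    if unknown:
        # A real instrument would need an Other category to stay exhaustive.
        raise ValueError(f"categories not on the list: {unknown}")
    return {cat: int(cat in checked) for cat in CATEGORIES}

# A respondent employed part-time while looking for a second job
# can legitimately register both, because each is its own variable:
row = encode_response({"employed_part_time", "looking_for_work"})
print(row)
```

The single ambiguous variable (employed versus unemployed) has been replaced by several unambiguous dichotomous ones, which is exactly the “technically speaking” resolution described above.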
1-1e Hypotheses
A hypothesis is a specific statement of prediction. It describes in concrete (rather than theoretical) terms what you expect to happen in your study. Not all studies have hypotheses. Sometimes a study is designed to be exploratory (see Section 1-2b, Deduction and Induction, later in this chapter). There is no formal hypothesis, and perhaps the purpose of the study is to explore some area more thoroughly to develop some specific hypothesis or prediction that can be tested in future research. A single study may have one or many hypotheses.
Actually, whenever I talk about a hypothesis, I am really thinking simultaneously about two hypotheses. Let’s say that you predict that there will be a relationship between two variables in your study. The way to set up the hypothesis test is to formulate two hypothesis statements: one that describes your prediction and one that describes all the other possible outcomes with respect to the hypothesized relationship. Your prediction is that variable A and variable B will be related. (You don’t care whether it’s a positive or negative relationship.) Then the only other possible outcome would be that variable A and variable B are not related. Usually, the hypothesis that you support (your prediction) is called the alternative hypothesis, and the hypothesis that describes the remaining possible outcomes is termed the null hypothesis. Sometimes a notation such as HA or H1 is used to represent the alternative hypothesis (your prediction), and H0 to represent the null case. You have to be careful here, though. In some studies, your prediction might well be that there will be no difference or change. In this case, you are essentially trying to find support for the null hypothesis and you are opposed to the alternative.
If your prediction specifies a direction, the null hypothesis covers both the no-difference prediction and the prediction of the opposite direction. This is called a one-tailed hypothesis. For instance, let’s imagine that you are investigating the effects of a new employee-training program and that you believe one of the outcomes will be that there will be less employee absenteeism. Your two hypotheses might be stated something like this:
The null hypothesis for this study is

H0: As a result of the XYZ company employee-training program, there will either be no significant difference in employee absenteeism or there will be a significant increase,

which is tested against the alternative hypothesis:

HA: As a result of the XYZ company employee-training program, there will be a significant decrease in employee absenteeism.
In Figure 1-2, this situation is illustrated graphically. The alternative hypothesis—your prediction that the program will decrease absenteeism—is shown there. The null must account for the other two possible conditions: no difference, or an increase in absenteeism. The figure shows a hypothetical distribution of absenteeism differences. The term one-tailed refers to the tail of the distribution on the outcome variable.
When your prediction does not specify a direction, you have a two-tailed hypothesis. For instance, let’s assume you are studying a new drug treatment for depression. The drug has gone through some initial animal trials but has not yet been tested on humans. You believe (based on theory and the previous research) that

FIGURE 1-2 A one-tailed hypothesis (hypothetical distribution of absenteeism differences, with the single rejection tail on the “less” side of no change)
FIGURE 1-3 A two-tailed hypothesis
the drug will have an effect, but you are not confident enough to hypothesize a direction and say the drug will reduce depression. (After all, you’ve seen more than enough promising drug treatments come along that eventually were shown to have severe side effects that actually worsened symptoms.) In this case, you might state the two hypotheses like this:
The null hypothesis for this study is:

H0: As a result of 300mg/day of the ABC drug, there will be no significant difference in depression,

which is tested against the alternative hypothesis:

HA: As a result of 300mg/day of the ABC drug, there will be a significant difference in depression.
Figure 1-3 illustrates this two-tailed prediction for this case. Again, notice that the term two-tailed refers to the tails of the distribution for your outcome variable.
The important thing to remember about stating hypotheses is that you formulate your prediction (directional or not), and then you formulate a second hypothesis that is mutually exclusive of the first and incorporates all possible alternative outcomes for that case. When your study analysis is completed, the idea is that you will have to choose between the two hypotheses. If your prediction was correct, you would (usually) reject the null hypothesis and accept the alternative. If your original prediction was not supported in the data, you will accept the null hypothesis and reject the alternative. The logic of hypothesis testing is based on these two basic principles:
• Two mutually exclusive hypothesis statements that, together, exhaust all possible outcomes need to be developed.
• The hypotheses must be tested so that one is necessarily accepted and the other rejected.
Okay, I know it’s a convoluted, awkward, and formalistic way to ask research questions, but it encompasses a long tradition in statistics called the hypothetico-deductive model, and sometimes things are just done because they’re traditions. And anyway, if all of this hypothesis testing was easy enough that anybody could understand it, how do you think statisticians and methodologists would stay employed?
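The one-tailed versus two-tailed distinction can also be seen numerically. The sketch below computes normal-approximation p-values for an observed test statistic; the z value of -1.8 is invented for illustration, and a real analysis would typically use a t distribution or an exact test as appropriate.

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def p_value(z, tails):
    """p-value for an observed z statistic.

    tails="one" tests only the predicted direction (here a decrease,
    so the lower tail); tails="two" tests for any difference at all.
    """
    if tails == "one":
        return normal_cdf(z)             # area in the lower tail only
    return 2 * (1 - normal_cdf(abs(z)))  # area in both tails

# Suppose absenteeism dropped by z = -1.8 standard errors:
z = -1.8
print(f"one-tailed p = {p_value(z, 'one'):.3f}")  # ~0.036
print(f"two-tailed p = {p_value(z, 'two'):.3f}")  # ~0.072
```

The same data are significant at the conventional .05 level under the one-tailed (directional) hypothesis but not under the two-tailed one, which is why the direction of the prediction must be stated before the data are analyzed.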
1-1f Types of Data
Data will be discussed in lots of places in this text, but here I just want to make a fundamental distinction between two types of data: qualitative and quantitative. Typically, data is called quantitative if it is in numerical form and qualitative if it is not. Notice that qualitative data could be much more than just words or text. Photographs, videos, sound recordings, and so on, can be considered qualitative data.
Personally, while I find the distinction between qualitative and quantitative data to have some utility, I think most people draw too hard a distinction, and that can lead to all sorts of confusion. In some areas of social research, the qualitative-quantitative distinction has led to protracted arguments with the proponents of each arguing the superiority of their kind of data over the other. The quantitative types argue that their data is hard, rigorous, credible, and scientific. The qualitative proponents counter that their data is sensitive, nuanced, detailed, and contextual.
For many of us in social research, this kind of polarized debate has become less than productive. In addition, it obscures the fact that qualitative and quantitative data are intimately related to each other. All quantitative data is based upon qualitative judgments; and all qualitative data can be described and manipulated numerically. For instance, think about a common quantitative measure in social research—a self-esteem scale where the respondent rates a set of self-esteem statements on a 1-to-5 scale. Even though the result is a quantitative score, think of how qualitative such an instrument is. The researchers who developed such instruments had to make countless judgments in constructing them: how to define self-esteem; how to distinguish it from other related concepts; how to word potential scale items; how to make sure the items would be understandable to the intended respondents; what kinds of contexts they could be used in; what kinds of cultural and language constraints might be present; and so on. Researchers who decide to use such a scale in their studies have to make another set of judgments: how well the scale measures the intended concept; how reliable or consistent it is; how appropriate it is for the research context and intended respondents; and so on. Believe it or not, even the respondents make many judgments when filling out such a scale: what various terms and phrases mean; why the researcher is giving this scale to them; how much energy and effort they want to expend to complete it; and so on. Even the consumers and readers of the research make judgments about the self-esteem measure and its appropriateness in that research context. What may look like a simple, straightforward, cut-and-dried quantitative measure is actually based on lots of qualitative judgments made by many different people.
On the other hand, all qualitative information can be easily converted into quantitative, and many times doing so would add considerable value to your research. The simplest way to do this is to divide the qualitative information into categories and number them! I know that sounds trivial, but even that simple nominal enumeration can enable you to organize and process qualitative information more efficiently. As an example, you might take text information (say, excerpts from transcripts) and sort these excerpts into piles of similar statements. When you perform something as easy as this simple grouping or piling task, you can describe the results quantitatively. For instance, Figure 1-4 shows that if you had ten statements and grouped these into five piles, you could describe the piles using a 10 x 10 table of 0s and 1s. If two statements were placed together in the same pile, you would put a 1 in their row-column juncture. If two statements were placed in different piles, you would use a 0. The resulting matrix or table describes the grouping of the ten statements in terms of their similarity. Even though the data in this example consists of qualitative statements (one per card), the result of this simple qualitative procedure (grouping similar excerpts into the same piles) is quantitative in nature. “So what?” you ask. Once you have the data in numerical form, you can manipulate it numerically. For instance, you could have five different judges sort the ten excerpts and obtain a 0-1 matrix like this for each judge. Then you could average the five matrices into a single one that shows the proportions of judges who grouped each pair together. This proportion could be considered an estimate of the similarity (across independent judges) of the excerpts. While this might not seem too exciting or useful, it is exactly this kind of procedure that is used as an integral part of the process of developing concept maps of ideas for groups of people (something that is useful). Concept mapping is described later in this chapter.

FIGURE 1-4 Example of how you can convert qualitative sorting information into quantitative data
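The pile-sorting procedure just described can be sketched in a few lines of Python. For brevity, this example uses six statements and two judges rather than ten and five; the statements, judges, and piles are all invented for illustration.

```python
def cooccurrence(piles, n_statements):
    """Build an n x n 0/1 matrix: 1 where two statements share a pile."""
    matrix = [[0] * n_statements for _ in range(n_statements)]
    for pile in piles:
        for i in pile:
            for j in pile:
                matrix[i][j] = 1
    return matrix

def average(matrices):
    """Element-wise mean across the judges' 0/1 matrices: the proportion
    of judges who grouped each pair of statements together."""
    k = len(matrices)
    n = len(matrices[0])
    return [[sum(m[i][j] for m in matrices) / k for j in range(n)]
            for i in range(n)]

# Two hypothetical judges sorting six statements (numbered 0..5):
judge1 = [{0, 1, 2}, {3, 4}, {5}]
judge2 = [{0, 1}, {2, 3, 4}, {5}]
similarity = average([cooccurrence(j, 6) for j in (judge1, judge2)])

print(similarity[0][1])  # 1.0: both judges put statements 0 and 1 together
print(similarity[1][2])  # 0.5: only one judge put statements 1 and 2 together
```

Each cell of the averaged matrix is exactly the similarity proportion described in the text, and a matrix like this is the raw input to the concept-mapping procedure discussed later in the chapter.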
1-1g The Unit of Analysis
One of the most important ideas in a research project is the unit of analysis. The unit of analysis is the major entity that you are analyzing in your study. For instance, any of the following could be a unit of analysis in a study:
• Individuals
• Groups
• Artifacts (books, photos, newspapers)

• Geographical units (town, census tract, state)
• Social interactions (dyadic relations, divorces, arrests)
Why is it called the unit of analysis and not something else (like, the unit of sampling)? Because it is the analysis you do in your study that determines what the unit is. For instance, if you are comparing the children in two classrooms on achievement test scores, the unit is the individual child because you have a score for each child. On the other hand, if you are comparing the two classes on classroom climate, your unit of analysis is the group, in this case the classroom, because you have a classroom climate score only for the class as a whole and not for each individual student.
For different analyses in the same study, you may have different units of analysis. If you decide to base an analysis on student scores, the individual is the unit. However, you might decide to compare average classroom performance. In this case, since the data that goes into the analysis is the average itself (and not the individuals’ scores), the unit of analysis is actually the group. Even though you had data at the student level, you use aggregates in the analysis. In many areas of social research, these hierarchies of analysis units have become particularly important and have spawned a whole area of statistical analysis sometimes referred to as hierarchical modeling. This is true in education, for instance, where a researcher might compare classroom performance data but collect achievement data at the individual student level.
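The same data can feed analyses at either unit, as this small sketch shows (the classrooms and scores are invented for illustration):

```python
# Achievement scores collected at the individual-student level:
scores = {
    "class_a": [78, 85, 91, 62, 88],
    "class_b": [81, 79, 95, 70, 84],
}

# Unit of analysis = individual: every student score enters the analysis.
all_students = [s for roster in scores.values() for s in roster]
print(len(all_students))  # 10 data points, one per student

# Unit of analysis = group: only the classroom averages enter.
class_means = {c: sum(v) / len(v) for c, v in scores.items()}
print(class_means)  # 2 data points, one per classroom
```

Nothing about the data collection changed between the two analyses; only the unit that the analysis operates on did, which is exactly why the concept is named for the analysis rather than the sampling.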
1-1h Research Fallacies
A fallacy is an error in reasoning, usually based on mistaken assumptions. Researchers are familiar with all the ways they could go wrong and the fallacies they are susceptible to. Here, I discuss two of the most important.
The ecological fallacy occurs when you make conclusions about individuals based only on analyses of group data. For instance, assume that you measured the math scores of a particular classroom and found that they had the highest average score in the district. Later (probably at the mall) you run into one of the kids from that class and you think to yourself, “She must be a math whiz.” Aha! Fallacy! Just because she comes from the class with the highest average doesn’t mean that she is automatically a high scorer in math. She could be the lowest math scorer in a class that otherwise consists of math geniuses.
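The math-class example can be put in numbers (all scores below are invented): the class with the highest average can still contain the lowest-scoring student in the district.

```python
# One class of near-geniuses with a single struggling student,
# and one thoroughly average class:
district = {
    "class_x": [98, 97, 99, 96, 45],
    "class_y": [80, 82, 78, 81, 79],
}

means = {c: sum(v) / len(v) for c, v in district.items()}
best_class = max(means, key=means.get)

print(best_class)                 # class_x has the top average...
print(min(district[best_class]))  # ...yet also holds the district's lowest score
```

Inferring "math whiz" from the group average for any particular member of class_x is exactly the ecological fallacy: the group-level statistic says nothing certain about the individual.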
An exception fallacy is sort of the reverse of the ecological fallacy. It occurs when you reach a group conclusion on the basis of exceptional cases. This kind of fallacious reasoning is at the core of a lot of sexism and racism. The stereotype is of the guy who sees a woman make a driving error and concludes that women are terrible drivers. Wrong! Fallacy!
Both of these fallacies point to some of the traps that exist in research and in everyday reasoning. They also point out how important it is to do research. It is important to determine empirically how individuals perform, rather than simply rely on group averages. Similarly, it is important to look at whether there are correlations between certain behaviors and certain groups.
1-2 Philosophy of Research
You probably think of research as something abstract and complicated. It can be, but you’ll see (I hope) that if you understand the different parts or phases of a research project and how these fit together, it’s not nearly as complicated as it may seem at first glance. A research project has a well-known structure: a beginning, middle, and end. I introduce the basic phases of a research project in Section 1-2a, Structure of Research. Here, I also introduce some important distinctions in research: the different types of questions you can ask in a research project and the major components or parts of a research project.

14 PART 1 Foundations

Before the modern idea of research emerged, there was a term for what philosophers used to call research: logical reasoning. So, it should come as no surprise that some of the basic distinctions in logic have carried over into contemporary research. In Section 1-2b, Deduction and Induction, I discuss how two major logical systems, the inductive and deductive methods of reasoning, are related to modern research.
Okay, you knew that no introduction would be complete without considering something having to do with assumptions and philosophy. (I thought I very cleverly snuck in the stuff about logic in the last paragraph.) All research is based on assumptions about how the world is perceived and how you can best come to understand it. Of course, nobody really knows how you can best understand the world, and philosophers have been arguing about that question for at least two millennia now, so all I’m going to do is look at how most contemporary social scientists approach the question of how you know about the world around you. Two major philosophical schools of thought, positivism and post-positivism, are considered; these are especially important perspectives for contemporary social research. (I’m only considering positivism and post-positivism here because these are the major schools of thought. Forgive me for not considering the hotly debated alternatives like relativism, subjectivism, hermeneutics, deconstructivism, constructivism, feminism, and so on.)
Quality is one of the most important issues in research. I introduce the idea of validity to refer to the quality of various conclusions you might reach based on a research project. Here’s where I have to give you the pitch about validity. When I mention validity, most students roll their eyes, curl up into a fetal position, or go to sleep. They think validity is just something abstract and philosophical (and I guess it is at some level). But I think if you can understand validity—the principles that are used to judge the quality of research—you’ll be able to do much more than just complete a research project. You’ll be able to be a virtuoso at research because you’ll have an understanding of why you need to do certain things to ensure quality. You won’t just be plugging in standard procedures you learned in school—sampling method X, measurement tool Y—you’ll be able to help create the next generation of research technology.
1-2a Structure of Research
Most research projects share the same general structure. You might think of this structure as following the shape of an hourglass as shown in Figure 1-5. The research process usually starts with a broad area of interest, the initial problem that the researcher wishes to study. For instance, the researcher could be interested in how to use computers to improve the performance of students in mathematics, but this initial interest is far too broad to study in any single research project. (It might not even be addressable in a lifetime of research.) The researcher has to narrow the question down to one that can reasonably be studied in a research project. This might involve formulating a hypothesis or a focus question. For instance, the researcher might hypothesize that a particular method of computer instruction in math will improve the ability of elementary school students in a specific district. At the narrowest point of the research hourglass, the researcher is engaged in direct measurement or observation of the question of interest. This is what makes research empirical, meaning that it is based on observations and measurements of reality—on what you perceive of the world around you.
Once the basic data is collected, the researcher begins trying to understand it, usually by analyzing it in a variety of ways. Even for a single hypothesis, there are a number of analyses a researcher might typically conduct. At this point, the researcher begins to formulate some initial conclusions about what happened as a result of the computerized math program. Finally, the researcher often attempts to address the original broad question of interest by generalizing from the results of this specific study to other related situations. For instance, on the basis of strong results indicating that the math program had a positive effect on student performance, the researcher might conclude that other school districts similar to the one in the study might expect similar results.

FIGURE 1-5 The hourglass metaphor for the research process: begin with broad questions; narrow down, focus in; operationalize; observe; analyze data; reach conclusions; generalize back to questions
Components of a Study What are the basic components or parts of a research study? Here, I’ll describe the basic components involved in a causal study. Because causal studies presuppose descriptive and relational questions, many of the components of causal studies will also be found in descriptive and relational studies.
Most social research originates from some general problem or question. You might, for instance, be interested in which programs enable the unemployed to get jobs. Usually, the problem is broad enough that you could not hope to address it adequately in a single research study. Consequently, the problem is typically narrowed down to a more specific research question that can be addressed. Social research is theoretical, meaning that much of it is concerned with developing, exploring, or testing the theories or ideas that social researchers have about how the world operates. The research question is often stated in the context of one or more theories that have been advanced to address the problem. For instance, you might have the theory that ongoing support services are needed to assure that the newly employed remain employed. The research question is the central issue being addressed in the study and is often phrased in the language of theory. For instance, a research question might be:
Is a program of supported employment more effective (than no program at all) at keeping newly employed persons on the job?
The problem with such a question is that it is still too general to be studied directly. Consequently, in most research, an even more specific statement, called a hypothesis, is developed that describes in operational terms exactly what you think will happen in the study (see Section 1-1e, Hypotheses). For instance, the hypothesis for your employment study might be something like the following:
The Metropolitan Supported Employment Program will significantly increase rates of employment after six months for persons who are newly employed (after being out of work for at least 1 year) compared with persons who receive no comparable program.


Notice that this hypothesis is specific enough that a reader can understand quite well what the study is trying to assess.
In causal studies, there are at least two major variables of interest: the cause and the effect. Usually the cause is some type of event, program, or treatment. A distinction is made between causes that the researcher can control (such as a program) versus causes that occur naturally or outside the researcher’s influence (such as a change in interest rates, or the occurrence of an earthquake). The effect is the outcome that you wish to study. For both the cause and effect, a distinction is made between the idea of them (the construct) and how they are actually manifested in reality. For instance, when you think about what a program of support services for the newly employed might be, you are thinking of the construct. On the other hand, the real world is not always what you think it is. In research, a distinction is made between your view of an entity (the construct) and the entity as it exists (the operationalization). Ideally, the two should agree.

Social research is always conducted in a social context. Researchers ask people questions, observe families interacting, or measure the opinions of people in a city. The units that participate in the project are important components of any research project. Units are directly related to the question of sampling. In most projects, it’s impossible to involve all of the people it is desirable to involve. For instance, in studying a program of support services for the newly employed, you can’t possibly include in your study everyone in the world, or even in the country, who is newly employed. Instead, you have to try to obtain a representative sample of such people. When sampling, a distinction is made between the theoretical population of interest and the final sample that is actually included in the study. Usually the term units refers to the people who are sampled and from whom information is gathered, but for some projects the units are organizations, groups, or geographical entities like cities or towns. Sometimes the sampling strategy is multilevel; a number of cities are selected and within them families are sampled.
In causal studies, the interest is in the effects of some cause on one or more outcomes. The outcomes are directly related to the research problem; usually the greatest interest is in outcomes that are most reflective of the problem. In the hypothetical supported-employment study, you would probably be most interested in measures of employment—is the person currently employed, or, what is his or her rate of absenteeism?
Finally, in a causal study, the effects of the cause of interest (for example, the program) are usually compared to other conditions (for example, another program or no program at all). Thus, a key component in a causal study concerns how you decide which units (people) receive the program and which are placed in an alternative condition. This issue is directly related to the research design that you use in the study. One of the central themes in research design is determining how people wind up in or are placed in the various programs or treatments that you are comparing. These, then, are the major components in a causal study:
• The research problem.
• The research question.
• The program (cause).
• The units.
• The outcomes (effect).
• The design.

deductive: Top-down reasoning that works from the more general to the more specific.

1-2b Deduction and Induction
In logic, a distinction is often made between two broad methods of reasoning known as the deductive and inductive approaches.
Deductive reasoning works from the more general to the more specific (see Figure 1-6). Sometimes this is informally called a top-down approach. You might begin with thinking up a theory about your topic of interest. You then narrow that down into more specific hypotheses that you can test. You narrow down even further when you collect observations to address the hypotheses. This ultimately leads you to be able to test the hypotheses with specific data—a confirmation (or not) of your original theories.

FIGURE 1-6 A schematic representation of deductive reasoning: theory → hypothesis → observation → confirmation

FIGURE 1-7 A schematic representation of inductive reasoning
Inductive reasoning works the other way, moving from specific observations to broader generalizations and theories (see Figure 1-7). Informally, this is sometimes called a bottom-up approach. (Please note that it’s bottom up and not bottoms up, which is the kind of thing the bartender says to customers when he’s trying to close for the night!) In inductive reasoning, you begin with specific observations and measures, begin detecting patterns and regularities, formulate some tentative hypotheses that you can explore, and finally end up developing some general conclusions or theories.
Deductive and inductive reasoning correspond to other ideas that have been around for a long time: nomothetic, which denotes laws or rules that pertain to the general case (nomos in Greek); and idiographic, which refers to laws or rules that relate to individuals. In any event, the point here is that most social research is concerned with the nomothetic—the general case—rather than the individual. Individuals are often studied, but usually there is interest in generalizing to more than just the individual.
These two methods of reasoning have a different feel to them when you’re conducting research. Inductive reasoning, by its nature, is more open-ended and


exploratory, especially at the beginning. Deductive reasoning is narrower in nature and is concerned with testing or confirming hypotheses. Even though a particular study may look like it’s purely deductive (for example, an experiment designed to test the hypothesized effects of some treatment on some outcome), most social research involves both inductive and deductive reasoning processes at some time in the project. In fact, it doesn’t take a rocket scientist to see that you could assemble the two graphs from Figures 1-6 and 1-7 into a single circular one that continually cycles from theories down to observations and back up again to theories. Even in the most constrained experiment, the researchers might observe patterns in the data that lead them to develop new theories.
1-2c Positivism and Post-Positivism
Let’s start this brief discussion of the philosophy of science with a simple distinction between epistemology and methodology. The term epistemology comes from the Greek word episteme, their term for knowledge. In simple terms, epistemology is the philosophy of knowledge or of how you come to know. Methodology is also concerned with how you come to know, but is much more practical in nature. Methodology is focused on the specific ways—the methods—you can use to try to understand the world better. Epistemology and methodology are intimately related: the former involves the philosophy of how you come to know the world and the latter involves the practice.
When most people in society think about science, they think about someone in a white lab coat working at a lab bench mixing up chemicals. They think of science as boring and cut-and-dried, and they think of the scientist as narrow-minded and esoteric (the ultimate nerd—think of the humorous but nonetheless mad scientist in the Back to the Future movies, for instance). Many of the stereotypes about science come from a period when science was dominated by a particular philosophy—positivism—that tended to support some of these views. Here, I want to suggest (no matter what the movie industry may think) that science has moved on in its thinking into an era of post-positivism, where many of those stereotypes of the scientist no longer hold up.
Let’s begin by considering what positivism is. In its broadest sense, positivism is a rejection of metaphysics (I leave it to you to look up that term if you’re not familiar with it). Positivism holds that the goal of knowledge is simply to describe the phenomena that are experienced. The purpose of science is simply to stick to what can be observed and measured. Knowledge of anything beyond that, a positivist would hold, is impossible. When I think of positivism (and the related philosophy of logical positivism), I think of the behaviorists in mid-20th century psychology. These were the mythical rat runners who believed that psychology could study only what could be directly observed and measured. Since emotions, thoughts, and so on, can’t be directly observed (although it may be possible to measure some of the physical and physiological accompaniments), these were not legitimate topics for a scientific psychology. B. F. Skinner argued that psychology needed to concentrate only on the positive and negative reinforcers of behavior to predict how people will behave; everything else in between (like what the person is thinking) is irrelevant because it can’t be measured.
In a positivist view of the world, science was seen as the way to get at truth, to understand the world well enough to predict and control it. The world and the universe were deterministic; they operated by laws of cause and effect that scientists could discern if they applied the unique approach of the scientific method. Science was largely a mechanistic or mechanical affair. Scientists use deductive reasoning to postulate theories that they can test. Based on the results of their studies, they may learn that their theory doesn’t fit the facts well and so they need to revise their theory to better predict reality. The positivist believed in empiricism—the idea that observation and measurement were the core of the scientific endeavor. The key

approach of the scientific method is the experiment, the attempt to discern natural laws through direct manipulation and observation.
Okay, I am exaggerating the positivist position (although you may be amazed at how close to this some of them actually came) to make a point. Things have changed in the typical views of science since the middle part of the 20th century. Probably the most important has been the shift away from positivism into what is termed post-positivism. By post-positivism, I don’t mean a slight adjustment to or revision of the positivist position; post-positivism is a wholesale rejection of the central tenets of positivism. A post-positivist might begin by recognizing that the way scientists think and work and the way you think in your everyday life are not distinctly different. Scientific reasoning and common sense reasoning are essentially the same process. There is no essential difference between the two, only a difference in degree. Scientists, for example, follow specific procedures to assure that observations are verifiable, accurate, and consistent. In everyday reasoning, you don’t always proceed so carefully. (Although, if you think about it, when the stakes are high, even in everyday life you become much more cautious about measurement. Think of the way most responsible parents keep continuous watch over their infants, noticing details that nonparents would never detect.)
In a post-positivist view of science, certainty is no longer regarded as attainable. Thus, much contemporary social research is probabilistic, or based on probabilities. The inferences made in social research have probabilities associated with them; they are seldom meant to be considered as covering laws that pertain to all cases. Part of the reason statistics has become so dominant in social research is that it enables the estimation of the probabilities for the situations being studied.
One of the most common forms of post-positivism is a philosophy called critical realism. A critical realist believes that there is a reality independent of a person’s thinking about it that science can study. (This is in contrast with a subjectivist, who would hold that there is no external reality—each of us is making this all up.) Positivists were also realists. The difference is that the post-positivist critical realist recognizes that all observation is fallible and has error and that all theory is revisable. In other words, the critical realist is critical of a person’s ability to know reality with certainty. Whereas the positivist believed that the goal of science was to uncover the truth, the post-positivist critical realist believes that the goal of science is to hold steadfastly to the goal of getting it right about reality, even though this goal can never be perfectly achieved.
Because all measurement is fallible, the post-positivist emphasizes the importance of multiple measures and observations, each of which may possess different types of error, and the need to use triangulation across these multiple error sources to try to get a better bead on what’s happening in reality. The post-positivist also believes that all observations are theory-laden and that scientists (and everyone else, for that matter) are inherently biased by their cultural experiences, worldviews, and so on. This is not cause to despair, however. Just because I have my worldview based on my experiences and you have yours doesn’t mean that it is impossible to translate from each other’s experiences or understand each other. That is, post-positivism rejects the relativist idea of the incommensurability of different perspectives, the idea that people can never understand each other because they come from different experiences and cultures. Most post-positivists are constructivists who believe that you construct your view of the world based on your perceptions of it. Because perception and observation are fallible, all constructions must be imperfect. So what is meant by objectivity in a post-positivist world? Positivists believed that objectivity was a characteristic that resided in the individual scientist. Scientists are responsible for putting aside their biases and beliefs and seeing the world as it really is. Post-positivists reject the idea that any individual can see the world perfectly as it really is. Everyone is biased and all observations are affected (theory-laden). The best hope for achieving objectivity is to triangulate across multiple fallible perspectives. Thus, objectivity is not the characteristic of an individual; it is inherently a


social phenomenon. It is what multiple individuals are trying to achieve when they criticize each other’s work. Objectivity is never achieved perfectly, but it can be approached. The best way to improve objectivity is to work publicly within the context of a broader contentious community of truth-seekers (including other scientists) who criticize each other’s work. The theories that survive such intense scrutiny are a bit like the species that survive in the evolutionary struggle. (This theory is sometimes called evolutionary epistemology or the natural selection theory of knowledge and holds that ideas have survival value and that knowledge evolves through a process of variation, selection, and retention.) These theories have adaptive value and are probably as close as the human species can come to being objective and understanding reality.
Clearly, all of this stuff is not for the faint of heart. I’ve seen many a graduate student get lost in the maze of philosophical assumptions that contemporary philosophers of science argue about. Don’t think that I believe this is not important stuff; but, in the end, I tend to turn pragmatist on these matters. Philosophers have been debating these issues for thousands of years, and there is every reason to believe that they will continue to debate them for thousands of years more. Practicing researchers should check in on this debate from time to time. (Perhaps every hundred years or so would be about right.) Researchers should think about the assumptions they make about the world when they conduct research; but in the meantime, they can’t wait for the philosophers to settle the matter. After all, they do have their own work to do.
1-2d Introduction to Validity
Validity can be defined as the best available approximation to the truth of a given proposition, inference, or conclusion. The first thing to ask is: “validity of what?” When people think about validity in research, they tend to think in terms of research components. You might say that a measure is a valid one, that a valid sample was drawn, or that the design had strong validity, but all of those statements are technically incorrect. Measures, samples, and designs don’t have validity—only propositions can be said to be valid. Technically, you should say that a measure leads to valid conclusions or that a sample enables valid inferences, and so on. It is a proposition, inference, or conclusion that can have validity.
Researchers make lots of different inferences or conclusions while conducting research. Many of these are related to the process of doing research and are not the major hypotheses of the study. Nevertheless, like the bricks that go into building a wall, these intermediate processes and methodological propositions provide the foundation for the substantive conclusions that they wish to address. For instance, virtually all social research involves measurement or observation, and, no matter what researchers measure or observe, they are concerned with whether they are measuring what they intend to measure or with how their observations are influenced by the circumstances in which they are made. They reach conclusions about the quality of their measures—conclusions that will play an important role in addressing the broader substantive issues of their study. When researchers talk about the validity of research, they are often referring to the many conclusions they reach about the quality of different parts of their research methodology.
Validity is typically subdivided into four types. Each type addresses a specific methodological question. To understand the types of validity, you have to know something about how researchers investigate a research question. Because all four validity types are really only operative when studying causal questions, I will use a causal study to set the context.
Figure 1-8 shows that two realms are involved in research. The first, on the top, is the land of theory. It is what goes on inside your head. It is where you keep your theories about how the world operates. The second, on the bottom, is the land of observations. It is the real world into which you translate your ideas: your programs, treatments, measures, and observations. When you conduct research, you are continually flitting back and forth between these two realms, between what you think about the world and what is going on in it. When you are investigating a cause-effect relationship, you have a theory (implicit or otherwise) of what the cause is (the cause construct). For instance, if you are testing a new educational program, you have an idea of what it would look like ideally. Similarly, on the effect side, you have an idea of what you are ideally trying to affect and measure (the effect construct). But each of these—the cause and the effect—has to be translated into real things, into a program or treatment and a measure or observational method. The term operationalization is used to describe the act of translating a construct into its manifestation. In effect, you take your idea and describe it as a series of operations or procedures. Now, instead of it being only an idea in your mind, it becomes a public entity that others can look at and examine for themselves. It is one thing, for instance, for you to say that you would like to measure self-esteem (a construct). But when you show a ten-item paper-and-pencil self-esteem measure that you developed for that purpose, others can look at it and understand more clearly what you intend by the term self-esteem.

FIGURE 1-8 The major realms and components of research
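As a loose analogy, operationalization replaces a vague idea with an explicit, inspectable procedure. The sketch below invents a two-item stand-in for a ten-item self-esteem measure; the items and the simple sum-scoring rule are hypothetical illustrations, not a validated scale:

```python
# Hypothetical item responses on a 1-5 agreement scale (invented items).
responses = {
    "I feel I have a number of good qualities": 4,
    "I am able to do things as well as most people": 5,
    # a real ten-item measure would list eight more items here
}

def self_esteem_score(answers):
    # The operational definition: "self-esteem" becomes, concretely,
    # the sum of the item ratings -- a public procedure others can examine.
    return sum(answers.values())

print(self_esteem_score(responses))  # 9 for the two items shown
```

The point is not the particular scoring rule but that the construct now has a concrete, criticizable form.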
Now, back to explaining the four validity types. They build on one another, with two of them (conclusion and internal) referring to the land of observation on the bottom of Figure 1-8, one of them (construct) emphasizing the linkages between the bottom and the top, and the last (external) being primarily concerned about the range of the theory on the top.
Imagine that you want to examine whether use of a World Wide Web virtual classroom improves student understanding of course material. Assume that you took these two constructs, the cause construct (the Web site) and the effect construct (understanding), and operationalized them, turned them into realities by constructing the Web site and a measure of knowledge of the course material. Here are the four validity types and the question each addresses:
FIGURE 1-9 The validity staircase, showing the major question for each type of validity

• Conclusion Validity: In this study, is there a relationship between the two variables? In the context of the example, the question might be worded: In this study, is there a relationship between the Web site and knowledge of course material? There are several conclusions or inferences you might draw to answer such a question. You could, for example, conclude that there is a relationship. You might conclude that there is a positive relationship. You might infer that there is no relationship. You can assess the conclusion validity of each of these conclusions or inferences.
• Internal Validity: Assuming that there is a relationship in this study, is the relationship a causal one? Just because you find that use of the Web site and knowledge are correlated, you can’t necessarily assume that Web site use causes the knowledge. Both could, for example, be caused by the same factor. For instance, it may be that wealthier students, who have greater resources, would be more likely to have access to a Web site and would excel on objective tests. When you want to make a claim that your program or treatment caused the outcomes in your study, you can consider the internal validity of your causal claim.
• Construct Validity: Assuming that there is a causal relationship in this study, can you claim that the program reflected your construct of the program well and that your measure reflected well your idea of the construct of the measure? In simpler terms, did you implement the program you intended to implement and did you measure the outcome you wanted to measure? In yet other terms, did you operationalize well the ideas of the cause and the effect? When your research is over, you would like to be able to conclude that you did a credible job of operationalizing your constructs—you can assess the construct validity of this conclusion.
• External Validity: Assuming that there is a causal relationship in this study between the constructs of the cause and the effect, can you generalize this effect to other persons, places, or times? You are likely to make some claims that your research findings have implications for other groups and individuals in other settings and at other times. When you do, you can examine the external validity of these claims.
Notice how the question that each validity type addresses presupposes an affirmative answer to the previous one. This is what I mean when I say that the validity types build on one another. Figure 1-9 shows the idea of cumulativeness as a staircase, along with the key question for each validity type.
For any inference or conclusion, there are always possible threats to validity—reasons the conclusion or inference might be wrong. Ideally, you try to reduce the plausibility of the most likely threats to validity, thereby leaving as most plausible the conclusion reached in the study. For instance, imagine a study examining

whether there is a relationship between the amount of training in a specific technology and subsequent rates of use of that technology. Because the interest is in a relationship, it is considered an issue of conclusion validity. Assume that the study is completed and no significant correlation between amount of training and adoption rates is found. On this basis, it is concluded that there is no relationship between the two. How could this conclusion be wrong—that is, what are the threats to validity? For one, it’s possible that there isn’t sufficient statistical power to detect a relationship even if it exists. Perhaps the sample size is too small or the measure of amount of training is unreliable. Or maybe assumptions of the correlational test are violated given the variables used. Perhaps there were random irrelevancies in the study setting or random heterogeneity in the respondents that increased the variability in the data and made it harder to see the relationship of interest. The inference that there is no relationship will be stronger—have greater conclusion validity—if you can show that these alternative explanations are not credible. The distributions might be examined to see whether they conform with assumptions of the statistical test, or analyses conducted to determine whether there is sufficient statistical power.
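The statistical-power threat can be made concrete with a small simulation. The sketch below is illustrative only (the function name, effect size, and sample sizes are my own, not from the text): it estimates, by repeated random sampling, the probability that a two-sided Fisher z test detects a true correlation of a given size.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation_power(n, true_r, alpha_z=1.96, trials=2000):
    """Estimate the power to detect a true correlation of size
    true_r with a sample of n, via Monte Carlo simulation."""
    cov = [[1.0, true_r], [true_r, 1.0]]
    hits = 0
    for _ in range(trials):
        # Draw n paired observations with the specified true correlation.
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        r = np.corrcoef(x, y)[0, 1]
        # Two-sided test of r = 0 using the Fisher z-transform.
        z = np.arctanh(r) * np.sqrt(n - 3)
        if abs(z) > alpha_z:
            hits += 1
    return hits / trials
```

With a modest true correlation (say 0.3), a sample of 20 detects it only a minority of the time, while a sample of 150 detects it nearly always—exactly the "insufficient statistical power" threat described above: a real relationship can easily be missed with a small sample.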
The theory of validity and the many lists of specific threats provide a useful scheme for assessing the quality of research conclusions. The theory is general in scope and applicability, well-articulated in its philosophical suppositions, and virtually impossible to explain adequately in a few minutes. As a framework for judging the quality of evaluations, it is indispensable and well worth understanding.
1-3 Ethics in Research
This is a time of profound change in the understanding of the ethics of applied social research. From the time immediately after World War II until the early 1990s, there was a gradually developing consensus about the key ethical principles that should underlie the research endeavor. Two marker events stand out (among many others) as symbolic of this consensus. The Nuremberg War Crimes Trial following World War II brought to public view the ways German scientists had used captive human subjects as subjects in often gruesome experiments. In the 1950s and 1960s, the Tuskegee Syphilis Study involved the withholding of known effective treatment for syphilis from African-American participants who were infected. Events like these forced the reexamination of ethical standards and the gradual development of a consensus that potential human subjects needed to be protected from being used as guinea pigs in scientific research.
By the 1990s, the dynamics of the situation changed. Cancer patients and persons with acquired immunodeficiency syndrome (AIDS) fought publicly with the medical research establishment about the length of time needed to get approval for and complete research into potential cures for fatal diseases. In many cases, it is the ethical assumptions of the previous thirty years that drive this go-slow mentality. According to previous thinking, it is better to risk denying treatment for a while until there is enough confidence in a treatment, than risk harming innocent people (as in the Nuremberg and Tuskegee events). Recently, however, people threatened with fatal illness have been saying to the research establishment that they want to be test subjects, even under experimental conditions of considerable risk. Several vocal and articulate patient groups who wanted to be experimented on came up against an ethical review system designed to protect them from being the subjects of experiments!
Although the past few years in the ethics of research have been tumultuous ones, a new consensus is beginning to evolve that involves the stakeholder groups most affected by a problem participating more actively in the formulation of guidelines for research. Although, at present, it’s not entirely clear what the new consensus will be, it is almost certain that it will not fall at either extreme: protecting against human experimentation at all costs versus allowing anyone who is willing to be the subject of an experiment.
1-3a The Language of Ethics
As in every other aspect of research, the area of ethics has its own vocabulary. In this section, I present some of the most important language regarding ethics in research.
The principle of voluntary participation requires that people not be coerced into participating in research. This is especially relevant where researchers had previously relied on captive audiences for their subjects—prisons, universities, and places like that. Closely related to the notion of voluntary participation is the requirement of informed consent. Essentially, this means that prospective research participants must be fully informed about the procedures and risks involved in research and must give their consent to participate. Ethical standards also require that researchers not put participants in a situation where they might be at risk of harm as a result of their participation. Harm can be defined as both physical and psychological.
Two standards are applied to help protect the privacy of research participants. Almost all research guarantees the participants confidentiality; they are assured that identifying information will not be made available to anyone who is not directly involved in the study. The stricter standard is the principle of anonymity, which essentially means that the participant will remain anonymous throughout the study, even to the researchers themselves. Clearly, the anonymity standard is a stronger guarantee of privacy, but it is sometimes difficult to accomplish, especially in situations where participants have to be measured at multiple time points (for example, in a pre-post study). Increasingly, researchers have had to deal with the ethical issue of a person’s right to service. Good research practice often requires the use of a no-treatment control group—a group of participants who do not get the treatment or program that is being studied. But when that treatment or program may have beneficial effects, persons assigned to the no-treatment control may feel their rights to equal access to services are being curtailed.
Even when clear ethical standards and principles exist, at times the need to do accurate research runs up against the rights of potential participants. No set of standards can possibly anticipate every ethical circumstance. Furthermore, there needs to be a procedure that assures that researchers will consider all relevant ethical issues in formulating research plans. To address such needs most institutions and organizations have formulated an Institutional Review Board (IRB), a panel of persons who review grant proposals with respect to ethical implications and decide whether additional actions need to be taken to assure the safety and rights of participants. By reviewing proposals for research, IRBs also help protect the organization and the researcher against potential legal implications of neglecting to address important ethical issues of participants.
1-4 Conceptualizing
One of the most difficult aspects of research—and one of the least discussed—is how to develop the idea for the research project in the first place. In training students, most faculty members simply assume that if students read enough of the research in an area of interest, they will somehow magically be able to produce sensible ideas for further research. Now, that may be true. And heaven knows that’s the way researchers have been doing this higher education thing for some time now; but it troubles me that they haven’t been able to do a better job of helping their students learn how to formulate good research problems. One thing they can do (and some texts at least cover this at a surface level) is give students a better idea of how professional researchers typically generate research ideas. Some of this is introduced in the discussion of problem formulation that follows.

But maybe researchers can do even better than that. Why can’t they turn some of their expertise in developing methods into methods that students and researchers can use to help them formulate ideas for research? I’ve been working on that area intensively for over a decade now, and I came up with a structured approach that groups can use to map out their ideas on any topic. This approach, called concept mapping (see Section 1-4b, Concept Mapping), can be used by research teams to help them clarify and map out the key research issues in an area, and to help them operationalize the programs or interventions or the outcome measures for their study. The concept-mapping method isn’t the only method around that might help researchers formulate good research problems and projects. Virtually any method that’s used to help individuals and groups think more effectively would probably be useful in research formulation; but concept mapping is a good example of a structured approach and will introduce you to the idea of conceptualizing research in a more formalized way.
1-4a Problem Formulation
“Well begun is half done.” —Aristotle, quoting an old proverb
Where Research Topics Come From So how do researchers come up with the idea for a research project? Probably one of the most common sources of research ideas is the experience of practical problems in the field. Many researchers are directly engaged in social, health, or human service program implementation and come up with their ideas based on what they see happening around them. Others aren’t directly involved in service contexts but work with (or survey) people to learn what needs to be better understood. Many of the ideas would strike the outsider as silly or worse. For instance, in health services areas, there is great interest in the problem of back injuries among nursing staff. It’s not necessarily the thing that comes first to mind when you think about the health care field; but if you reflect on it for a minute longer, it should be obvious that nurses and nursing staff do an awful lot of lifting while performing their jobs. They lift and push heavy equipment, and they lift and push heavy patients! If 5 or 10 out of every 100 nursing staff were to strain their backs on average over the period of 1 year, the costs would be enormous, and that’s pretty much what’s happening. Even minor injuries can result in increased absenteeism. Major ones can result in lost jobs and expensive medical bills. The nursing industry figures this problem costs tens of millions of dollars annually in increased health care. In addition, the health-care industry has developed a number of approaches, many of them educational, to try to reduce the scope and cost of the problem. So, even though it might seem silly at first, many of these practical problems that arise in practice can lead to extensive research efforts.
Another source for research ideas is the literature in your specific field. Certainly, many researchers get ideas for research by reading the literature and thinking of ways to extend or refine previous research. Another type of literature that acts as a source of good research ideas is the Requests for Proposals (RFPs) that are published by government agencies and some companies. These RFPs describe some
problem that the agency would like researchers to address; they are virtually handing the researcher an idea. Typically, the RFP describes the problem that needs
addressing, the contexts in which it operates, the approach they would like you to take to investigate the problem, and the amount they would be willing to pay for such research. Clearly, there’s nothing like potential research funding to get researchers to focus on a particular research topic.
Finally, let’s not forget the fact that many researchers simply think up their research topic on their own. Of course, no one lives in a vacuum, so you would expect that the ideas you come up with on your own are influenced by your background, culture, education, and experiences.

Feasibility Soon after you get an idea for a study, reality begins to kick in and you begin to think about whether the study is feasible at all. Several major considerations come into play. Many of these involve making trade-offs between rigor and practicality. Performing a scientific study may force you to do things you wouldn’t do normally. You might want to ask everyone who used an agency in the past year to fill in your evaluation survey only to find that there were thousands of people and it would be prohibitively expensive. Or, you might want to conduct an in-depth interview on your subject of interest only to learn that the typical participant in your study won’t willingly take the hour that your interview requires. If you had unlimited resources and unbridled control over the circumstances, you would always be able to do the best quality research; but those ideal circumstances seldom exist, and researchers are almost always forced to look for the best trade-offs they can find to get the rigor they desire.
When you are determining the project’s feasibility, you almost always need to bear in mind several practical considerations. First, you have to think about how long the research will take to accomplish. Second, you have to question whether any important ethical constraints require consideration. Third, you must determine whether you can acquire the cooperation needed to take the project to its successful conclusion. And finally, you must determine the degree to which the costs will be manageable. Failure to consider any of these factors can mean disaster later.
The Literature Review One of the most important early steps in a research project is the conducting of the literature review. This is also one of the most humbling experiences you’re likely to have. Why? Because you’re likely to find out that just about any worthwhile idea you will have has been thought of before, at least to some degree. I frequently have students who come to me complaining that they couldn’t find anything in the literature that was related to their topic. And virtually every time they have said that, I was able to show them it was true only because they had looked only for articles that were exactly the same as their research topic. A literature review is designed to identify related research, to set the current research project within a conceptual and theoretical context. When looked at that way, almost no topic is so new or unique that you can’t locate relevant and informative related research.
Here are some tips about conducting the literature review. First, concentrate your efforts on the scientific literature. Try to determine what the most credible research journals are in your topical area and start with those. Put the greatest emphasis on research journals that use a blind or juried review system. In a blind or juried review, authors submit potential articles to a journal editor who solicits several reviewers who agree to give a critical review of the paper. The paper is sent to these reviewers with no identification of the author so that there will be no personal bias (either for or against the author). Based on the reviewers’ recommendations, the editor can accept the article, reject it, or recommend that the author revise and resubmit it. Articles in journals with blind review processes are likely to have a fairly high level of credibility. Second, do the review early in the research process. You are likely to learn a lot in the literature review that will help you determine what the necessary trade-offs are. After all, previous researchers also had to face trade-off decisions.
What should you look for in the literature review? First, you might be able to find a study that is quite similar to the one you are thinking of doing. Since all credible research studies have to review the literature themselves, you can check their literature review to get a quick start on your own. Second, prior research will help ensure that you include all of the major relevant constructs in your study. You may find that other similar studies routinely look at an outcome that you might not have included. Your study would not be judged credible if it ignored a major construct. Third, the literature review will help you to find and select appropriate measurement instruments. You will readily see what measurement instruments researchers used themselves in contexts similar to yours. Finally, the literature review will help you to anticipate common problems in your research context. You can use the prior experiences of others to avoid common traps and pitfalls.
1-4b Concept Mapping
Social scientists have developed a number of methods and processes that might help you formulate a research project. I would include among these at least the following: brainstorming, brainwriting, nominal group techniques, focus groups, affinity mapping, Delphi techniques, facet theory, and qualitative text analysis. Here, I’ll show you a method that I have developed, called concept mapping (Kane and Trochim, 2007)*, which is especially useful for research problem formulation and illustrates some of the advantages of applying social-science methods to conceptualizing research problems.
Concept mapping is a general method that can be used to help any individual or group to describe ideas about some topic in a pictorial form. Several methods currently go by names such as concept mapping, mental mapping, or concept webbing. All of them are similar in that they result in a picture of someone’s ideas, but the kind of concept mapping I want to describe here is different in a number of important ways. First, it is primarily a group process, so it is especially well suited for situations where teams or groups of researchers have to work together. The other methods work primarily with individuals. Second, it uses a structured facilitated approach. A trained facilitator follows specific steps in helping a group articulate its ideas and understand them more clearly. Third, the core of concept mapping consists of several state-of-the-art multivariate statistical methods that analyze the input from all of the individuals and yield an aggregate group product. Finally, the method requires the use of specialized computer programs that can handle the data from this type of process and accomplish the correct analysis and mapping procedures.
Although concept mapping is a general method, it is particularly useful for helping social researchers and research teams develop and detail ideas for research. It is especially valuable when researchers want to involve relevant stakeholder groups in the act of creating the research project. Although concept mapping is used for many purposes—strategic planning, product development, market analysis, decision making, measurement development—I concentrate here on its potential for helping researchers formulate their projects.
So what is concept mapping? Essentially, concept mapping is a structured process, focused on a topic or construct of interest, involving input from one or more participants, that produces an interpretable pictorial view (concept map) of their ideas and concepts and how these are interrelated. Concept mapping helps people to think more effectively as a group without losing their individuality. It helps groups capture complex ideas without trivializing them or losing detail (see Figure 1-10).
A concept-mapping process involves six steps that can take place in a single day or can be spread out over weeks or months depending on the situation. The process can be accomplished with everyone sitting around a table in the same room or with the participants distributed across the world using the Internet. The steps are as follows:
• Preparation: Step one accomplishes three things. First, the facilitator of the mapping process works with the initiator(s) (those who requested the process initially) to identify who the participants will be. A mapping process can have hundreds or even thousands of stakeholders participating, although there is usually a relatively small group of between ten and twenty stakeholders involved. Second, the initiator works with the stakeholders to develop the focus for the project. For instance, the group might decide to focus on defining a program or treatment, or it might choose to map all of the expected outcomes. Finally, the group decides on an appropriate schedule for the mapping.

*Kane, M., & Trochim, W. (2007). Concept mapping for planning and evaluation. Thousand Oaks, CA: Sage Publications.
• Generation: The stakeholders develop a large set of statements that address the focus. For instance, they might generate statements describing all of the specific activities that will constitute a specific social program, or generate statements describing specific outcomes that could result from participating in a program. A variety of methods can be used to accomplish this, including traditional brainstorming, brainwriting, nominal group techniques, focus groups, qualitative text analysis, and so on. The group can generate hundreds of statements in a concept-mapping project. In most situations, around 100 statements is the practical limit in terms of what participants can reasonably handle.
• Structuring: The participants do two things during structuring. First, each participant sorts the statements into piles of similar statements. They often do this by sorting a deck of cards that has one statement on each card, but they can also do this directly on a computer by dragging the statements into piles that they create. They can have as few or as many piles as they want. Each participant names each pile with a short descriptive label. Then each participant rates each of the statements on some scale. Usually the statements are rated on a 1 to 5 scale for their relative importance, where a 1 means the statement is relatively unimportant compared to all the rest, a 3 means that it is moderately important, and a 5 means that it is extremely important.
• Representation: This is where the analysis is done; this is the process of taking the sort and rating input and representing it in map form. Two major statistical analyses are used. The first—multidimensional scaling—takes the sort data across all participants and develops the basic map where each statement is a point on the map and statements that were piled together by more people are closer to each other on the map. The second analysis—cluster analysis—takes the output of the multidimensional scaling (the point map) and partitions the map into groups of statements or ideas, into clusters. If the statements describe program activities, the clusters show how to group them into logical groups of activities. If the statements are specific outcomes, the clusters might be viewed as outcome constructs or concepts.
• Interpretation: The facilitator works with the stakeholder group to help develop its own labels and interpretations for the various maps.
• Utilization: The stakeholders use the maps to help address the original focus. On the program side, stakeholders use the maps as a visual framework for operationalizing the program; on the outcome side, the maps can be used as the basis for developing measures and displaying results.
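The representation step above can be sketched computationally. The following is a minimal illustration, not the actual concept-mapping software: the sort data, pile contents, and cluster count are hypothetical, and scikit-learn's general-purpose multidimensional scaling and k-means routines stand in for whatever specific algorithms a given package uses.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

# Hypothetical sort data: each participant groups statement
# indices (0-5) into piles of statements they judge similar.
sorts = [
    [[0, 1], [2, 3], [4, 5]],
    [[0, 1, 2], [3], [4, 5]],
    [[0, 1], [2, 3, 4], [5]],
]
n = 6  # number of statements

# Similarity: how many participants placed each pair in the same pile.
sim = np.zeros((n, n))
for piles in sorts:
    for pile in piles:
        for i in pile:
            for j in pile:
                sim[i, j] += 1

# Convert similarity to a distance matrix and run multidimensional
# scaling: frequently co-sorted statements end up close on the map.
dist = sim.max() - sim
np.fill_diagonal(dist, 0)
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)

# Cluster analysis partitions the point map into groups of statements.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)
```

Statements 0 and 1, which every participant sorted together, land at nearly the same point and fall in the same cluster, which is the intuition behind reading clusters on the map as candidate constructs.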
The concept-mapping process described here is a structured approach to conceptualizing. However, even researchers who do not appear to be following a structured approach are likely to be using similar steps informally. For instance, all researchers probably go through an internal exercise that is analogous to the brainstorming step described previously. They may not actually brainstorm and write their ideas down, but they probably do something like that informally. After they’ve generated their ideas, they structure or organize them in some way. For each step in the formalized concept-mapping process you can probably think of analogous ways that researchers accomplish the same task, even if they don’t follow such formal approaches. More formalized methods like concept mapping have benefits over the typical informal approach. For instance, with concept mapping there is an objective record of what was done in each step. Researchers can be both more public and more accountable. A structured process also opens up new possibilities. With concept mapping, it is possible to imagine more effective multiple researcher conceptualization and involvement of other stakeholder groups such as program developers, funders, and clients.
1-4c Logic Models
Another method of conceptualizing research uses graphics to express the basic idea of what is supposed to happen in a program. This graphic representation can then be used to guide researchers in the process of identifying indicators or measures of the components of the graphic model. The idea is straightforward: identify the components of the program in terms of specific inputs (what goes into a program), relationships (how the components should be related to each other), and outputs (what should happen as a result of the program).
A very nice basic example of such a model was produced by the Kellogg Foundation in their Logic Model Development Guide (2004). This example is shown in Figure 1-11. Notice that the steps are shown in a left-to-right order that indicates the step-by-step logic used in planning the program, with the arrows suggesting a causal sequence of influences.

FIGURE 1-11 A basic logic model

A second example is shown in Figure 1-12. This model was developed to illustrate how a program could identify needs of children and families, then provide certain kinds of services, resulting in some positive outcomes for program participants. In this model, both the program components and some indicators (measures) are shown.

FIGURE 1-12 A basic logic model with indicators shown

This figure illustrates something very general about social research, too: we must think at two levels, the abstract idea and the observable indicator of the idea. Of course, life and research are not this simple. If you would like to read more about logic models, particularly with regard to the translation of the idea to real life, I recommend reading Renger and Hurley’s paper on this topic (2006).
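One way to see the two levels just mentioned (abstract component and observable indicator) is to sketch a logic model as a small data structure. All program and indicator names below are hypothetical, loosely patterned on the family-services example; this is an illustration of the idea, not a tool the text describes.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One box in a logic model: an abstract idea plus the
    observable indicators used to measure it."""
    name: str
    indicators: list = field(default_factory=list)

# Hypothetical program, keyed in left-to-right causal order.
logic_model = {
    "inputs":     [Component("identified needs of children and families",
                             ["intake assessment scores"])],
    "activities": [Component("parenting-skills workshops",
                             ["sessions delivered", "attendance rate"])],
    "outputs":    [Component("families served",
                             ["number of families completing program"])],
    "outcomes":   [Component("improved child well-being",
                             ["standardized well-being scale"])],
}

# Walking the stages left to right mirrors the planning logic
# and the arrows in a logic model diagram.
for stage in ("inputs", "activities", "outputs", "outcomes"):
    for c in logic_model[stage]:
        print(stage, "->", c.name, "| indicators:", ", ".join(c.indicators))
```

Writing the model down this way forces the planner to supply an observable indicator for every abstract component, which is exactly the two-level discipline the text describes.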
1-4d Summary
We’ve covered a lot of territory in this initial chapter, grouped roughly into four broad areas. First, we considered the language of research and enhanced your vocabulary by introducing key terms like variable, attribute, causal relationship, hypothesis, and unit of analysis. We next considered the rationale or logic of research and discussed how research is structured, the major components of a research project, deductive and inductive reasoning, several major research fallacies, and the critical topic of validity in research. This was followed by a discussion of ethics in research that introduced issues like informed consent and anonymity. Finally, we briefly considered how research is thought up or conceptualized. It certainly is a formidable agenda. But it provides you with the basic foundation for the material that’s coming in subsequent chapters.
Login to the Online Edition of your text at www.atomicdog.com to find additional resources located in the Study Guide at the end of each chapter.