Saturday, November 3, 2007

Comparative Method

The comparative method is defined by Lijphart as “a method of discovering empirical relationship among variables” (683). Scholars have been discussing the relationship between the comparative method and comparative politics, its strengths and weaknesses, and examples of studies that have used it as a method of inquiry. As far as the readings go, I see several points on which comparativists are still debating.

What is the goal of the comparative method? Lijphart regards the comparative method as “one of the basic scientific methods, not the scientific method” (682). He contrasts this notion with Lasswell and Almond, who clearly view the comparative method as a science. If Lijphart is right, the goal of the comparative method can be narrower than seeking causal inference, the main goal of the scientific method. If the comparative method, given its main weakness (many variables, small number of cases), can only reach “systematic comparative illustration” (Jackman: 164), that is not a problem.

To reach causal inference, the comparative method can benefit from the other basic scientific methods (the experimental and statistical methods). We can therefore agree with Lijphart when he insists that case studies can be part of the work of comparativists and can still contribute to hypothesis testing and theory building. Jackman’s claim that the comparative method should be concerned with causal relationships among variables (1985: 166-167) need not always hold. As has been discussed for more than two decades, however, the core problem of “many variables, small N” still needs to be addressed.

Generalization or deep analysis? In advocating cross-national statistical research, Jackman (166) maintains that the attempt to develop generalizations is crucial to the comparative method. Comparativists, Jackman continues, should focus on as many similarities as possible and not spend too much time assessing “exceptional performance” or deviant cases. It has been common in the comparative method that when one or two cases in a study deviate from the general proposition, the proposition is invalidated. Generalization therefore requires a large number of cases, which is not always possible, especially in countries where aggregate data are hardly available. It also requires simplification, which can come at the expense of descriptive accuracy: over-simplification can cause the loss of the richness of the data.

Reaching generalization also requires “the valid application of concepts across diverse contexts” (Collier: 110), which faces the problem Sartori calls “conceptual stretching”: the concept that can be applied is so general that it cannot capture the similarities and contrasts among the variables involved in the comparative inquiry. Even after a general concept is formed, it must be applied very carefully to different sets of cases. This leaves the impression that working with a small number of cases will be more appealing.

Meanwhile, focusing on deep analysis limits the study to the key variables. As Lijphart notes, this kind of comparative method has been applied very well in anthropological research. This is possible because anthropological studies deal mainly with non-advanced (even primitive) societies, where the number of relevant variables is not as large as in advanced societies. The main problem is how to replicate this kind of anthropological research in political science, where the variables that need to be considered are so numerous. Phrased differently, how do we know that the key variables of our research are really the ones we chose, and how do we make sure that the other variables are held constant?

What to compare? “Can we compare apples and oranges?” is a well-known phrase in discussions of the comparative method. Can we compare apples and oranges as fruit rather than as apples and oranges? The simple answer about what to compare is, of course, variables. But how we define variables and how we interpret the relationships among or between them have not been conclusively settled.

Sartori and Kalleberg (in Jackman: 167) state that “two items being compared must be of the same class—they must either have an attribute or not. If they have it and, only if they have it, may they be compared as to which has it more and which has it less.” Reacting to this definition, I agree with Jackman that comparability does not mean comparing only similarities but comparing both similarities and differences. The requirement of comparing within the same class, however, is still problematic to me. Can I, for instance, compare the legislature of the United States with the legislature of Indonesia when the two countries are not of the same class (the former an advanced country, the latter a developing one)? If the answer is that I cannot, then what if I study the US Congress, look at the Indonesian legislature, and find both similarities and differences? Have I used the comparative method in that respect?

I also agree with Lijphart’s statement that “comparable means: similar in a large number of important characteristics (variables) which one wants to treat as constant, but dissimilar as far as those variables are concerned which one wants to relate to each other” (687). I would add, however, that the similarity and dissimilarity of the variables depend on the level of comparison we want to conduct. We can compare an apple and an orange as fruit (their shape, taste, texture, etc.).

What method must be used? Jackman’s study (1987), “Political Institutions and Voter Turnout in the Industrial Democracies,” is an example of how the comparative historical method is applied to a cross-national study. He demonstrates that his study can challenge the idea that national differences in voter turnout reflect national differences in political culture. This shows that the comparative method can be useful for finding alternative explanations of relationships among variables and for hypothesis testing.

What puzzles me is Putnam et al.’s work on institutional performance in Italy (1993), which shows the use of different kinds of methods and techniques in conducting a comparative study. What kind of method are they using? Is this an example of multiple methods? In their work, Putnam and his colleagues combined observation, case study, statistical analysis, experiment, and quantitative techniques to reach conclusions about the conditions for creating strong, responsive, and effective representative institutions. Their work is massive, involving many individual interviews and national surveys over a relatively long time (a decade). Their cases, however, are still limited and, most importantly, all within Italy. In the end I would say that this is still a case study. Geddes (1990) reminds students of politics like us to be very careful in using case studies, since they are prone to selection bias.

Overall, I agree, again, with Lijphart that other methods can be employed in comparative politics and that the comparative method does not belong to comparative politics alone (690).

Thursday, November 1, 2007

Reaction Paper: Interviews and Questionnaires

Crafting a questionnaire and conducting interviews are complicated but key to survey research. Both require very careful methodological consideration, but in the end they are simply an art (Converse, 1986: 7). The readings discuss general guidelines for crafting a standardized questionnaire, problems and challenges to keep in mind when crafting it, and problems in conducting face-to-face interviews and how to deal with them. In this short note, I would like to raise several issues that can be problematic when crafting questionnaires and conducting interviews.

Meaning. The main problem in crafting a questionnaire is how to make sure that each question is interpreted by the respondent exactly as the researcher (the question writer) intends. Meaning relates to many things: a certain concept or reference, a certain context (time and space), and so on. Converse (1987) suggests that a question needs to provide not only a frame of reference (since the respondent may not commonly use the researcher’s) but also a detailed explanation, such as: “By ‘family’ we mean…” (18). A more general guideline from Converse for dealing with this issue is to provide clear and straightforward questions using simple language, common concepts, and widespread information, and then to pretest them.

Tourangeau et al., Schwarz et al., and Feldman reiterate that understanding the process by which respondents react to a question is key to predicting the answers (the theory of the response process, or response effects). Respondents, according to this theory, react to a question through the processes of comprehension, retrieval, judgment, and response. These processes are not necessarily sequential and can be affected by many things, such as the wording of the question, the order of the questions, and the types of response options. Respondents, for instance, “are likely to draw on the content of preceding questions in interpreting subsequent ones” (Schwarz et al.: 37).

In my view, however, crafting straightforward and clear questions and understanding the response process can only predict respondents’ reactions in general. In the end, each respondent answers the questions from his or her own unique situation and response process. This still poses a problem for the meaning of the researcher’s question: arriving at exactly the same interpretation between researcher and respondent remains difficult.

Social and cultural context. This issue is not much discussed in the readings. The standardized questionnaire assumes that all questions are valid and appropriate for all respondents. But what if the survey is conducted in a country governed by a repressive regime or a military junta, such as Burma? Can we ask standard questions? Or should we simply conclude that a survey cannot be conducted in such a situation?

In certain cultural contexts, such as some countries in Southeast Asia, asking clear and straightforward questions as Converse suggests is not always useful. In such settings, individual information falls into several layers. First, information that can be shared with others, such as age and number of children. Second, information that an individual hesitates to share, such as whether or not he or she uses condoms. Third, information about which an individual is not sure and therefore does not want others to know, such as whether he or she is religious enough. Fourth, when evaluating others (including the government), people tend to express themselves indirectly. In situations two through four, the questions cannot be straightforward. In crafting the question, therefore, the researcher should also consider whether to use a proxy question. For example, instead of asking whether or not someone likes the president, the researcher can ask how he or she feels about life under the current government.

Public mood. According to Feldman, in interpreting a question and responding, respondents activate only one interpretation (out of their multiple possible interpretations) to provide the answer. Thus, “when respondents hear or read a survey question, the first thing they do (most likely automatically) is to activate an interpretative framework to make sense of the question” (10-11).

This process, however, assumes that the respondent’s situation is normal. It becomes problematic when the question is asked amid a certain public mood. By public mood I mean a general feeling of the public related to a certain situation. For example, in Southeast Asian countries still affected by the economic crisis, pessimism about economic life is widespread. In this kind of situation, a survey on public opinion of the government’s economic performance tends to yield pessimistic evaluations.

Interview: meaning or strict procedure? Face-to-face interviews are commonly used in conducting surveys. The main issue is how to make sure that the intended meaning is obtained without the process collapsing into daily conversation between interviewer and interviewee (thus violating the research procedure). Schuman and Jordan, however, found that meaning and procedure in interviews are not always compatible with each other. When strict procedure is emphasized, there is a high possibility of an “error of the third kind,” that is, “error that arises from the discrepancy between the concept of interest to the researcher and the quantity actually measured in the survey” (262). When meaning is emphasized, the interviewer needs the flexibility to clarify and modify the question, so that the exchange comes to resemble daily conversation. The question, then, is whether the interview is still part of scientific research.

To overcome this problem, Schuman and Jordan suggest a collaborative approach in which not only are the questions crafted properly to maximize the intended interpretation, but the interviewer also has the flexibility to clarify the meaning when necessary (262). That the interview may drift into ordinary conversation is, following Schuman and Jordan, not a problem as long as it remains structured as the standardized questionnaire requires. The problems here, however, are how we know that the conversation is still structured, how we know when the interviewer needs to clarify the meaning, and how we make sure that this does not happen arbitrarily.

Other interview-related issues involve bias from the interviewer, which can arise in several situations: 1) the interviewer is not neutral, having and expressing certain interests through the way of asking questions, intonation, or attitude; 2) the interviewer does not share the researcher’s interpretation; 3) the interviewer is not qualified for the survey (too curious beyond the needs of the survey, poor communication skills); 4) the interviewer’s appearance is too attractive; 5) the interviewer lies. To deal with these issues, the researcher usually trains the interviewers and controls the survey procedure through auditing.

Having raised these several issues, I would like to acknowledge that the usefulness of survey research is obvious, and the standardized questionnaire is its main instrument for gathering data. Given the ultimate goal of shared meaning between researcher and respondent, efforts to develop better guidelines for crafting questionnaires should be encouraged further, so that we do not have to say it is simply an art.