The Differential Effects of Face-to-Face and Computer Interview Modes

Jessica Clark Newman, MPH, Don C. Des Jarlais, PhD, Charles F. Turner, PhD, Jay Gribble, ScD, Phillip Cooley, MS, and Denise Paone, EdD


Jessica Clark Newman, Don C. Des Jarlais, and Denise Paone are with the Chemical Dependency Institute, Beth Israel Medical Center, New York, NY. At the time of this study, Charles F. Turner and Jay Gribble were with Research Triangle Institute, Washington, DC. Phillip Cooley is with Research Triangle Institute, Research Triangle Park, NC.

Requests for reprints should be sent to Jessica Clark Newman, MPH, 390 Riverside Dr, Apt 10C, New York, NY 10025 (e-mail: [email protected]).

Accepted January 14, 2001.

Copyright © American Journal of Public Health 2002

Abstract

Objectives. This study assessed the differential effects of face-to-face interviewing and audio-computer assisted self-interviewing (audio-CASI) on categories of questions.

Methods. Syringe exchange program participants (n = 1417) completed face-to-face interviews or audio-CASI. The questionnaire was categorized into the groups “stigmatized behaviors,” “neutral behaviors,” and “psychological distress.” Interview modes were compared for questions from each category.

Results. Audio-CASI elicited more frequent reporting of “stigmatized behaviors” than face-to-face interviews. Face-to-face interviewing elicited more frequent reporting of “psychological distress” than audio-CASI.

Conclusions. Responding to potentially sensitive questions should not be seen as merely “providing data,” but rather as an activity with complex motivations. These motivations can include maintaining social respect, obtaining social support, and altruism. Ideally, procedures for collecting self-report data would maximize altruistic motivation while accommodating the other motives.

Many areas of health and behavioral research rely on self-report data, despite the knowledge that such data may not always be accurate and complete. Factors that motivate participation in research are complex1 and may lead to differential responding within different interview modes. For example, response bias can occur as a result of respondents' desire to present themselves in a favorable light.2

There is substantial evidence that self-reports of drug use and other stigmatized behaviors vary by mode of interview.3–5 Studies have shown that the level of information revealed by a respondent is positively related to the level of privacy of the interview. Methodological problems with self-report questionnaires can have a profound impact in fields such as HIV/AIDS research, where such questionnaires are the primary means of obtaining information on risk behaviors.

New interview methods are being developed to improve the quality of self-report data. One such innovation is computer assisted self-interviewing (CASI), in which respondents read survey questions on a computer screen and then directly enter their responses into the computer. In audio-CASI, the questions are presented on the computer screen and read to the respondent through headphones, facilitating use by respondents who are not literate in the interview language.

Several studies have addressed the effects of CASI, generating complex and, at times, contradictory findings. Comparisons of CASI with face-to-face interviewing have concluded that subjects completing computer interviews disclose more socially undesirable attitudes, facts, and behaviors.6–9 Others have reported contrary information, finding that respondents reported more socially undesirable behavior in the face-to-face interview modes than with CASI.10 Little or no difference between CASI and face-to-face interviews also has been reported.11–13

A recent study by Williams et al.14 comparing the reliability of self-reports of risk behaviors using CASI and face-to-face interviewing underscores the complexity of mode effects. The investigators did not find that CASI elicited more reporting of risk behaviors—the 2 modes were comparable in terms of the reliability of self-reports of HIV risk behaviors—but biases were detected in the reported number of times participants engaged in risk behavior.

Additionally, there may be some circumstances in which respondents find answering to a computer to be “impersonal,” and this may affect reporting of specific attitudes and behaviors. In one study, individuals interviewed face-to-face were more likely to report psychiatric symptoms and depression than individuals interviewed by telephone—which, like audio-CASI, is a more anonymous mode.15

Increased disclosure of psychiatric symptoms in a face-to-face interview may demonstrate the use of the interview process by patients as a “cry for help.”16,17 Respondents may use the interview as an opportunity to garner sympathy or social support for their emotional problems.18,19 Thus, the interview process may, in fact, serve as a medium for interpersonal connection, motivating respondents to express their true problems. Reducing the role of the human interviewer may therefore make the interview process “impersonal” for respondents and may reduce the likelihood that they will disclose the types of psychological distress for which sympathy or social support might be expected. No study to date, however, has addressed the effects of audio-CASI on distress questions.

We examined the effects of interview mode on self-disclosure for heavily “stigmatized behaviors,” for which embarrassment would be very likely and social support unlikely, and for “psychological distress,” for which social support would be likely and embarrassment less likely.

METHODS

This report is a secondary analysis of data collected to assess the differences between face-to-face interviewing and audio-CASI on self-reports of HIV risk behavior among injecting drug users attending syringe exchange programs in 4 US cities. A full presentation of the methods is provided by Des Jarlais et al.8

Data Collection

Interviews were completed during 1997 and 1998 with participants of syringe exchange programs in New York, NY; Chicago, Ill; Tacoma, Wash; and Los Angeles, Calif. Participants were recruited from exchange lines. At each site, field workers used random-number tables to select a number, n (from 1 to 6). The nth person in line to exchange syringes was then asked if he or she was willing to participate in a research study. The study was explained and an oral informed consent was obtained.

Audio-CASI and face-to-face interview modes were used in alternate weeks at each exchange. For the audio-CASI interviews, the staff instructed the respondent on the use of the computer and then allowed the respondent to complete the interview in private. For the face-to-face interviews, paper-and-pencil questionnaires were used, and data collection staff read each question and recorded the respondent's answers using traditional interviewing techniques.

The original interview instrument contained approximately 280 questions with items on sociodemographic characteristics, attitudes toward program operations and staff, drug use, sexual behaviors, and physical and mental health histories. For this secondary analysis, the questionnaire was abbreviated to 90 questions. Questions regarding drug use and HIV risk behaviors during the prior 30 days were retained, while the identical questions pertaining to the 30 days before using the exchange were eliminated. In the sections on drug use and sexual behaviors, where gateway questions were followed by a series of specific questions, only the gateway questions were retained. Finally, in the sections regarding program operations and attitudes toward staff, only every third question was retained.

To test our hypotheses, we needed to classify interview questions into 3 categories: stigmatized behaviors (category A), neutral behaviors (category B), and emotional distress (category C). We recruited 3 raters who were generally familiar with injecting drug users but were not familiar with either our hypotheses or the data. Their familiarity with injecting drug users served as background, but they were instructed to follow the criteria outlined in the rater instructions for categorizing the questions. The raters had high agreement with respect to classifying the questionnaire items into the 3 categories (pairwise κ = 0.745, 0.728, and 0.777). We chose to use only questionnaire items for which there was complete agreement among the 3 raters. This gave 51 stigmatized behavior items, 20 neutral items, and 4 emotional distress items.
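As an illustration of the agreement statistic reported above (this is a sketch, not the authors' actual procedure; the rater labels are hypothetical), pairwise Cohen's κ between two raters' category assignments can be computed as:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters labeling the same items."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed agreement: fraction of items given the same label by both raters.
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    counts1, counts2 = Counter(rater1), Counter(rater2)
    expected = sum(counts1[k] * counts2[k] for k in counts1) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical raters assigning questions to categories A, B, and C.
r1 = ["A", "A", "B", "B", "C", "C"]
r2 = ["A", "A", "B", "B", "C", "A"]
print(round(cohens_kappa(r1, r2), 3))  # 0.75
```

With three raters, as in this study, κ is computed once for each of the 3 rater pairs, which yields the three pairwise values reported above.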

We analyzed data from the first interview with each participant, using SPSS version 8.0 (SPSS Inc, Chicago, Ill). Categorical responses were classified as “presence” or “absence” of each behavior. Chi-square tests were used to compare the proportion of participants who reported the behavior on audio-CASI with the proportion who reported the behavior in the face-to-face mode. t tests were used for continuous variables. We hypothesized that audio-CASI would elicit a greater proportion of affirmative responses for category A, there would be no difference for category B, and face-to-face interviews would elicit a greater proportion of affirmative responses for category C. We calculated odds ratios with 95% confidence intervals to assess the degree of difference between behaviors reported by the 2 interview methods. We calculated adjusted odds ratios controlling for sex, age, race, and education.
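The unadjusted odds ratios described above can be sketched as follows (the authors used SPSS; this is not their code, and the cell counts shown are hypothetical). For a 2 × 2 mode-by-response table, the odds ratio and its Wald 95% confidence interval are:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI for a 2x2 table.
    a/b = audio-CASI yes/no counts, c/d = face-to-face yes/no counts."""
    oratio = (a * d) / (b * c)
    # Standard error of log(OR) is the root of the summed reciprocal cell counts.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(oratio) - z * se)
    upper = math.exp(math.log(oratio) + z * se)
    return oratio, lower, upper

# Hypothetical example: 90 of 688 audio-CASI respondents vs 66 of 729
# face-to-face respondents report a given behavior.
oratio, lower, upper = odds_ratio_ci(90, 598, 66, 663)
```

The adjusted odds ratios in the study additionally control for sex, age, race, and education via logistic regression, which this sketch does not attempt.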

RESULTS

Ninety-five percent of the syringe exchange program participants who were approached agreed to participate in the study. Agreement to participate did not vary by mode, and the amount of missing data was minimal (<1%). The sample consisted of 1581 interviews. Using the unique identifiers created by the respondents, we removed all duplicate interviews from the data set. The final sample included 1417 respondents, 688 of whom completed the audio-CASI interview and 729 of whom were interviewed face-to-face (see Table 1 for demographics).

TABLE 1

—Demographic Characteristics of Participants of Study Assessing Effects of Face-to-Face Interviewing vs Audio-Computer Assisted Self-Interviewing (Audio-CASI)

                          Face-to-Face, % (n)   Audio-CASI, % (n)   Total, % (n)
Sex
    Male                  66.4 (483)            71.5 (492)          68.8 (975)
    Female                33.1 (241)            28.3 (195)          30.8 (436)
Age, y
    <30                   9.5 (69)              12.9 (89)           11.2 (158)
    30–39                 32.2 (233)            29.3 (201)          35.4 (500)
    ≥40                   58.3 (422)            57.9 (397)          53.4 (753)
    Mean                  41                    41                  41
Race/ethnicity
    White                 36.3 (266)            39.9 (274)          38.1 (540)
    Black                 43.7 (317)            44.1 (303)          43.8 (620)
    Hispanic              14.6 (106)            11.1 (76)           12.8 (182)
    Other                 5.1 (37)              4.9 (34)            5.0 (71)
Education
    <8th grade            5.9 (43)              9.4 (65)            7.6 (108)
    Some high school      28.0 (204)            31.1 (214)          29.5 (418)
    High school graduate  37.0 (270)            31.0 (213)          34.1 (483)
    Some college          24.0 (175)            20.9 (144)          22.5 (319)
    Some graduate school  5.1 (37)              7.6 (52)            6.3 (69)


Category A contained 51 items, 40 of which were analyzed with χ2 tests and 11 with t tests. Seventy-three percent of these questions demonstrated increased reporting to audio-CASI (P < .05). Twenty questions were included in category B; consistent with our hypothesis, these showed no consistent differences between modes. All 4 questions in category C were analyzed with χ2 tests, and 75% demonstrated increased reporting in the face-to-face mode. Examples of items with interview mode differences for the 3 categories of questions are presented in Table 2, along with the percentage differences in reported behaviors between audio-CASI and face-to-face interviewing and the corresponding odds ratios.

TABLE 2

—Behaviors Reported by Participants for Each Response Category, by Interview Method

Affirmative Response                              A-CASI, %  Face-to-Face, %  Difference, %  OR (95% CI)     P     Adjusted OR (95% CI)  P
Category A
    Sold clean works in past 30 days              13         9                4              1.6 (1.1, 2.3)  .006  1.5 (1.1, 2.2)        .014
    Always used alcohol pads in past 30 days      38         50               12             1.5 (1.2, 1.9)  .000  1.5 (1.2, 1.9)        .000
    Used nonprescription methadone in last 30 d   12         7                5              1.7 (1.2, 2.4)  .005  1.7 (1.2, 2.5)        .004
    Rented/sold used works in last 30 days        4          2                2              1.9 (1.1, 3.4)  .023  1.8 (1.0, 3.2)        .044
    HIV positive                                  11         7                4              1.5 (1.0, 2.3)  .035  1.5 (1.0, 2.3)        .039
    Ever had TB                                   30         22               8              1.5 (1.2, 1.9)  .000  1.5 (1.2, 1.9)        .001
Category B
    Used marijuana in last 30 days                35         34               1              1.1 (0.8, 1.3)  .605  1.0 (0.8, 1.3)        .731
    Ever tested for HIV                           90         92               2              1.2 (0.8, 1.8)  .239  1.2 (0.8, 1.7)        .344
    Ever had abscess                              35         35               0              1.0 (0.8, 1.3)  .968  0.99 (0.8, 1.2)       .879
    Ever in drug treatment                        75         72               3              1.1 (0.9, 1.5)  .288  1.1 (0.9, 1.5)        .292
Category C
    Hopeless in past 30 daysa                     17         22               5              1.4 (1.1, 1.8)  .009  1.4 (1.1, 1.9)        .009
    Worry in past 30 daysa                        28         37               9              1.6 (1.3, 2.0)  .000  1.6 (1.3, 2.0)        .001
    Depressed in past 30 daysa                    23         28               5              1.3 (1.0, 1.7)  .024  1.3 (1.0, 1.7)        .034
    Suicidal in past 30 daysa                     6          7                1              1.1 (0.7, 1.7)  .582  1.1 (0.7, 1.7)        .561


Note. A-CASI = audio-computer assisted self-interviewing; OR = odds ratio; CI = confidence interval; TB = tuberculosis.

aDefined as feeling these emotions more than two thirds of the time during the past 30 days.

DISCUSSION

There are 3 clear limitations to this study. First, we did not have any method for verifying the self-reported data. Verification of the sexual and drug-injecting behaviors in category A would be both impractical and a severe invasion of the subjects' privacy. Verification of the subjective feeling states in category C would be even more difficult. Still, it is difficult to imagine why large numbers of subjects would report either the stigmatized behaviors or the psychological distress if they were not engaging in or experiencing these behaviors and problems. Subjects in the face-to-face interviewing condition might plausibly exaggerate the extent of their psychological distress—in the hope of receiving sympathy or social support—but it does not appear plausible that large numbers of subjects would report the problems if they were not experiencing them to some degree.

Second, regarding category C, the small number of questions and the inclusion of questions solely on depression limit the generalizability of these findings. Whether similar results would be obtained for other types of psychological distress remains to be determined in future research.

Finally, participants in needle exchange programs represent a unique population, and whether the findings of this study are replicable in other populations remains open to future research.

Despite these limitations, the interview mode differences between the “stigmatized” HIV risk behaviors and “psychological distress” were notable. These differences reached conventional statistical significance levels in opposite directions—significantly more reporting of stigmatized behaviors with audio-CASI and significantly more reporting of “psychological distress” in face-to-face interviewing.

An examination of the “psychological distress” questions highlights an important point regarding the use of self-administered questionnaires in general, and computer self-administered questionnaires in particular. The process of collecting information about depression appears to be facilitated by the face-to-face interview. It is possible that an “impersonality” bias for particular types of questions does, in fact, exist: respondents may underreport to the computer because the impersonal nature of a computer interview is incongruent with the personal nature of questions regarding one's emotional or mental health. In the current study, only the depression questions appeared to be biased in this way, although other forms of data may also suffer from “impersonality” bias, particularly those related to psychological and mental health issues.

This study examined group differences in responding to audio-CASI and face-to-face interviewing. There may also be important individual differences in what is viewed as “stigmatized” vs a “problem for which social support is needed,” in the need to hide stigmatized behaviors, and in seeking social support. The context in which the interviewing occurs, as well as interviewer and respondent characteristics, may also affect the degree of stigmatization and the perceived likelihood of obtaining social support. The specific wording of a question may also determine whether the behavior is perceived as stigmatized or as a personal problem for which social support might be obtained. Further research will be needed to explore these issues.

Methodological and conceptual advances in collecting self-report data offer important opportunities for advancing behavioral and health-related science. From the research to date, audio-CASI appears to be an important advance for collecting data about stigmatized behaviors. The relationships between data collection modes and self-disclosure of various potentially sensitive behaviors will need to be systematically explored if the promise of audio-CASI is to be fulfilled.

Responding to potentially sensitive questions should not be seen as merely “providing data,” but rather as an activity with complex motivations. These motivations can include maintaining social respect, coping with stress, and altruism in providing accurate and valid data on issues such as preventing HIV infection. Ideally, procedures for collecting self-report data would maximize the opportunities for altruistic motivation while accommodating the other likely motives.

Acknowledgments

This research was supported by grant R01 DA 09536 from the US National Institute on Drug Abuse.

An earlier version of this report was presented at the 128th annual meeting of the American Public Health Association, Boston, Mass, November 12–16, 2000.

The authors thank Sharon Schwartz, Bruce Link, Seiji Newman, Molly Yancovitz, and Julie Alperen for their contributions to this report.

This study was approved by the Committee on Scientific Affairs of the Beth Israel Medical Center, New York, NY, which serves as the Institutional Review Board.

Notes

J. C. Newman was responsible for the overall formulation of the hypothesis, data analysis, and paper preparation. D. C. Des Jarlais was the principal investigator of the study and assisted in all aspects of the paper's preparation. C. F. Turner and J. Gribble developed and implemented audio-CASI for the original study. P. Cooley was responsible for audio-CASI programming of the original study. D. Paone was responsible for instrument development and training of data collection staff for the original study.

Peer Reviewed

References

1. Groves RM, Cialdini RB, Couper MP. Understanding the decision to participate in a survey. Public Opinion Q. 1992;56:475–495.

2. Catania JA. A framework for conceptualizing reporting bias and its antecedents in interviews assessing human sexuality. J Sex Res. 1999;36:25–38.

3. Catania JA, Gibson DR, Chitwood DD, Coates TJ. Methodological problems in AIDS behavioral research: influences on measurement error and participation bias in studies of sexual behavior. Psychol Bull. 1990;108:339–362.

4. Aquilino WS. Interview mode effects in surveys of drug and alcohol use. Public Opinion Q. 1994;58:210–240.

5. Turner CF, Lessler JT, George B, Hubbard M, Witt M. Effects of mode of administration and wording on data quality. In: Turner CF, Lessler JT, Gfroerer JC, eds. Survey Measurement of Drug Abuse: Methodological Studies. Rockville, Md: National Institutes of Health; 1992:221–244. DHHS publication ADM 92–1929.

6. Couper MP, Rowe B. Evaluation of a computer-assisted self-interview component in a computer-assisted personal interview survey. Public Opinion Q. 1996;60:89–107.

7. Gribble JN, Miller HG, Cooley PC, Catania JA, Pollack L, Turner CF. The impact of T-CASI interviewing on reported drug use among men who have sex with men. Subst Use Misuse. 2000;35:869–890.

8. Des Jarlais DC, Paone D, Milliken J, et al. Audio-computer interviewing to measure risk behaviour for HIV among injecting drug users: a quasi-randomised trial. Lancet. 1999;353:1657–1661.

9. Turner CF, Ku L, Rogers SM, Lindberg LD, Pleck JH, Sonenstein FL. Adolescent sexual behavior, drug use and violence: increased reporting with computer survey technology. Science. 1998;280:867–873.

10. Tourangeau R, Rasinski K, Jobe JB, Smith TW, Pratt WF. Sources of error in a survey on sexual behavior. J Off Stat. 1997;13:341–365.

11. Sanders GD, Owens DK, Padian N, Cardinalli AB, Sullivan AN, Nease RF. A computer-based interview to identify HIV risk behaviors and to assess patient preferences for HIV-related health states. In: Ozbolt JG, ed. Proceedings of the 18th Annual Symposium on Computer Applications in Medical Care. New York, NY: Institute of Electrical and Electronics Engineers; 1994:20–24.

12. Hasley S. A comparison of computer-based and personal interviews for the gynecologic history update. Obstet Gynecol. 1995;85:494–498.

13. Webb PM, Zimet GD, Fortenberry JD, Blythe MJ. Comparability of a computer-assisted versus written method for collecting health behavior information from adolescent patients. J Adolesc Health. 1999;24:383–388.

14. Williams ML, Freeman RC, Bowen AM, et al. A comparison of the reliability of self-reported drug use and sexual behaviors using computer-assisted versus face-to-face interviewing. AIDS Educ Prev. 2000;12:199–213.

15. Henson R, Cannell CF, Roth A. Effect of interview mode on reporting of moods, symptoms, and need for social approval. J Soc Psychol. 1978;105:123–129.

16. Dahlstrom W, Welsh G, Dahlstrom L. An MMPI Handbook. Vol 1. Minneapolis: University of Minnesota Press; 1972.

17. Marks P, Seeman W, Haller D. The Actuarial Use of the MMPI With Adolescents and Adults. Baltimore, Md: Williams & Wilkins; 1974.

18. Veroff J, Veroff JB. Social Incentives: A Life-Span Developmental Approach. New York, NY: Academic Press; 1980.
