Data Moments

A brief discussion of specific methodological issues relevant to MCH professionals:

Toggle each box below to expand the full text. Healthy Generations publications are available for download.

[raw]

[toggle title="Response Rate"]

Response Rate (Healthy Generations, Oct. 2003)

Response rate refers to the number of respondents who completed a survey relative to the number of people who were asked to complete it. A poor response rate compromises the validity of a survey because non-respondents may differ in some systematic way from respondents; the results may therefore be biased, since only a self-selected subgroup of the intended sample chose to respond.

It is possible that a survey with a poor response rate reflects an inordinate percentage of a specific demographic from the intended sample (e.g., only men respond to a survey whose intended sample is half men, half women). Poor response can also produce misleading results if respondents differ from non-respondents on key survey questions (e.g., only those who agreed with the perceived political sentiments of the surveyors responded).
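To make non-response bias concrete, here is a minimal Python sketch with entirely hypothetical numbers: everyone in a 50/50 male/female intended sample is invited, but men respond far more often, and the two groups differ on the outcome being measured.

```python
import random

random.seed(1)

# Hypothetical intended sample: half men, half women, who differ on the
# survey outcome (say, agreement with a policy, scored 0-100).
population = (
    [{"sex": "M", "score": random.gauss(40, 10)} for _ in range(500)]
    + [{"sex": "F", "score": random.gauss(60, 10)} for _ in range(500)]
)
true_mean = sum(p["score"] for p in population) / len(population)

# Suppose men respond at 60% but women at only 20%: the respondents
# are a self-selected subgroup, even though everyone was invited.
respondents = [
    p for p in population
    if random.random() < (0.6 if p["sex"] == "M" else 0.2)
]

response_rate = len(respondents) / len(population)
observed_mean = sum(p["score"] for p in respondents) / len(respondents)

print(f"response rate: {response_rate:.0%}")   # around 40%
print(f"true mean:     {true_mean:.1f}")       # near 50
print(f"observed mean: {observed_mean:.1f}")   # pulled toward the male mean
```

The observed mean drifts toward the over-represented group even though every individual answered honestly; nothing in the respondents' data alone reveals this without information about the non-respondents.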

The careful reader may also want to pay attention to item-response rates, which reflect the percent of respondents who answered a specific item on a survey. Item non-response can be a problem, especially if a survey contains some sensitive or difficult items that respondents may choose to skip. The careful researcher should:

  1. do her best to show how specific characteristics of the non-respondents and respondents compare, to address the representativeness of the survey sample; and
  2. address item non-response. The latter is done infrequently, and a reader may detect it only by noticing that the sample size reported for some variables differs from the number of survey respondents (a short sketch of this check follows the list).
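Tabulating the answers per question makes item non-response visible at a glance. A minimal pandas sketch, with hypothetical variables (note the sensitive income item):

```python
import pandas as pd

# Hypothetical survey data; None marks a skipped item.
df = pd.DataFrame({
    "age":          [34, 29, 41, 52, 38, 45],
    "income":       [50000, None, None, 72000, None, 61000],  # sensitive item
    "satisfaction": [4, 5, None, 3, 4, 5],
})

n_respondents = len(df)
item_response = df.notna().sum() / n_respondents  # share answering each item
print(item_response.round(2))

# Items with low rates (here, income at 0.50) deserve explicit comment,
# since the effective sample size differs from the number of respondents.
```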

[/toggle]

[toggle title="Teratogens"]

Teratogens (Healthy Generations, Feb. 2004)

Several articles in Healthy Generations, Feb. 2004, discuss teratogens. Teratogens are external agents (chemical or physical) that damage embryonic or fetal development. Our modern understanding of such agents has a short history. It was in 1941 that an Australian ophthalmologist first showed that rubella infection during pregnancy was associated with birth defects. An important demonstration of the danger of environmental toxins occurred in Japan in 1956, when mercury exposure through contaminated fish resulted in Minamata Disease.

Thalidomide, which was prescribed to pregnant women in the early 1960s, showed us that a drug considered non-toxic could cause specific malformations. However, not all teratogenic drugs can be identified through obvious and specific birth defects. Experience with DES, prescribed in the 1940s and 1950s to reduce fetal loss in high-risk pregnancies, showed that the effects of drug exposure in utero may not all manifest at birth. By 1970, a clear association between in utero exposure and adenocarcinoma of the vagina in women had been established. Later work suggested a relationship between in utero exposure and reproductive cancers in men.

Researchers face several challenges in studying the teratogenic effects of prescribed drugs. First, as with DES, the outcomes may not manifest at birth but may occur decades later in adult offspring. Second, the careful researcher must also consider why drugs are prescribed. For example, an apparent association between an antibiotic and a birth defect may be attributable not to the medication but to the infection for which the drug was prescribed.
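This "confounding by indication" is easy to demonstrate with a small simulation. In the hypothetical setup below the antibiotic has no effect at all; the infection both prompts the prescription and raises the risk of the defect, yet a naive comparison makes the drug look harmful:

```python
import random

random.seed(2)

exposed_defects = exposed_n = 0
unexposed_defects = unexposed_n = 0

for _ in range(100_000):
    infection = random.random() < 0.10                 # 10% of pregnancies
    antibiotic = infection and random.random() < 0.80  # prescribed only for infection
    # The defect risk depends only on the infection, never on the drug.
    defect = random.random() < (0.05 if infection else 0.01)

    if antibiotic:
        exposed_n += 1
        exposed_defects += defect
    else:
        unexposed_n += 1
        unexposed_defects += defect

rr = (exposed_defects / exposed_n) / (unexposed_defects / unexposed_n)
print(f"naive relative risk for the antibiotic: {rr:.1f}")  # well above 1
```

Comparing treated with untreated pregnancies within the infected group (or otherwise adjusting for the indication) would bring the relative risk back toward 1.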

[/toggle]

[toggle title="Evidence-based Public Health Decisions"]

Evidence-based Public Health Decisions (Healthy Generations, June 2004)

There may be no other health area as sensitive as reproductive health, with concerns including sexually transmitted infections, adolescent sexual behavior, contraception, and abortion. So many reproductive health issues are both intensely private and relentlessly public.

Political will can be as forceful as social justice and public health evidence in shaping public health programs and policies. Public health professionals, however, can never lose sight of our commitment to evidence-based decision-making in the interest of optimizing the health of all people. We must put aside our personal convictions and review the scientific evidence as thoroughly as we can to inform and serve the public. While it is true that scientific methods are not always perfect, they are the best we have, and they are superior to anecdotes, no matter how passionately presented. We need to trust the experts and weigh their conclusions heavily. For example, in 2003 the NIH convened a panel of over 100 experts on abortion and breast cancer, which concluded strongly that there was no association between the two.

The summary report can be accessed here. Furthermore, in March 2004, The Lancet, one of the premier medical journals in the world, published a meta-analysis including thousands of women that came to the very same conclusion: abortion is not related to breast cancer. Individuals have the right to oppose legal abortion, but it is poor public health practice to misinform the public by suggesting that abortion may be linked to breast cancer. It takes years of study and practice to learn how to read a research report. People who are unfamiliar with reading scientific reports may want to examine a short article called “Savvy use of research: tips for policy makers” on the University of Minnesota Children, Youth & Family Consortium website.[/toggle]


[toggle title="Evaluating Program Satisfaction"]

Evaluating Program Satisfaction (Healthy Generations, Oct. 2004)

Evaluating the effects of a program is difficult, which is why evaluation is the subject of many books and graduate courses. For this “Data Moment” we will consider a very specific matter in evaluation: the common practice of asking participants about their satisfaction with a program.

While it is important to know whether participants liked a program, there are many things to consider in attempting such an evaluation. First, evaluators should be careful to select a relevant time frame for the evaluation, as program components (e.g., staff, services) can change over time. The time frame should represent a period during which the program elements were consistent, and it should be recent enough that the evaluation data can inform current practices. Second, evaluators should consider how many people were served by the program during the evaluation time frame in order to identify a representative group for evaluation, in terms of both numbers and the distribution of key demographic or other variables. For example, if a program served 1,000 people during a specific time period and 20 people were surveyed, the findings about “satisfaction” (or any question) would be dubious because it is unlikely that the 20 respondents would be representative of program participants. Further, if 100 people were queried and only 20 responded, the findings would also be questionable: not only would those 20 participants represent a small number of participants, but they may also represent a biased group because they (unlike most of the potential respondents) chose to respond, and their opinions may not reflect the majority opinion.
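The instability of such small samples is easy to quantify. Below is a minimal Python sketch using the Wilson score confidence interval (implemented directly, so no extra libraries are needed); the 16-of-20 figures are purely hypothetical:

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical result: 16 of 20 surveyed participants report satisfaction.
lo, hi = wilson_interval(16, 20)
print(f"n=20:  observed 80%, 95% CI {lo:.0%} to {hi:.0%}")   # about 58% to 92%

# The same observed proportion with 160 of 200 respondents:
lo, hi = wilson_interval(160, 200)
print(f"n=200: observed 80%, 95% CI {lo:.0%} to {hi:.0%}")   # about 74% to 85%
```

Note that this interval reflects only sampling variability; it says nothing about the self-selection bias described above, which no sample size can cure.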

Evaluators should also be sure that the participants in evaluation surveys are aware of the program: people cannot always identify the names of programs that serve them, and thus they may not be able to answer questions that refer to the program by name. In addition, evaluators may want to be sure that participants have had a sufficient “dose” of the program: an individual with only one exposure to a program may have a different perspective than one with multiple exposures. Evaluators will want either to screen survey participants to be sure that they have had a minimum level of program exposure or, if they do not screen, to ask participants about their exposure level so the analysis can be adjusted accordingly.
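One simple form of such adjustment is to report satisfaction separately by exposure level rather than as a single overall average. A sketch with hypothetical pandas data:

```python
import pandas as pd

# Hypothetical evaluation data: number of visits and a 1-5 satisfaction rating.
df = pd.DataFrame({
    "visits": [1, 1, 2, 5, 6, 8, 1, 4, 7, 2],
    "rating": [3, 2, 3, 5, 4, 5, 2, 4, 5, 3],
})

# Classify participants by "dose" and report satisfaction within each
# stratum, rather than one overall average that mixes the two groups.
df["dose"] = pd.cut(df["visits"], bins=[0, 2, float("inf")], labels=["low", "high"])
print(df.groupby("dose", observed=True)["rating"].agg(["count", "mean"]))
```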

Evaluators must also be careful to ensure participant confidentiality and anonymity. Participants may be reluctant to have their identities attached to their responses, for fear of jeopardizing their relationship with the program. Also, “social desirability” may influence responses and must always be considered in the interpretation of survey data. Social desirability refers to the natural desire of survey participants to please evaluators and say things they think the evaluators want to hear. Finally, general satisfaction with a program may be less important than satisfaction with specific components of the program.

Evaluators should try to ask detailed questions about satisfaction with key program components, because these could be most useful for generating ideas about program development or modification. In sum, questions about program satisfaction are frequently asked in program evaluations, but they may not always be answered or interpreted carefully. One of many useful resources for evaluation is the American Evaluation Association, which provides links to online evaluation handbooks and texts.[/toggle]

[toggle title="Definitions of Immigrant and Refugee"]

Definitions of Immigrant and Refugee (Healthy Generations, Feb. 2005)

As health professionals, you likely have an interest in immigrants and maternal/child health issues. How do we define immigrants, and how many immigrants are there in the United States, or in a given community? At first glance these may seem like straightforward questions; however, they are actually extremely complex.

Think about your own geographic community. How many immigrants live there? After you make a ‘guesstimate,’ reflect on which populations you had in mind. What groups are included or excluded when researchers or providers count immigrants based upon each of the following variables: place of birth? language spoken? race/ethnicity? Which definitions include or exclude the U.S.-born children of foreign-born adults? Are refugees counted as ‘immigrants’? It is also important to think about the implications of using different definitions of immigration status and race/ethnicity. To do this, ask yourself the purpose of, or need for, the definition. The measurement choice you make may be quite different if your purpose is to determine eligibility for services versus access to care versus differences in health behavior or susceptibility to particular diseases.

Here are some typical ways people are classified or counted:

  • Foreign born or foreign ancestry (e.g., self-identification in census data)
  • U.S.-born children of foreign-born adults
  • INS statistics on visas issued (will you include visitors in your count?)
  • Minority group members
  • Limited English Proficiency (LEP) children and adults
  • Refugees
  • People seeking services (e.g. foreign victims of torture; attending international clinics; or seeking services from health departments, church groups, social service agencies, immigrant associations)

Given the diversity of definitions, it is important that definitions are clearly stated and consistently applied, especially when data from more than one source are compared.
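How much the chosen definition matters can be shown with a toy dataset (all fields and values below are hypothetical):

```python
import pandas as pd

# Five toy records illustrating how different operational definitions
# capture different (and overlapping) groups of people.
people = pd.DataFrame({
    "foreign_born":        [True,  True,  False, False, True ],
    "parent_foreign_born": [True,  True,  True,  False, True ],
    "limited_english":     [True,  False, True,  False, False],
    "refugee":             [False, True,  False, False, False],
})

definitions = {
    "foreign born":                people["foreign_born"],
    "foreign born or 2nd gen.":    people["foreign_born"] | people["parent_foreign_born"],
    "limited English proficiency": people["limited_english"],
    "refugee":                     people["refugee"],
}

for name, mask in definitions.items():
    print(f"{name:28s} -> {mask.sum()} of {len(people)}")
```

The same five records yield counts from 1 to 4 depending on the definition, which is why comparing figures built on different definitions can be so misleading.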

[/toggle]

[toggle title="Temporality in Exposure – Health Outcome Studies"]

Temporality in Exposure – Health Outcome Studies (Healthy Generations, Feb. 2006)

Data about current exposures may not be a good reflection of past exposures or of cumulative exposures. In job strain research, it is common (but not optimal) to collect data about the current work environment in order to conduct analyses of outcomes (e.g., obesity, hypertension, substance use) that may have had their onset decades previously.

One measure of the quality of an analysis is the degree of certainty that the exposure (e.g., job strain) preceded the onset of the disease or health behavior. Although job strain may change over time (so one cannot be 100% certain it came before the outcome), there is substantial research evidence that job strain is associated with cardiovascular risk factors. However, the most compelling studies of these associations are rare. Such studies would assess complete work histories in order to: (1) examine past job strain that clearly occurred before the health behavior or disease onset; (2) evaluate the association of cumulative job strain with health; and (3) determine whether changes in job strain (e.g., shifting from high-strain to low-strain positions) are related to health outcomes.
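A sketch of the kind of restriction such a study requires, using hypothetical yearly job-strain measurements and an outcome onset year for each worker:

```python
import pandas as pd

# Hypothetical yearly job-strain scores for two workers, plus the year
# each worker's outcome (e.g., hypertension) was first diagnosed.
strain = pd.DataFrame({
    "worker": ["A"] * 4 + ["B"] * 4,
    "year":   [2000, 2001, 2002, 2003] * 2,
    "strain": [3, 4, 4, 5, 1, 1, 2, 2],
})
onset_year = {"A": 2002, "B": 2005}
strain["onset"] = strain["worker"].map(onset_year)

# Keep only exposure measured strictly before disease onset, so the
# temporal ordering (exposure precedes outcome) is defensible...
pre_onset = strain[strain["year"] < strain["onset"]]

# ...then summarize cumulative, not just current, exposure per worker.
print(pre_onset.groupby("worker")["strain"].agg(["count", "sum", "mean"]))
```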

[/toggle]

[/raw]