- What are response rates?
- Why are response rates important?
- Do low response rates mean the results are wrong?
- What do I do with this information?

### What are response rates?

This survey is a snapshot of the conditions in your school at the time of survey administration, based on the responses of the individuals who participated. It can give you a good idea of what is going on in your school in order to help you and your community with school improvement. Response rates can help you understand the quality of the data provided by teachers and students at your school. The response rate is calculated as the total number of responding individuals divided by the total number of **eligible individuals**. Students are eligible to participate if they are enrolled in a school and are able to take the survey. Those eligible to take the teacher survey include:

- Self-contained and subject-specific classroom teachers
- Instructional coaches and subject matter specialists
- Teacher aides, paraprofessionals, and CCTs (cooperating classroom teachers)
- Special education teachers working in a single classroom or across classrooms
- Counselors, librarians, and other staff members who teach students
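The response-rate calculation itself is just a ratio of responders to eligible individuals. A minimal sketch in Python (the counts below are made up for illustration):

```python
def response_rate(responding: int, eligible: int) -> float:
    """Response rate = responding individuals / eligible individuals."""
    if eligible <= 0:
        raise ValueError("eligible count must be positive")
    return responding / eligible

# Hypothetical school: 312 of 400 eligible students responded.
print(response_rate(312, 400))  # 0.78
```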

### Why are response rates important?

For every aspect of school climate in the 5Essentials, the sample size has an effect on the results. For example, if we’re trying to measure students’ perception of safety, we’ll get more reliable information in a large school than in a small one, simply because there are more students, and therefore more measurements, in the larger school. Similarly, when response rates increase, the number of measurements increases.

### Do low response rates mean the results are wrong?

**Not necessarily.** Low response rates may indicate the potential for bias in the results, but they do not necessarily mean that bias exists. Bias occurs when there are substantial differences between the responses of those who took the survey and the way nonresponders would have responded had they taken it. When those differences are substantial, the survey results do not accurately represent what everyone in the school thinks or feels.

For example, news organizations often survey citizens to determine presidential job approval ratings. Rather than ask every citizen how they feel, they take a sample that is representative of the entire population. As long as those who respond to the survey accurately reflect the population of citizens as a whole, the results will not be biased. Thus, even though less than one hundredth of 1% of the population responds, the results reflect the feelings of the population overall.

Mathematically, bias occurs when the value of some statistic (like an average) differs between those who responded and the full eligible sample.

If we divide by the average of the full sample, we can compare bias values across variables:

Relative bias = [avg(y_r) - avg(y_n)] / avg(y_n)

where avg(y_r) = the average of some variable *y* (like age or GPA) for all responding individuals, and avg(y_n) = the average of *y* for all **eligible** individuals.

A little bit of algebra reveals that the relative bias is related to the response rate. Because avg(y_n) is the weighted average of the responder and nonresponder averages, avg(y_n) = Response Rate * avg(y_r) + (1 - Response Rate) * avg(y_m), the formula above can be rewritten as:

Relative bias = (1 - Response Rate) * [avg(y_r) - avg(y_m)] / avg(y_n)

where avg(y_m) = the average of *y* for **nonresponding** individuals. So if the response rate is 100%, then the relative bias of *y* is zero. But if the response rate is less than 100% *and* there are substantial differences between responding and nonresponding individuals, then *y* has some bias associated with it.

The table below presents examples of what happens when response rates are low or high and when there is a large or small difference in an outcome, in this case the average GPA. Each cell is the relative bias as calculated from the formula. For these examples, the overall average avg(y_n) is computed as the weighted average of avg(y_r) and avg(y_m), so that we can hold both the response rate and the difference between respondents and nonrespondents constant.

Notice how the response rate and the difference avg(y_r) - avg(y_m) interact. When there is a small difference between respondents and nonrespondents, the bias at a 50% response rate __can__ be lower than the bias at a 75% response rate with a large difference.

| | Response rate = 99% | Response rate = 75% | Response rate = 50% |
| --- | --- | --- | --- |
| SMALL DIFFERENCE between respondents and nonrespondents: avg(y_r) = 2.45, avg(y_m) = 2.35 | (1-99%)\*(2.45-2.35)/2.449 = 0.0004 | (1-75%)\*(2.45-2.35)/2.425 = 0.0103 | (1-50%)\*(2.45-2.35)/2.40 = 0.0208 |
| LARGE DIFFERENCE between respondents and nonrespondents: avg(y_r) = 2.52, avg(y_m) = 2.0 | (1-99%)\*(2.52-2.0)/2.515 = 0.0021 | (1-75%)\*(2.52-2.0)/2.39 = 0.0544 | (1-50%)\*(2.52-2.0)/2.26 = 0.1150 |
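The six cells above can be reproduced with a short loop, using the responder and nonresponder averages from the table:

```python
# Reproduce the relative-bias table: two respondent/nonrespondent gaps
# crossed with three response rates.
cases = {"small difference": (2.45, 2.35), "large difference": (2.52, 2.0)}
rates = [0.99, 0.75, 0.50]

for label, (y_r, y_m) in cases.items():
    for rr in rates:
        y_n = rr * y_r + (1 - rr) * y_m          # weighted overall average
        bias = (1 - rr) * (y_r - y_m) / y_n
        print(f"{label}, response rate {rr:.0%}: {bias:.4f}")
```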

We cannot calculate the relative bias for our survey measures because we do not have information on the students or teachers who do not respond. But we can look at the bias for some variables that are available for all students. If biased variables are related to our measures, then our measures likely have some bias.

#### Example 1: Higher response rates are related to *less* bias

After the 2007 Chicago Public Schools student survey, we calculated the relative bias of weighted GPA and test scores within each school. In Figure 1, the school-level response rates are plotted along the horizontal axis. The vertical axis shows the relative bias of GPA.

**Figure 1. School-level response rates and bias of GPA.**

#### Example 2: Higher response rates are unrelated to bias

But let’s contrast this with test scores (PSAE math scores in high schools). In Figure 2, we see a different situation. As response rates increase, the bias of this variable does not change; it stays around zero in schools with 35% response rates as well as in schools with 90% response rates.

**Figure 2. Response rates and standardized bias of PSAE math scores in CPS high schools in 2007.**

### What do I do with this information?

Schools must reach a minimum 50% response rate to receive reports. But whether your response rate is 51% or 91%, please interpret the results in light of this information. High response rates yield more *certainty* in our measures, but high response rates do not always yield less *bias*. Keep in mind the over- or under-representation of certain groups when you interpret the findings.