On Friday, a COVID-19 Data Dispatch reader asked for my help in interpreting a wildly high test positivity rate: 544% in Washington, D.C. The source of this rate, she said, was Johns Hopkins University (JHU)’s COVID-19 dashboard.
Test positivity rates seem simple; they’re calculated by dividing the number of positive tests by the total tests reported in a particular place, over a particular period of time. But these rates can be hard to calculate accurately because positive tests—a.k.a. COVID-19 cases—are often reported on a different time scale from all (positive and negative) tests.
If a health department is swamped with COVID-19 data—or if it’s coming off of a holiday break—it will prioritize analyzing and reporting the case numbers over other metrics, because case reporting is most important for public health measures like contact tracing. Similarly, some labs might send in positive test results before they send in negative test results. This can lead to, say, 100 cases being reported on a Monday while the tests used to find those cases aren’t reported until later in the week.
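Here’s a toy sketch of how that reporting lag can break the naive calculation. All the numbers below are made up for illustration: a backlog of positive results lands on Monday, while most of the corresponding negative results don’t get reported until later in the week.

```python
# Made-up daily *reported* counts (not tests actually performed that day).
# Monday: a backlog of positive results arrives before the negatives do.
daily_cases = {"Mon": 500, "Tue": 120, "Wed": 110}
daily_total_tests = {"Mon": 90, "Tue": 800, "Wed": 900}

def naive_positivity(day):
    """Positivity as a dashboard might naively compute it:
    cases reported that day / total tests reported that day."""
    return daily_cases[day] / daily_total_tests[day]

# Monday's rate blows past 100%, because the denominator hasn't
# caught up with the numerator yet.
print(f"Monday: {naive_positivity('Mon'):.0%}")

# Averaging over a window long enough to cover the lag brings the
# rate back down to something plausible.
week_rate = sum(daily_cases.values()) / sum(daily_total_tests.values())
print(f"Week:   {week_rate:.0%}")
```

The point isn’t the specific numbers; it’s that whenever cases and tests arrive on different schedules, a single day’s ratio is meaningless, and can easily exceed 100%.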
States and localities that calculate their own positivity rates have systems to account for these time differences, usually by matching up the dates that tests took place. But JHU can’t do this: its test positivity rates come from automated data scrapes and calculations, with none of the backend timing information you’d need to determine an accurate rate.
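A minimal sketch of what that date-matching looks like, using hypothetical line-level records (every record and date below is made up). Because each test is counted on the date it was actually performed, positives and negatives from the same day always land in the same bucket, so the rate can never exceed 100%:

```python
from collections import defaultdict

# Hypothetical line-level test records, as a state health department
# might hold them: each result tagged with the date the test took place.
tests = [
    ("2022-01-03", "positive"), ("2022-01-03", "negative"),
    ("2022-01-03", "negative"), ("2022-01-04", "positive"),
    ("2022-01-04", "negative"),
]

# Group every result by the date the test was performed, regardless
# of when it was reported.
by_date = defaultdict(lambda: {"positive": 0, "total": 0})
for test_date, result in tests:
    by_date[test_date]["total"] += 1
    if result == "positive":
        by_date[test_date]["positive"] += 1

for day in sorted(by_date):
    counts = by_date[day]
    print(day, f"{counts['positive'] / counts['total']:.0%}")
```

A scraped dashboard can’t do this, because it only sees the daily reported totals, not the per-test dates.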
In short, if you see a wildly high test positivity rate sourced from JHU’s dashboard, don’t trust it. Go look at the state, city, or county’s own COVID-19 data, or check the CDC dashboard instead.
Also: I’d like to write more about test positivity next week, since this is such a confusing metric right now. If you have questions on this topic, send them my way!