Category: Uncategorized

  • National numbers, Nov. 8

    In the past week (November 1 through 7), the U.S. reported about 715,000 new COVID-19 cases, according to the COVID Tracking Project. This amounts to:

    • An average of 102,000 new cases each day
    • 218 total new cases per 100,000 people
    • 1 in 460 Americans getting diagnosed with COVID-19
    • 2.7 times the total cases Canada has reported over the entire course of the pandemic
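The bullet-point figures above are all simple transformations of the weekly total. Here's a minimal sketch of the arithmetic; the population and Canada figures are rough approximations assumed for illustration, not values from the COVID Tracking Project:

```python
# Deriving the weekly summary figures from the raw case total.
# Population and Canada totals are rough assumptions for illustration.
US_POPULATION = 328_000_000    # approximate 2020 U.S. population
CANADA_TOTAL_CASES = 265_000   # approximate Canadian cumulative total, early Nov. 2020

weekly_cases = 715_000

daily_average = weekly_cases / 7                    # about 102,000 new cases per day
per_100k = weekly_cases / US_POPULATION * 100_000   # about 218 cases per 100,000 people
one_in_n = US_POPULATION / weekly_cases             # about 1 in 460 Americans
vs_canada = weekly_cases / CANADA_TOTAL_CASES       # about 2.7x Canada's cumulative total
```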

    While the Midwest is bearing the brunt of this recent surge, cases are rising across the country. Twenty-three states broke their COVID-19 records in the past week, and the COVID Exit Strategy tracker currently categorizes almost every state as “trending poorly” or “uncontrolled spread.” Vermont and Hawaii are the only two exceptions.

    America also saw:

    • 6,900 new COVID-19 deaths last week (2.1 per 100,000 people)
    • 55,800 people currently hospitalized with the disease (as of November 7)

    There is no doubt that we’re seeing a third national surge, and we should not expect it to let up anytime soon. I recommend checking the COVID-19 Risk Levels Dashboard (by the Harvard Global Health Institute, Brown School of Public Health, et al.) for more detail on the current risk in your community.

    Nationwide COVID-19 metrics published in the COVID Tracking Project’s daily update on November 7. America broke its single-day new case record for the fourth day in a row.
  • Sources and updates, Nov. 1

    The sources listed here are included in my source list, along with all featured sources from past issues.

    • Detailed hospitalization data go unreported: A new story by NPR’s Pien Huang and Selena Simmons-Duffin reveals county-, city-, and individual hospital-level reports which the Department of Health and Human Services (HHS) circulates internally but does not post publicly. HHS’s public reports on hospital capacity only include data at the state level. According to Huang and Simmons-Duffin’s reporting, more local data and contextual information such as per capita calculations and time series would be incredibly useful for the public health experts who are trying to determine where aid is most needed. The NPR story also notes that hospital compliance is low: only 62% of U.S. hospitals had sent HHS all the required information in the week prior to October 30.
    • HHS Protect has expanded: For a few months now, the HHS Protect Public Data Hub has only hosted COVID-19 hospitalization data. But recently, the website expanded to include a section on national testing. Users can clearly see cumulative PCR testing numbers for the country, download the full dataset, and read documentation. This dataset has been publicly available on healthdata.gov since July, but by hosting it on the HHS Protect Public Data Hub, the agency has made it more easily accessible for Americans who are not data nerds like myself.
    • Daily testing needs: A new tool from the Brown School of Public Health helps users calculate how many tests are needed for key essential groups, both for the nation overall and state-by-state. The tool is intended for public health leaders and policymakers who are starting to scale up testing as antigen tests become more widely available. For example, New York would need 37,300 tests a day to screen all college and university students.
    • Pennsylvania’s antigen tests: On October 14, Pennsylvania started distributing antigen test kits to health centers, nursing homes, and other facilities throughout the state. The facilities receiving tests are reported by the state in weekly lists. I wanted to share this because it’s a great example of testing transparency, though if Pennsylvania adds antigen tests to their dashboard, their reporting will be even more comprehensive. For more information on why state antigen test reporting is important—and how states have failed at it so far—see my COVID Tracking Project blog post from last week.
    • COVID holiday FAQs: Epidemiologists from Boston University, the University of Alabama at Birmingham, and the University of Miami have compiled their responses to common concerns around the holiday season. The questions included range from, “How do I talk to friends and family members about COVID and the holidays?” to, “Is it important to get my flu shot?” (P.S. It is. Get your flu shot.)
    • COVID-19 in ICE detention centers: Since March 24, researchers from the Vera Institute of Justice have been compiling data from Immigration and Customs Enforcement (ICE) on COVID-19 cases and testing in immigrant detention centers. The researchers note that ICE replaces previously reported numbers whenever its dataset is updated, making it difficult to track COVID-19 in these facilities over time.
    • Eviction Lab: Researchers from Princeton University compile data for this source by reviewing formal eviction records in 48 states and the District of Columbia. Although the source’s most recent state-level dataset dates to 2016, the group is also tracking COVID-19-related evictions in real time for a select group of cities. Houston, TX, at the top of the list, has seen over 13,000 new eviction filings since March.
    • HHS celebrity tracker: Here’s one more piece of HHS news, this one more lighthearted. This week, POLITICO’s Dan Diamond released an HHS document called the “PSA Celebrity Tracker,” which health officials were using to determine which of America’s favorite people may be useful in an ad campaign encouraging the nation to be less negative about COVID-19. (Here’s more context from POLITICO on the tracker.) Alec Baldwin, for example, is listed as a celebrity who appeals to the elderly, with the additional note: “interested but having a baby in a few weeks.” Lin-Manuel Miranda is listed as appealing to Asian-Americans, with the note: “No information regarding political affiliation.”
  • Answering readers’ COVID-19 questions

    Editor’s note, Jan. 3, 2021: On Nov. 1, 2020, I ran a Q&A thread on Substack in order to answer readers’ questions in the lead-up to the U.S. election.

    Thank you to everyone who asked questions in the thread today. I appreciated the chance to hear about your current COVID-19 concerns, and I got a few ideas for future issue topics. I hope that my answers were useful.

    Here’s one question which I wanted to broadcast to everyone:

    Ross asked: Hi Betsy—long time reader, first time asker. Have we seen significant spikes in COVID in connection with national holidays, or are spikes largely attributable to other factors? Should we be expecting a Thanksgiving spike? What about an election protest spike?

    My response: Thanks Ross, that’s a good question! First of all, I need to clarify that it’s really hard to find a causal association between case spikes and specific events in the U.S., because our contact tracing apparatus simply isn’t up to it in most places. We can’t conclusively find out how many people were infected at a given event or location unless we can test all of them and get those test results to a central location and adjust for confounding factors, like other events that people attended or travel they did. There have been a few scientific studies that look for these associations (Stanford University researchers recently published a paper about Trump rallies, for example), but largely it is difficult to make these conclusions as events are ongoing.

    That being said, the COVID Tracking Project has noted case spikes in the South after Memorial Day, which occurred when many states were loosening lockdown orders. It’s important to note here that these kinds of case spikes are usually delayed; it takes a couple of weeks for people to notice symptoms and get tested (causing cases to spike), and then another week or two for hospitalizations to spike, and then another week or two after that for deaths to spike. (Caroline Chen has explained this lag for ProPublica.) But to answer your question of whether experts are expecting a Thanksgiving spike: yes, they definitely are. Here’s Fauci talking about it, from a couple of weeks ago.

    And as for protests—this is also difficult to say for sure, as it is difficult to even estimate how many people attend a protest, let alone to test and contact trace them all. But, to my knowledge, no protest has been a superspreader event so far. Health experts cite the fact that protests are usually outside and have high mask compliance as a possible reason why they have not proven to be as risky as, say, Trump rallies.

    And one more:

    Martha asked: Hi Betsy, In this time of pandemic fatigue, I am interested in rankings of reasonable activities to keep some economic sectors going without becoming part of the problem (i.e. infected). What are your favorite (or a favorite) source that ranks activities? Do you know of any detailed studies that gets at nuances (with my pod vs. with people not in my pod)?

    My response: Maryn McKenna has actually written a great story about COVID-19 risk charts, including the strengths and weaknesses of a couple of widely-cited resources. It has been a couple of months since this story, though, and since then, more interactive resources have popped up. One that I like is the microCOVID project, which estimates your risk based on your location, the number of people you’ll be seeing, mask types, and more. Another resource, which I’ve cited in the newsletter before, is Georgia Tech’s COVID-19 Event Risk Assessment Planning Tool. This tool is simpler, but it gets very precise about the risk levels in your state and county.

    I haven’t seen specific studies that get at the nuances of risk levels inside/outside of a pod, largely because I think this is a hard thing for epidemiologists to track. (America! Bad at contact tracing!) But I will say that it is important for you to be clear and realistic about who is in your pod. For example, I live with three roommates in Brooklyn. I sometimes visit my sister, who lives in Manhattan. Two of my roommates are commuting to their respective offices on reduced schedules. So, if one of my sister’s roommates tests positive for COVID-19, that means that, depending on the timing, I, and all of my roommates, and all of my roommates’ coworkers should consider that we may have been exposed. The bigger your pod, the more regular testing can help assuage these types of concerns.

    My comment sections are always open for questions about the week’s issue. Or, if you would like to use a less public platform, you can hit me up at betsy@coviddatadispatch.com.

  • Featured sources, Oct. 25

    These sources have been added to the COVID-19 Data Dispatch resource list, along with all sources featured in previous weeks.

    • Missing in the Margins: Estimating the Scale of the COVID-19 Attendance Crisis: This new report by Bellwether Education Partners provides estimates and analysis of the students who have been unable to participate in virtual learning during the pandemic. While the state-by-state estimates and city profiles may be useful to local reporters, the overall numbers should shock us all: three million students, now left behind.
    • The Pandemic and ICE Use of Detainers in FY 2020: The Transactional Records Access Clearinghouse (or TRAC) at Syracuse University has collected data on U.S. immigration since 2006. The project’s most recent report describes the pandemic’s impact on Immigration and Customs Enforcement (ICE)’s practice of detaining individuals as a step for apprehending and deporting them.
    • COVID-19 Risk Levels Dashboard: This dashboard by the Harvard Global Health Institute and other public health institutions now includes COVID-19 risk breakdowns at the congressional district level. Toggling back and forth between the county and congressional district options allows one to see that, when risk is calculated by county, a few regions of the U.S. are in the “green”; at the congressional district level, this is not true for a single area.
    • COVID-19 at the White House: VP Outbreak: The team behind a crowdsourced White House contact tracer (discussed in my October 4 issue) is now tracking cases connected to Vice President Mike Pence.
  • Featured sources, Oct. 18

  • Contact tracing: Too little, too late, no public data

    Most states are not ready to find and trace all of their new COVID-19 cases as the country heads into a new wave of outbreaks. Screenshot via Test and Trace, taken on October 18.

    On October 1, a little over two weeks ago, I received an email from New York Governor Andrew Cuomo’s office.

    The email invited me to download a new COVID-19 phone application, developed by the New York State Department of Health along with Google and Apple. The app, called COVID Alert NY, is intended to help New Yorkers contact trace themselves. (Side note: I am not entirely sure how Cuomo’s office got my email, but I suspect it has something to do with the complaints I left about his budget back in June.)

    Here’s how Cuomo’s office describes the app:

    COVID Alert NY is New York State’s official Exposure Notification App. This is a free smartphone app available to anyone 18+ who lives and/or works in New York. The app uses Bluetooth technology—not location data—to quickly alert users if they have been in close contact with someone who has tested positive for COVID-19. Once alerted, users can quickly protect themselves and others by self-quarantining, contacting their physician and getting tested.

    The app is intended to fit into New York’s contact tracing efforts by automatically informing app users that they have been exposed to COVID-19 and prompting them to take the necessary precautions. It also features a symptom checker, which asks users to note if they have exhibited a fever, cough, or other common COVID-19 symptoms, and a page with the latest case and testing data for every county in New York.

    Contact tracing, or the practice of limiting disease spread by personally informing people that they have been exposed, has been a major method for controlling COVID-19 spread in other countries, such as South Korea. But in the U.S. the strategy is—like every other part of our nation’s COVID-19 response—incredibly patchwork. We have no national contact tracing app, much less a national contact tracing workforce, leaving states to set up these systems on their own.

    Back in May, I researched and wrote an article for Stacker about this problem. I compared contact tracing targets, calculated by researchers at George Washington University, with the actual numbers of contact tracers employed in every state, compiled by the public health data project Test and Trace. GWU’s estimates started at a baseline of 15 contact tracers per 100,000 people, then were adjusted based on COVID-19 trends in every state. Now, this story should be seen as a historical snapshot (the summer’s Sun Belt outbreaks hadn’t yet started when I wrote it), but it is telling to scroll through and see that, even several months into America’s COVID-19 outbreak, the majority of states had tiny fractions of the contact tracing workforces they needed to effectively trace new cases. New York, for example, had a reported 575 contact tracers employed, compared to an estimated need of over 15,000 contact tracers.

    Today, many states are doing better. New York is up to 9,600 contact tracers, according to Test and Trace’s latest counts, and has planned to hire thousands more. This state, along with Massachusetts, New Hampshire, New Jersey, Connecticut, Vermont, and Washington D.C., has received high marks from Test and Trace’s scoring system, with 5 to 15 tracers employed for every new positive COVID-19 case. But all of these high-scoring states are in the Northeast, where COVID-19 outbreaks peaked in the spring. The Midwestern states currently seeing spikes, such as Wisconsin and Missouri, all rank far lower on their preparedness to trace new cases. (See the screenshot above.)

    Meanwhile, actual data on the efficacy of these contact tracers are difficult to come by. To continue using New York as an example: since the application’s release on October 1, New York’s Department of Health has not released any data on how many people have downloaded the application, much less how many positive cases have been logged or how many contacts have been traced. Data have neither been mentioned in Cuomo’s press releases nor have they appeared on the state’s COVID-19 dashboard.

    According to tech website 9to5Mac, as of October 1, 11 states had registered contact tracing apps with Google and Apple’s exposure notification technology. These states include Alabama, Arizona, Delaware, Nevada, New Jersey, North Carolina, North Dakota, Pennsylvania, Virginia, and Wyoming, as well as New York. Six more states have apps in development.

    A brief analysis by yours truly found that, of those 11 states with contact tracing apps, only four post contact tracing data: Delaware, New Jersey, North Dakota, and Wyoming. Delaware and New Jersey both have dedicated data pages detailing the share of COVID-19 cases which have successfully participated in the state’s contact tracing efforts (57% and 71%, respectively). North Dakota and Wyoming both post statistics on their cases’ source of COVID-19 exposure, including such categories as “contact with a known case,” “community spread,” and “travel”; these data must be sourced from contact tracing investigations. 11.1% of North Dakota’s cases and 27.1% of Wyoming’s cases have an exposure source listed as “unknown” or “under investigation,” as of October 18. Meanwhile, Pennsylvania and North Carolina have both posted statistics on their contact tracing workforces, but no data on the results of these workforces’ efforts.

    Other states without registered apps may also be posting contact tracing data. But it is still a notable discrepancy that, among the states that have systematic contact tracing technology, tracing results are lacking. Compare these states to South Korea, which at the height of its outbreak publicly posted demographic information and travel histories for individual COVID-19 cases in alerts to surrounding communities. South Korea’s approach has faced criticism, however, for reporting private information about people who tested positive.

    And that brings me to the biggest weakness for American contact tracing: lack of public trust. Americans, more than residents of other nations, tend to be concerned about personal privacy and, as a result, are more wary of speaking to strangers on the phone or using an application that sends their data to the government, even if all those data are anonymized. Olga Khazan explained this issue in an article for The Atlantic, published in late August:

    Still, contact tracing depends on trust, and many Americans don’t trust the government enough to give up their contacts or follow quarantine orders. Of the 121 agencies Reuters surveyed, more than three dozen said they had been hindered by people’s failure to answer their phone or provide their contacts. About half of the people whom contact tracers call don’t answer the phone, because they don’t want to talk with government representatives, Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases, said during a June news conference.

    Black and Hispanic or Latino communities are particularly likely to distrust the government and avoid contact tracers’ calls. This attitude makes sense, given how both America’s government and medical systems are inexorably tied to racist histories. But for the contact tracers hoping to help these communities—which have been disproportionately impacted by COVID-19—it’s another barrier to stopping the virus’ spread.

    Even I, as someone who understands more about the need for contact tracing than the average American, am wary about using New York’s COVID Alert app. The app asks me to turn on both Bluetooth and location data, and even though COVID Alert purports to be anonymous, Twitter, Instagram, and other applications have made no such promises. So far, I have been using the application when I go to the park, grocery shop, or ride the subway, but for the vast majority of my days it sits dormant on my phone.

    And of course, I have to wonder: where was this app in March, when the city shut down and my neighborhood filled with ambulance sirens? Like most other parts of America’s COVID-19 response, contact tracing has been scattered and difficult to evaluate, but the data we do have indicate that most states are doing too little, too late.

  • Featured sources, Oct. 4

    All featured sources are available here.

    • Search trends on COVID-19 symptoms: Researchers can now access a Google Search dataset of search trends related to over 400 COVID-19 symptoms and related health conditions, such as fever, cough, and difficulty breathing.
    • FAQs on Protecting Yourself from COVID-19 Aerosol Transmission: This public document was compiled by a group of scientists and engineers who study aerosol transmission. Answered questions range from “How long can the virus stay in the air indoors?” to “What is the best type of mask?”
    • MIT COVID-19 Indoor Safety Guideline: Another air transmission resource allows users to calculate the risk levels for different indoor spaces, based on a model by MIT researchers Martin Bazant and John Bush.
    • Open States COVID-19 Legislation: Open States, a public civic engagement project, is compiling a list of legislation related to the COVID-19 pandemic in the U.S. The database currently tracks over 3,000 bills in 46 states.
  • I am once again asking: why are journalists doing this?

    President Trump and the First Lady tested positive for COVID-19 in the early morning on Friday, October 2. As I draft this newsletter on Sunday morning, at least 15 other people connected to the President have tested positive, ranging from Bill Stepien, Trump’s campaign manager, to New York Times Washington correspondent Michael Shear.

    You might expect me to source this number and these names from a federal public health agency, which is conducting all of these tests and making their results public. Not in this pandemic! My source is, of course, a dashboard compiled by volunteer journalists and science communicators.

    This dashboard, called the COVID-19 At The White House Contact Tracker, is attempting to trace over 200 contacts in connection with the President and his staff. The team behind it includes Benjy Renton, independent reporter on COVID-19 in higher education, Peter Walker, data visualization lead at the COVID Tracking Project, and Jesse O’Shea, MD, infectious disease expert at Emory University.

    The Contact Tracker is an incredible public service. In its current form, the dashboard lists 235 White House contacts who should get tested for COVID-19, along with their positions, test results (if known), symptoms (if they test positive), and the date of their most recent test. You can also view the data as a timeline, based on each person’s last contact with the President, and as a map based on the Rose Garden ceremony, the debate, and two other potential spreading events.

    It is not surprising, after months of poor data reporting from the federal government, that instead of the CDC or the HHS, the best source of data on this high-profile outbreak is—as Dr. O’Shea puts it—“three awesome dudes [contact tracing] from our homes.” But it is worth emphasizing.

    What are federal public health agencies prioritizing right now, you might ask? The HHS is planning a $300 million-plus ad campaign with the goal of “defeating despair” about the coronavirus. And this money came out of the CDC’s budget. I was planning to devote a bigger section to this campaign before COVID-19 hit the White House, but instead, I will direct you to an excellent (and terrifying) POLITICO feature on the subject. Dan Diamond also discusses his investigation of the campaign on his podcast, POLITICO’s Pulse Check.

  • School data with denominators

    The COVID-19 School Response Dashboard has surveyed nearly 1,000 individual schools and districts on their enrollments, case counts, and COVID-19 mitigation strategies. Screenshot retrieved on October 3.

    The data sources on COVID-19 in U.S. K-12 schools vary widely, but most of them have one thing in common: they only report case counts.

    Texas retracted its school data this past week due to errors. Florida recently began publishing school reports—which list out cases by individual school while failing to provide cumulative totals. But a larger problem for these states and others is that, when case numbers are reported in isolation, there is no way to compare outbreaks at different locations.

    Imagine, for example, that you only knew that Wisconsin had seen 18,000 cases in the past week, while Texas had seen 28,000. You would assume that Texas is currently in more dire straits, with more people infected. But adjust for population—divide those case numbers by the populations of both states—and you find that Texas has an infection rate of about 95 people per 100,000 Texans, while Wisconsin has a rate of about 303 people per 100,000, over three times higher. Texas is slowly recovering from its summer outbreak, while Wisconsin is an outbreak site of major national concern.
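The adjustment described here is a one-line calculation. Here's a minimal sketch; the state populations are approximate 2020 figures I'm assuming for illustration, which is why the results land near, but not exactly on, the rates quoted above:

```python
# Population-adjusted case rate: new cases per 100,000 residents.
def cases_per_100k(cases: int, population: int) -> float:
    return cases / population * 100_000

# Approximate 2020 state populations, assumed for illustration.
texas_rate = cases_per_100k(28_000, 29_500_000)     # roughly 95 per 100,000
wisconsin_rate = cases_per_100k(18_000, 5_900_000)  # roughly 305 per 100,000
```

With the denominators in place, Wisconsin's rate comes out more than three times Texas's, even though its raw weekly count is lower.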

    In the case of school data, enrollment numbers are the key to these comparisons. Knowing how many students are infected in your district may be useful, but unless you know how many students are actually going into school buildings on a regular basis, it is difficult to translate the case numbers into actionable conclusions. The majority of states which report school COVID-19 data do not report such numbers, and even those that do may have incomplete data. New York’s dashboard, for example, currently reports 0 staff members in the New York City school district, which opened for in-person instruction last week.

    Volunteer datasets similarly focus on case numbers. The National Education Association School and Campus COVID-19 Reporting Site, built from the crowdsourced spreadsheet of Kansas high school teacher Alisha Morris, compiles case counts from news outlets and volunteer reports. The COVID Monitor, a school dashboard produced by Rebekah Jones’ Florida COVID Action project, combines news and volunteer reporting with state-reported numbers. Both of these efforts are incredibly comprehensive in documenting where COVID-19 is impacting students and teachers, but without enrollment numbers for the schools, it is difficult to use the data for meaningful comparison.

    Even the New York Times focuses on case counts. The Times’ review of school COVID-19 cases found extremely scattered public reporting, but the paper failed to include any denominators—not even the county case counts which this paper has been tracking since early in the pandemic. Alexander Russo, columnist at education journal Phi Delta Kappan and friend of this newsletter, recently commented on how such cases-only reporting may paint a misleading picture of the pandemic’s impact.

    Clearly, we need denominators for our case counts. And a new dataset is out to provide this crucial metric. Emily Oster, Professor of Economics and Public Policy at Brown University, collaborated with software company Qualtrics and several national education associations to build a COVID-19 school dashboard which focuses on case rates, not counts.

    This project sources data by directly surveying schools every two weeks, rather than relying on sporadic news and volunteer reports. And it includes information about school reopening plans and mitigation strategies, such as whether masks, increased ventilation, and symptom screenings are in use. As of the dataset’s most recent update (for the two-week period of September 14 to 27), 962 schools in 47 states are included. These schools report an average student infection rate (confirmed and suspected cases) of 0.62% and an average staff infection rate of 0.72%; both rates are up from 0.51% and 0.5%, respectively, in the previous two weeks. For more initial findings, see this NPR feature on the dashboard, published on September 23.

    I spoke to Oster this past Tuesday, only four days after the dashboard’s public release. She explained more detail about the project’s methodology and her future plans for tracking COVID-19 in schools. (This interview has been lightly edited and condensed for clarity.)


    Interview

    Betsy Ladyzhets: What is your background in data and education reporting? What have you been working on during the COVID-19 pandemic that led you to this dashboard?

    Emily Oster: I am, by training, an economist, so I have a lot of background in data analysis and some in data collection. But most of my work, virtually all of my work has been on health, not on education. I have written a couple of books on pregnancy and parenting, so I have this audience of parents. And during the pandemic, I was writing a lot about kids and COVID. And then that led me to be interested in issues around schools, and putting together this team to do the data collection for the dashboard.

    BL: Who else is on the team?

    EO: The partnership—the primary people who are doing the work and analysis—is Qualtrics, which is a tech company. And then, there are a number of educational association groups. The School Superintendents Association, the National Association of Elementary School Principals, the National Association of Secondary School Principals, that was the initial core team. Then, we’ve got a lot of distribution help from the charter school alliance, from a bunch of the independent schools associations. A lot of different educational groups have done distribution work.

    BL: How did you develop partnerships with these different education groups?

    EO: I had expressed in some public forum that I thought there should be more of this data collection, and someone from Qualtrics reached out and said, “We think there should be more of this, too. Maybe we can help.” And around this time, I was connected with a woman at the school superintendents association, who also said, “I think we should do this, maybe we can help.” Those were the two key pieces, and it came together from there.

    BL: Yeah, it’s good to have—it seems like a very useful partnership, that you have the tech expertise but also the people who are actually interacting with teachers and students.

    EO: Yeah. I think our biggest get for the dashboard, and what is potentially useful about it, is that we start at the school level. We know what the schools are doing. We’re in direct contact with them.

    BL: I know from poking around the dashboard and reading the NPR article that the way you’re gathering data is with that direct interface, surveying schools. Why did you choose this method as opposed to looking at news articles or compiling data from public sources?

    EO: It was really important for us to understand the context around school reopening before we asked about the COVID cases. We really wanted to know: how many kids do you have in school, are they actually in school or are they virtual, what kind of enrollment do you have? And also, what are you doing as mitigation? To come, ultimately, to understand what’s happening with cases, we really need to start by understanding, like, are you wearing masks? Are you distancing? Are you doing all of these things? So then, if we do see cases, we can go back and look and say okay, can we make any conclusions about which of these precautions are helping.

    In particular, these enrollment numbers give us the ability to say something about not just cases, but rates. To be able to say, this is the share of people that are infected. Which I think is a very important number, and arguably more important for decision-making, than counts.

    BL: Yeah, I was going to ask about that. Your dashboard, unlike a couple of other school COVID data projects, actually has denominators, so that you can compare case rates.

    EO: That’s our thing. That’s our whole pitch. We have denominators.

    BL: Why is it so important to have denominators?

    EO: I think the importance of denominators is, it tells you something about the actual risk of encountering someone with COVID… If you’re going to send your kid off to school, and if you’re going to send your kid to a school of 1,200 people, I think it is useful to understand—are there likely to be 100 kids in the building with COVID? Is there likely to be one kid in the building with COVID?

    And similarly, thinking about the risk to your kid, if your kid is going to be in the building for two weeks, what’s the average experience? Is there a ten percent chance they’re going to get the coronavirus over these two weeks? Is there a one percent chance? I think that that is the thing we should be making decisions on. We really need those denominators to get the rate.

    BL: Absolutely. Could you tell me more about how the surveys work? What questions you’re asking, and how often you’re collecting data?

    EO: There’s two different avenues for data collection… First, if you’re an individual school, then the way we’re collecting the data is that you enroll in a baseline survey on Qualtrics. We ask you about your enrollment, your opening model, what share of your kids are in person, how many staff you have, are they in person. And then, if you have in-person instruction, we ask you about masking and distancing, what you’re doing on those conventions. And then we ask maybe one or two demographic questions, like are you doing free or reduced-price lunch, or financial aid if it’s a private school.

    That [initial survey] is followed up every other week with a survey that is very short. It’s basically, how many confirmed and suspected cases do you have in students and staff, and then [we ask schools to] confirm their in-person enrollment, just to see if there have been large changes in the opening model.

    And then, on the district side, we’re asking all the same questions, but—in the case of the districts, there are a number where [superintendents] have said, “We’d like to enroll our entire school district in your thing, and we’re going to give you all of our data.” When we do that, we’re actually collecting the data internally in Excel. We send them an Excel sheet with their schools, they fill out that same information [as in the school survey], and then we come back again biweekly and ask them those same questions. It’s the same information, it’s just that rather than making them go through 25 versions of the same Qualtrics survey, we have it all in one.

    BL: What mechanisms do you have in place for flagging errors? I know that’s a concern with this kind of manual back and forth.

    EO: On the district side, there’s a cleaning procedure. When the surveys come in, obviously we don’t change them, but we look them over. If there’s something that’s wrong, like the number of COVID cases is greater than the number of people, or they’ve reported three billion students enrolled, we go back to the district and ask, “Can you look at this?”

    Then, on the individual school side, there’s a bunch of validation built into the Qualtrics survey operation. And we have some procedures which we’re working on ramping up which are going to do a little bit of hand lookup, just to make sure that we’re getting valid data.

    BL: What is your sample of schools like so far? Are there any states or types of schools for which you have more complete data, or any areas you’re prioritizing as you try to get schools to take the surveys?

    EO: We’re an equal opportunity prioritizer. We’ll take anybody. There are a few states where we have better representation of private schools, because [private school associations are involved in roll-out]. We have more schools in Washington than elsewhere.

    Particularly on the public school side, we’re very concerned about enrolling entire districts. That’s the easiest thing for us, it’s the most robust. It is also—we think it provides the most service to the district. And so we are spending a lot of time doing outreach to states and to districts, trying to get people to encourage their districts to enroll.

    BL: Another thing I’m curious about is testing. Just yesterday, the Trump administration announced that they’re going to deploy 150 million rapid antigen tests around the country, once they’re made by Abbott, and they’re going to focus on getting those tests to students and teachers. Is testing something that you’re thinking about tracking?

    EO: Yeah. We ask [the schools], are you doing any routine testing of anybody, and most of them say they’re not. But I think it would be very interesting to incorporate. Part of my hope for this project is that, over time, as we get more people enrolled and we get more of a rhythm of reaching people routinely, there will be questions we can add. We’ll potentially get to a place where we’ll say, “Okay, now, a bunch of districts are doing testing, let’s put that in.” And we’ll try to figure out, how common is that, and who’s doing it.

    BL: There are also states that are reporting COVID data in schools. I know New York has a dashboard, that’s pretty extensive, while other states report numbers by county or district or just overall. Is your project doing anything with those public data, or with other volunteer projects that track COVID in schools?

    EO: Not at the moment. I think that we are eager to—there are a number of states that have very good dashboards, and one of the things we are working on is how we can basically pull that in. One of the issues is that most of those dashboards just report cases, so in order to pull them into what we’re doing, we need to go behind those numbers and actually figure out what the initial enrollments were.

    BL: Which states do you think are doing the best job so far?

    EO: I mean, New York’s is pretty good. Tennessee has a pretty good dashboard. South Carolina. There’s a few.

    BL: I know New York is one—I think it’s the only one that has both testing numbers and enrollment numbers. (Editor’s note: I checked; this is true.)

    EO: Exactly.

    BL: Last question: how do you expect the dashboard to be utilized in future research, and are you seeing any applications of it so far?

    EO: No, it’s literally been, like, four days. My guess is that we will see more—we’ll see some usage by districts, as they try to think about opening, that’s the first use case. Just districts that are opening, trying to think about what’s the right thing to do. My guess is that, in the long run, maybe we’ll see some research with this. That isn’t the goal of the project, but we’ll see.

    BL: The focus is on helping districts compare themselves to each other.

    EO: Exactly, yeah.


    Analysis

    I’m excited about this dashboard. First of all, it can’t be overstated: denominators are huge. Knowing that the estimated infection rate of K-12 students in the U.S. is under one percent is so much more useful from a decision-making standpoint than the actual number of cases.
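    The arithmetic behind this point is simple but easy to skip over. A minimal sketch, using made-up case counts and enrollments (none of these numbers come from the dashboard), shows how the same case count can mean very different risks depending on the denominator:

    ```python
    # Illustrative only: hypothetical case counts and enrollments.
    def infection_rate(cases: int, enrollment: int) -> float:
        """Share of an enrolled population with reported cases."""
        return cases / enrollment

    # 12 cases reads very differently in a 1,200-student school
    # than in a 15,000-student district.
    small_school = infection_rate(12, 1_200)     # 1% of students
    large_district = infection_rate(12, 15_000)  # 0.08% of students

    print(f"{small_school:.2%} vs. {large_district:.2%}")
    ```

    A raw count of "12 cases" is identical in both scenarios; only the denominator distinguishes a serious outbreak from background noise.
    
    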

    Second, the school survey model is a novel method with advantages for one specific group: the very schools included in this dataset. This dashboard is not particularly useful for me, a COVID-19 journalist, right now; its sample size is small, and the data are not currently available for download by outside users. (Oster told me that she is planning to set up a validation feature, so that she and other partners on this project can track how their data are being used.) But the school administrators who fill out the project’s biweekly surveys will be able to see COVID-19 trends for their students and staff, compared to trends at other schools across the country. They are essentially getting free consulting on their school reopening plans.

    I have one major concern, however. As Oster explained in our interview, the dashboard currently includes an abundance of private and charter schools in its sample, due to partnerships with private and charter school associations.

    According to Education Week, public schools made up 70% of American schools in 2017-2018. In Oster’s dashboard, these schools are 67% of the sample, while private, charter, and religious schools make up the rest. At a glance, this seems fairly representative of the country’s school demographics. However, the average public school has far more students than the average private school; without seeing the actual enrollment numbers of the schools included in this dashboard, it is difficult to determine how balanced the dashboard’s sample truly is.
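    This caveat can be made concrete with a quick sketch. The sector split and average enrollments below are invented for illustration (the dashboard does not publish these figures), but they show how a sample that looks balanced by school count can skew sharply once you count students:

    ```python
    # Hypothetical sample: (sector, schools in sample, avg. enrollment).
    # Enrollment figures are made up; public schools are assumed larger.
    sample = [
        ("public", 670, 500),
        ("private/charter", 330, 150),
    ]

    total_schools = sum(n for _, n, _ in sample)
    total_students = sum(n * e for _, n, e in sample)

    for sector, n, e in sample:
        school_share = n / total_schools
        student_share = (n * e) / total_students
        print(f"{sector}: {school_share:.0%} of schools, "
              f"{student_share:.0%} of students")
    ```

    Under these assumptions, public schools are 67% of schools but roughly 87% of students, which is why school-level percentages alone cannot settle whether the sample is representative.
    
    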

    In addition, the dataset’s sample so far shows a bias toward suburban schools. The schools surveyed are 37% suburban, 28% rural, 26% urban, and 8% town. Suburban school districts tend to receive more funding than urban districts, and suburban districts are historically sites of school segregation. Finally, this dataset so far heavily represents private schools in Washington, with 106 schools, over 10% of the sample, coming from that state. West Virginia, Alabama, and Mississippi, all more rural states that rank in the bottom ten in U.S. News & World Report’s education rankings, are so far not represented at all.

    A recent New Yorker article by Alec MacGillis draws attention to the low-income students of color who may be left behind in this era of remote learning. Students whose parents and guardians need to continue working outside the home, or otherwise do not have the resources to support kids with an array of Zoom links and homework platforms, may lose a year of education if their schools don’t reopen—and yet these students and their families are more vulnerable to COVID-19 if they do go back in person.

    The schools which serve low-income minority communities are likely to need this dashboard more than any others. And yet these very schools may be left out of data collection, as their principals and superintendents may not have the bandwidth to fill out even the simplest survey. Extra effort could be needed to ensure that crucial schools are not left behind. The COVID-19 School Response Dashboard, and other future school data sources, must prioritize diversity in their data collection if they are to be truly complete.

  • Featured sources, Sept. 20

    • Dear Pandemic: This source describes itself as “a website where bona fide nerdy girls post real info on COVID-19.” It operates as a well-organized FAQ page on the science of COVID-19, run by an all-female team of researchers and clinicians.
    • Mutual Aid Disaster Relief: This past spring saw an explosion of mutual aid groups across the country, as people helped their neighbors with food, medical supplies, and other needs in the absence of government-sponsored aid. These groups may no longer be in the spotlight, but as federal relief bills continue to stall, they still need support. Organizations like Mutual Aid Disaster Relief can help you find a mutual aid group in your area.