Category: Higher education

  • How one biostatistics team modeled COVID-19 on campus

    Screenshot of a modeling dashboard Goyal worked on, aimed at showing UC San Diego students the impact of different testing procedures and safety compliance.

    When the University of California at San Diego started planning out their campus reopening strategy last spring, a research team at the school enlisted Ravi Goyal to help determine the most crucial mitigation measures. Goyal is a statistician at the policy research organization Mathematica (no, not the software system). I spoke to Goyal this week about the challenges of modeling COVID-19, the patterns he saw at UC San Diego, and how this pandemic may impact the future of infectious disease modeling.

    Several of the questions I asked Goyal were informed by my Science News feature discussing COVID-19 on campus. Last month, I published one of my interviews from that feature: a conversation with Pardis Sabeti, a computational geneticist who worked on COVID-19 mitigation strategies for higher education. If you missed that piece, you can find it here.

    In our interview, Goyal focused on the uncertainty inherent in pandemic modeling. Unlike his previous work modeling HIV outbreaks, he says, he found COVID-19 patterns incredibly difficult to predict because we have so little historical data on the virus—and what data we do have are deeply flawed. (For context on those data problems, read Rob Meyer and Alexis Madrigal in The Atlantic.)

    Paradoxically, this discussion of uncertainty made me value his work more. I’ve said before that one of the most trustworthy markers of a dataset is a public acknowledgment of the data’s flaws; similarly, one of the most trustworthy markers of a scientific expert is their ability to admit where they don’t know something.

    The interview below has been lightly edited and condensed for clarity.


    Betsy Ladyzhets: I’d love to hear how the partnership happened between the university and Mathematica, and what the background is on putting this model together, and then putting it into practice there.

    Ravi Goyal: Yeah, I can give a little bit of background on the partnership. When I did my PhD, it was actually with Victor De Gruttola [co-author on the paper]. We started using agent-based models back in 2008 to understand and design studies around HIV, in particular for the Botswana Combination Prevention Project, which is a large cluster-randomized study in Botswana.

    So we started using these kinds of [models] to understand, what’s the effect of the interventions? How big of a study has to be rolled out to answer epidemiological questions? Because, as you would imagine, HIV programs are very expensive to roll out, and you want to make sure that they answer questions.

    I’ve been working with [De Gruttola] on different kinds of HIV interventions for the last decade, plus. And he has a joint appointment at Harvard University, where I did my studies, and at the University of California, San Diego. And so when the pandemic happened, he thought some of the approaches and some of the work we’ve done would be very applicable to helping think about how San Diego could open. He connected me with Natasha Martin, who is also on the paper and who is part of UC San Diego’s Return to Learn program, to help come up with a holistic set of operating procedures there. She’s obviously part of a larger team there, but that’s where the partnership came about.

    BL: Nice. What would you say were the most important conclusions that you brought from that past HIV research into now doing COVID modeling?

    RG: Two things. One is uncertainty. There’s a lot of things that we don’t know. And it’s very hard to get that information when you’re looking at infectious diseases—in HIV, in particular, what was very difficult is getting really good data on contacts. In that setting, it’s going to be sexual contacts. And what I have understood is that people do not love revealing that information. When you do surveys where you get that [sexual contact] information, there’s a lot of biases that creep in, and there’s a lot of missing data.

    Moving that to the COVID context, there are now different kinds of uncertainty. Biases may be recall biases: people don’t always know how many people they have interacted with. We don’t have a good mechanism to understand, how many people do interact in a given day? What does that look like?

    And then there are biases that can creep in because people may not be completely honest about their different risks. How well are they wearing masks? How well are they adhering to some of those distancing protocols? I think there’s some stigma around adhering or not adhering. Those are biases that people bring in [to a survey].

    BL: Yeah, that is actually something I was going to ask you about, because I know one of the big challenges with COVID and modeling is that the underlying data are so challenging and can be very unreliable, whether that’s, you know, missing asymptomatic cases or matching up the dates from case numbers to test numbers, or whatever the case may be. There are just a lot of possible pitfalls there. How did you address that in your work with the University of California?

    RG: At least with the modeling, it makes it a little more difficult in the timeframe that we were modeling and looking at opening, both for our work on K-12 and for UCSD. We kicked it off back in April and May, thinking about opening in the fall. So the issue there is, what does it look like in the fall? And we couldn’t really rely on—like, the university was shut down. There was no data on who’s contacting who, or how many cases are happening. There were a lot of things that were just completely unknown; we were living in a little bit of a changing landscape.

    I’m sure other people have much more nuance [on this issue], but I’m going to paint in broad strokes where this COVID research was different than HIV. For HIV, people might not radically change the number of partnerships that they’re having. When we’re thinking about a study in Botswana, we can say, what did it look like in terms of incidence four years prior? And make sure our modeling represents that state of how many infections we think are happening.

    Here [with COVID], when we were thinking about making decisions in September or October, you didn’t have that “let’s match it to historical data” option, because there was no historical data to pin it to. So it was pooling across a lot: to get the estimates to run the model, you’re taking a study from some country X, and then a different study from country Y, and trying to get everything to work. Then hopefully, when things open up, you re-look at the numbers and iteratively ask, what numbers did I get wrong? Now, in the setting where things are open, what did we get wrong and what do we need to tweak?

    BL: I noticed that the opening kind-of happened in stages, starting with people who were already on campus in the spring and then expanding. So, how did you adjust the model as you were going through those different progressions?

    RG: Some assumptions were incorrect in the beginning. For example, how many people were going to live off campus: that was correct. But how many of those off-campus people were ever going to come to campus was not. A lot of people decided not to return to San Diego. They were off-campus, remote, but they never entered campus. Should they have been part of that model? No. So once we had those numbers, we actually adjusted.

    Just this past week, we’ve started redoing some of the simulations to look toward the next terms. Where we had miscalculated before, in what we thought about how many people would be on campus, we’ve now adjusted based on the data.

    And some of the things that we thought were going to be higher risk, at least originally, ended up being a little bit lower risk than anticipated. One thing is around classrooms. There have been, at least from my understanding, very few transmissions that are classroom-related. We thought that was going to be a higher-transmission environment in the model, but that wasn’t what we saw when we actually had cases. So now we’re adjusting some of those numbers to get it right for their particular situation. It’s a bit iterative as things unroll.

    BL: Where did you find that most transmissions were happening? If it’s not in the classroom, was it community spread coming into the university?

    RG: They [the university] have a really nice dashboard, which does give some of those numbers. A lot of the spread is coming from the community coming onto campus, and fewer actual transmissions are happening within. I think that’s where the bulk is. I think the rates on campus were lower than outside.

    BL: Yeah, that kind-of lines up with what I’ve seen from other schools that I’ve researched: as much as you might think a college is an environment where a lot of spread is going to happen, it also allows for more control, as opposed to a city where people might be coming in and out.

    Another thing I wanted to ask you about is this idea that colleges, when they’re doing testing or other mitigation methods, need to be engaging with the community. Like UC Davis: there’s been some press about how they offer testing and quarantine housing for everybody, not just students and staff. I was wondering if this is something accounted for in your model, and what level of community transmission or community testing might be tied to the university, and how that impacts the progression of things on campus.

    RG: The model does incorporate these infections coming in at this community rate, and that was actually based on a different modeling group, which includes Natasha, that is forecasting for the county [around UC San Diego]. Once again, you have to think about all the biases in who gets tested, false positives, all of those kinds of caveats. They built a model around that, which fed into the agent-based modeling that we use. We do this kind of forecasting on how many infections we think are going to be coming in from people who live off campus, or staff, or family—what’s their risk?

    That’s where that kind of information came from. In terms of quarantining, my understanding is, I don’t think they were quarantining people who weren’t associated [with the school] in the quarantine housing.
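
    (Editor’s note: the UCSD team’s actual model isn’t reproduced here, but the sketch below illustrates the general idea Goyal describes: feeding a community prevalence forecast into an agent-based model as a daily importation risk for agents with off-campus contact. Every name and number in it is a made-up placeholder, not a parameter from the paper.)

    ```python
    import random

    # Toy sketch: a community prevalence forecast feeds a daily importation risk
    # for agents with off-campus contact. All values are illustrative placeholders,
    # not parameters from the UCSD model.
    random.seed(42)

    N_AGENTS = 5_000                 # hypothetical campus population
    DAYS = 120                       # hypothetical fall term
    community_prevalence = [0.002 + 0.00001 * d for d in range(DAYS)]  # stand-in forecast
    OFF_CAMPUS_SHARE = 0.3           # assumed share of agents with regular off-campus contact
    IMPORT_RISK_SCALE = 0.05         # assumed per-day exposure scaling factor

    status = ["S"] * N_AGENTS        # "S" susceptible, "I" infected via importation
    off_campus = [random.random() < OFF_CAMPUS_SHARE for _ in range(N_AGENTS)]

    imported = 0
    for day in range(DAYS):
        for i in range(N_AGENTS):
            if status[i] == "S" and off_campus[i]:
                # Importation probability scales with that day's community forecast
                if random.random() < community_prevalence[day] * IMPORT_RISK_SCALE:
                    status[i] = "I"
                    imported += 1
        # (On-campus transmission, testing, and isolation would be layered on here.)

    print(f"Imported infections over the simulated term: {imported}")
    ```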

    BL: Right. Another thing I wanted to ask about, I noticed one of the results was that the frequency of testing doesn’t make a huge difference in mitigation compared to other strategies as long as you do have some frequency. But I was wondering how the test type plays in. Say, if you’re using PCR tests as opposed to antigen tests or another rapid test. How can that impact the success of the surveillance mechanism?

    RG: Yeah, we looked a little bit at degrading the sensitivity from a PCR test to an antigen test. The conclusion was that it’s better to test more frequently, even with a worse-performing test, than it is to just do monthly PCR.

    We put it on the dashboard. This is the modeling dashboard… It has a couple of different purposes. First, when the campus was opening, there was obviously a lot of anxiety about what might happen come September, October, and some of that [incentive behind the dashboard] was to be transparent. Like, here are the decisions being made, and here is some of the modeling work… Everything that we know or have is available to everyone.

    And the second piece was to communicate that safety on campus is the responsibility of everyone. That’s where the social distancing and adherence to masking come in; the fact that you’re allowed to change those [on the dashboard] is supposed to hopefully indicate that, you know, this really matters. Here’s where students and faculty and staff have roles in keeping campus open. Those were the two points, at least on my end, in putting together a dashboard and that kind of communication.
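
    (Editor’s note: Goyal’s point that testing frequency can matter more than test sensitivity is easy to sanity-check with a toy calculation. The sketch below compares the chance of catching an infection during a short detectable window under two hypothetical schedules; the sensitivities, window length, and intervals are assumptions for illustration, not figures from the UCSD model.)

    ```python
    import random

    # Toy check: frequent lower-sensitivity testing vs. infrequent high-sensitivity
    # testing. Window length, sensitivities, and intervals are assumptions only.
    random.seed(0)

    def detection_probability(test_interval_days, sensitivity,
                              detectable_days=8, trials=100_000):
        """Chance that at least one scheduled test catches an infection while the
        person is detectable, assuming the infection starts at a uniformly random
        point in the testing cycle."""
        detected = 0
        for _ in range(trials):
            t = random.uniform(0, test_interval_days)  # days until the next test
            while t < detectable_days:
                if random.random() < sensitivity:
                    detected += 1
                    break
                t += test_interval_days
        return detected / trials

    print("Antigen-like test every 3 days:", round(detection_probability(3, 0.85), 3))
    print("PCR-like test every 30 days:   ", round(detection_probability(30, 0.98), 3))
    ```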

    BL (looking at the modeling dashboard): It’s useful that you can look at the impacts of different strategies and say, okay, if we all wear masks versus if only some of us wear masks, how does that change campus safety?

    Another question: we know that university dorms, in particular, are communal living facilities—a lot of people living together. And so I was wondering what applications this work might have for other communal living facilities, like prisons, detention centers, nursing homes. Although I know nursing homes are less of a concern now that a lot of folks are vaccinated there. But there are other places that might not have the resources to do this kind of modeling, but may still share some similarities.

    RG: Yeah, I think that’s a really interesting question. I sit here in Colorado. The state here runs some nursing homes. So we originally looked at some of those [modeling] questions, thinking about, can we model [disease spread in nursing homes]?

    I think there are some complexities there, thinking about human behavior, which may be a little bit easier in a dorm. The dorm has a sort-of structure: people in a suite, and then within the dorm, who resides there and who visits there has some structure. It’s a little bit harder in nursing homes, and probably the same with detention centers, in that you might have faculty or staff moving across a lot of the facility, and that movement is a constantly evolving process. It isn’t a stationary state with a structure, if that makes sense?

    BL: Yeah. Did you have success in modeling that [nursing homes]?

    RG: Not so much with [a long-term model]; it was more that we had a couple of meetings early on, providing guidance. My wife works for the state on their COVID response, so that was an informal kind of work. They were trying to set things up and think about it, so I met with them to share some of the lessons we’d learned.

    BL: That makes sense. What were the main lessons? And I think that is a question, returning to your university work, as well—for my readers who have not read through your paper, what would you say the main takeaways are?

    RG: I think I would probably take away two things that are a little bit competing. One is, based on both some of the university work and the K-12 work, that we have the ability to open. We have a lot of the tools there, and some things can open up safely given that these protocols that we have in place, particularly around masking and stuff like that, can be very effective. Even in settings that I would have originally thought were very high risk. Areas that could have a very rapid spread, for example college campuses.

    Some campuses, clearly, in the news, [did have rapid spread]. But it’s possible to open safely. And I think some of the positive numbers around UC San Diego showed that. Their case counts were very manageable. It was possible to open up safely, and the same with K-12. That requires having a first grader wear a mask all day, and I wasn’t sure it would work! But it seems like some of the takeaway is that these mitigation strategies can work. They can work in the very areas where we would not have thought they would be successful.

    So that’s one takeaway, that they can work. And the competing side is that there’s a lot of uncertainty. Even if you do everything right, there is a good amount of uncertainty that can happen. There’s a lot of luck of the draw, in terms of, if you’re a K-12 school, are you going to have just a couple people coming in that could cause an outbreak? That doesn’t mean that you did anything wrong. [There’s not any strategy] that’s 100% guaranteed that, if you run the course, you won’t get any outbreaks.

    BL: I did notice that the paper talks about superspreading events a little bit, and how that’s something that’s really difficult to account for.

    RG: Human behavior is the worst. It’s tough to account for, like, are there going to be off-campus parties? How do you think about that? Or are the community and its communication structure going to hamper that, and effectively convince people that these safety measures are there for a reason? That’s a tricky thing.

    BL: Did you see any aspect of disciplinary measures, whether that is, like, students who had a party and then had to have some consequence for that, or more of a positive affirmation thing? One thing that I saw at a couple of schools I’ve looked at is instituting a student ambassador program, where you have kids who are public health mini-experts for their communities, and they tell everyone, make sure you’re wearing your masks! and all that stuff. I was wondering if you saw anything like that and how that might have an impact.

    RG: The two things that I know about… I know there were alerts that went out, like, oh, you’re supposed to be tested every week. I don’t know about any disciplinary actions, that’s definitely out of my purview. But talking to grad students as well, I knew that if they didn’t get tested in time, they would get an alert.

    And the other thing that I will say, in terms of the planning process—I got to be a fly on the wall in UC San Diego’s planning process on opening up. And what I thought was very nice, and I didn’t see this in other settings, is that they actually had a student representative there, hearing all the information, hearing the presentations. I had no idea who all of these people were in all these meetings, but I know there was a student who voiced a lot of concerns, and who everyone seemed to very much listen to and engage with. It was a good way to make sure the students weren’t getting pushed aside; a representative was at the table.

    BL: Yeah, absolutely. From the student perspective, it’s easier to agree to something when you know that some kind of representative of your interest has been there, as opposed to the administrators just saying, we decided this, here’s what you need to do now.

    My last question is, if you’ve seen any significant changes for this current semester or their next one. And how vaccines play into that, if at all.

    RG: That’s actually the next set of questions that we’re looking into. If weekly testing continues, does the testing need change as people get vaccinated? The other thing that they have implemented is wastewater testing and alerts. They’re sampling all the dorms. And how does that impact individual testing, as well? Can you rely on [wastewater] and do less individual testing? That’s some of the current work that we’re looking into.

    BL: That was all my questions. Is there anything else that you’d want to share about the work?

    RG: I will say, on [UC San Diego’s] end… I think you can use models for two things. You can use them to make decisions—or not make them, but help guide potential decisions. Or you can use them to backdate the decisions that you wanted to make. You can always tweak it. And I would say, in the work I’ve done, it’s been the former on the part of the school.

    The other thing is, thinking about the role of modeling in general as we move forward, because I think there’s definitely been an explosion there.

    BL: Oh, yeah.

    RG: I think it brought to light the importance of thinking about… A lot of our statistical models, for example, are very much individual-based. Like, your outcome doesn’t impact others. And I can see this idea coming out of COVID, that what happens to you impacts me, being a powerful concept going forward.

  • What makes a successful semester during COVID-19?

    Despite outbreak risks, a lot of colleges and universities brought their students back to campus during the fall 2020 semester. Everyone from epidemiologists to the students themselves asked: What worked, and what didn’t? How do we even measure success, when every campus is unique and every option is complicated?

    A lot of journalists have tried to answer these questions in the past few months. I took a crack at them in a feature for Science News, published this past Tuesday. My editor and I picked five universities, ranging from large state schools to small close-knit institutions. I graphed their cases and tests, attempting to determine both the drivers of campus outbreaks and how school leadership got them under control. And I spoke to administrators and students at each school who explained their campus’ approach to COVID-19 mitigation.

    Obviously, I want you to read the full story. Any institution trying to handle COVID-19 can learn valuable lessons from these universities, especially from those that got their students involved in the COVID-19 protection efforts—like Rice University, which set up a student-run court to judge those who broke safety rules, or North Carolina Agricultural & Technical University, which let students go live on Instagram while they got tested.

    But in the COVID-19 Data Dispatch this week, I wanted to share some bonus material. One of my favorite interviews that I did for this feature was with Dr. Pardis Sabeti, a computational geneticist at the Broad Institute of MIT and Harvard. The Broad Institute helped over 100 colleges and universities set up COVID-19 testing and student symptom monitoring, most of them in New England. When I talked to Dr. Sabeti, though, she mostly spoke about Colorado Mesa University—a small school in Grand Junction, Colorado that saw it as a moral imperative to bring all of its students back to campus this fall.

    Dr. Sabeti told me all about why the Broad Institute and Colorado Mesa University (or CMU) were a great match, able to try out novel COVID-19 control efforts that many other schools didn’t consider. She also gave me her perspective on what makes a successful pandemic semester—spoiler, she has a pretty high bar.

    The interview below has been lightly edited and condensed for clarity.


    Betsy Ladyzhets: Tell me about how the Broad Institute started working on infectious disease management, and how that led to your current efforts with COVID-19.

    Pardis Sabeti: I do a lot of work in infectious diseases, mostly in West Africa. In 2014, Harvard University set up an outbreak surveillance committee that helped the school through all of these things around Ebola. And then it was sort-of in place: we had this committee of folks across the institution who were working together on outbreaks.

    Then, in 2016, we got re-empaneled when there was a mumps outbreak at Harvard that ended up spreading across Massachusetts. We learned that yes, universities are laboratories for infectious disease spread, and Massachusetts has 110 of them.

    So, there was a lot going on there. We worked with the Mass. Department of Public Health and the higher ed consortium in Boston, and we were really able to move things forward together, to cooperate, share data. We even found a transmission link: there was an outbreak in East Boston, in an unvaccinated community, that was thought to be a separate outbreak, but then our genome sequencing data showed that it was firmly within the Harvard University cluster. And then additional case investigations showed that there were three members of that community who were Harvard affiliates and were the likely links.

    When we did the genome sequencing, it showed us that traditional epidemiology is very accurate. Whatever links the public health teams had found, we confirmed with genome sequencing. But they missed most of the transmissions. There were a lot of transmission events that were very obviously tied to each other but that the public health teams didn’t catch.

    So at that point, we really doubled down on this idea of genome sequencing and genomic epidemiology being really important for understanding outbreaks. But then also, we understood that we needed to be very fast about doing outbreaks [sic]. What the rest of the world figured out during COVID, we figured out because of mumps—that we needed apps to essentially allow people to start sharing information about their symptoms, so people can get quick diagnoses.

    It was this funny thing where four people on my team all became infected with what looked like mumps while we were investigating the mumps outbreak. Each of them went to their own PCP [primary care provider], and their own PCP did a work-up, and you’re like—wait a second. Wouldn’t it be useful to know these four people are all in connection with each other? If one of them had a diagnosis, it would probably inform everyone else’s diagnosis.

    We created what’s now called Scout. It’s an app that allows you to share with your contacts what’s going on if you have an infection, allows people to quickly figure out what their diagnosis might be and to alert people. We weren’t thinking about it necessarily for pandemic reporting. We were just thinking, wouldn’t it be handy if, the next time you get sick, you immediately knew what you had and what to do about it? Particularly since viral and bacterial infections need entirely different courses of action from people. Like, could we help everybody get informed? And then we also built Lookout, which is a dashboard that collects all that information and shows public health teams and administrators what’s going on.

    BL: Yeah, the CMU administrators I talked to talked about that [dashboard] a lot.

    PS: Yeah, which is great. We joke that CMU has one of the most sophisticated public health systems. The school can see, at this exquisite level, what the cases are, where they’re located. It’s really allowing you to do those investigations that most people I’ve seen elsewhere are doing on the back of an envelope.

    We [Broad] needed a place to work with that was going to be very collaborative and open. And so we were talking to a lot of different folks in different places, and everywhere there were different challenges of getting off the ground. And Colorado Mesa, to us, was this breath of fresh air. One, it was heartwarming to be working with this school in Colorado that has a large population of first-in-their-family-to-go-to-college students. And it was also empowering to hear the need that they had: the fact that they had to come back, and come back fully on campus, because the students’ livelihoods and future success depended on it. And it was also heartwarming to see the way that the leadership was so engaged, so strong, so open to anything.

    And also, like, the wastewater testing is being done by faculty and students in the engineering department. The clinical sample collection is being led by Amy Bronson and the nursing team. That’s a lot of what you want to see happening on college campuses. To me, the way I pitched it is: what we were building was the Facebook app for outbreaks, which also needed to start with a close-knit community where you could get a lot of adoption.

    But also, there’s this idea that colleges are both high risk and exactly where innovation can happen. It’s where people are ready to explore and try things out.

    I hadn’t seen that [mindset] at a lot of other schools. I saw this administrator-led, top-down approach: we’re gonna tell you how to behave and you’re gonna be in this room. A lot of schools got into a frame of, like, we’re gonna manage these students, whereas CMU really was like, no, we’re going to partner with these brilliant students and figure this out together.

    In my mind, I was always perplexed that we kept describing this year as this kind-of less-than year, where we were just going to suffer through education. In my mind, it was a more-than year. People learn the best when the stakes are the highest. There’s no other time we’re gonna teach kids about public health, infectious disease, genomics, and epidemiology than now. So we should shift what we’re trying to do. It shouldn’t be like, let’s get the Chaucer done while an outbreak is killing people in our community. We should’ve shifted our attention and all learned math, and stats, and clinical medicine, and public health, and biology around what’s going on.

    And that’s what CMU is doing. They’re hosting classes that are around outbreak response. The coaching teams and the sports teams are the ones doing contact tracing. It’s interesting because, in a way, it’s a school that doesn’t have all the resources, and that’s where the ingenuity is going to happen. They can’t just call an outside consultant to do these things for them; they had to rely on themselves and the students.

    Did they show you the videos that they made?

    BL: I watched that “CMU is Back” one, which is great.

    PS: Yeah. They made many of them. They have a new song—I have to make a video later this afternoon for it.

    The fascinating thing is, right, even the art students got in on it and started doing public health messaging. I say, and it’s true—they already had me at the team. I just thought the team was so delightful and inspiring, but they sealed the deal with the video.

    What communities do you know that would make a video together? Most offices, they hate each other; everyone’s resentful and no one’s gonna make a video. In a lot of the schools that I know, the tagline is that they hate the administration. There’s a fight between the administration and the students. Whereas here, the administrators and the students got together and made a music video. They told me that they have a very close-knit culture and a trust in each other that would make things go forward.

    And I’m sorry, I know this is very much my pitch for CMU, but I just love talking about this place. Here’s this thing where—did they show you the simulation that happened over Halloween weekend? Did they show you that data?

    BL: I think so, yeah.

    PS: It was the real-time where you’re seeing everybody clustering?

    BL: Oh yeah, yeah.

    PS: Yeah. What is fascinating about that whole scenario is that you had 358 students voluntarily, without any real advertisement from us, download an app that tracks all of their movements over Bluetooth, over Halloween weekend. And then they proceeded to go out and do their thing. So here’s that kind of interaction, and you’re seeing, minute by minute, the kind of high-resolution data that we’re getting on how students are interacting with each other. What clusters they’re forming, what times of day we need to watch out for interactions. It’s pretty bananas.

    These students have an enormous number of contacts. This is the fear that you have with college students. Someone might look at [these data] and say, it’s terrible. But in other ways, it’s like, these kids trusted you enough to download an app, get themselves tracked, and go on and basically engage in behavior that could get them thrown out. That’s trust in the leadership. That is what we need to be able to stop outbreaks.

    And then, the last piece I’ll say before I go off of my CMU storyline is… I’ve been trying in Massachusetts, for a long time, to get people to understand that you’re gonna spend millions. Each of these colleges is spending millions and millions of dollars on diagnostic testing on a daily or weekly basis. That’s an incredible number of tests being used with no hypothesis. Meanwhile, the surrounding communities are talking about seven days ‘til getting a test result, and standing in line for four hours for a test.

    That’s dangerous. I kept trying to convince a lot of the colleges that testing yourself in the middle of a shortage of tests looks selfish and is ineffective. Ultimately, the way that COVID spreads, one person can come into a room and infect 50 people. And so, the metaphor I use is, it’s like being in a drought with a fire alarm shortage, and putting all the fire alarms in your own house. You’ll be exquisitely good at detecting a fire when it hits your house, but at that point, it’s burning to the ground. What you should do is, you should get [the fire alarms], and you should put them in all your neighbors’ homes. For a wide stretch.

    Ultimately, what colleges should do is to support their communities’ testing, by reaching out and saying, okay, every faculty and staff and student, tell us who your contacts are, and have them tell us who their contacts are, and we will prioritize testing for those individuals. We’ll get them tested. That’s how the colleges should have interacted.

    And that really fell on deaf ears in general; there’s a variety of reasons for that. But Colorado Mesa doubled down. We [Broad] tried all these different models: use 100% of your tests on yourself, use 100% of your tests on other people, or use 25%, 50%, 75%, those different groupings. And we found that the most effective way of stopping an outbreak is if you use 75% of your tests outside of the school. You keep 25% for yourself, but 75% should be used outside the school. That’s how you stop outbreaks on campus.

    We’re writing up that work right now, but even when we showed Colorado Mesa the preliminary data, they were like—that’s now their new model. It’s essentially what they’ve done. They’re putting the majority of their tests [in Grand Junction, the city around the school]. And to me, that’s going to be the really remarkable thing to watch going forward. We’ve created the apps, and the dashboards, and the systems to be able to do this well, but now we really want to reach out to our surrounding community and see where we can go here.
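
    (Editor’s note: the Broad team’s allocation analysis isn’t published yet, so the sketch below only illustrates the kind of comparison Sabeti describes: a toy campus-plus-community transmission model run at several on-/off-campus test splits. The model structure and every parameter are assumptions, and with these made-up numbers the “best” split may well come out differently than in the Broad results; the point is the shape of the experiment, not the output.)

    ```python
    # Toy two-population SIR model with a fixed daily testing budget, used to compare
    # test-allocation splits. Structure and parameters are assumptions for illustration,
    # not the Broad/CMU model.

    def campus_infections(frac_tests_off_campus,
                          n_campus=10_000, n_town=150_000,
                          beta_within=0.25, beta_cross=0.05, recovery=0.1,
                          tests_per_day=1_000, sensitivity=0.9, days=120):
        s_c, i_c, r_c = n_campus - 5.0, 5.0, 0.0      # campus compartments
        s_t, i_t, r_t = n_town - 50.0, 50.0, 0.0      # surrounding-community compartments
        for _ in range(days):
            # Force of infection: within-group mixing plus cross-group mixing
            foi_c = beta_within * i_c / n_campus + beta_cross * i_t / n_town
            foi_t = beta_within * i_t / n_town + beta_cross * i_c / n_campus
            new_c, new_t = foi_c * s_c, foi_t * s_t
            # Random-sample testing: detected infectious people are isolated (removed)
            tests_off = tests_per_day * frac_tests_off_campus
            tests_on = tests_per_day - tests_off
            found_c = min(i_c, sensitivity * tests_on * i_c / n_campus)
            found_t = min(i_t, sensitivity * tests_off * i_t / n_town)
            rec_c, rec_t = recovery * i_c, recovery * i_t
            s_c, s_t = s_c - new_c, s_t - new_t
            i_c += new_c - rec_c - found_c
            i_t += new_t - rec_t - found_t
            r_c += rec_c + found_c
            r_t += rec_t + found_t
        return n_campus - s_c  # cumulative campus infections

    for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"{int(frac * 100):3d}% of tests used off campus -> "
              f"{campus_infections(frac):8.0f} cumulative campus infections")
    ```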

    BL: I know they mentioned to me that they were starting to help the other schools, like the elementary and middle and high schools in Grand Junction, get tested as well.

    PS: Yeah. Our foray into community testing was there. Basically, when the school stopped and they had this break over the holidays, they started pushing this community testing… It’s all about trust, right? They got the trust of their students, and now they’re getting the trust of the community. They’re saying, okay, we’re here to help you, how do we work through this together. That’s the idea behind it.

    So that’s all of my CMU backstory. But it also just generally tells you about the way I think things need to happen. Colleges are both a laboratory for infectious disease spread and also a great laboratory in which to try new technologies out, but it really has to involve community engagement, empowering all the actors in the system, and trust-building. It does have to involve bringing the students on board with the mission, not just coming top-down and telling them how to do things, and reaching out to the communities and doing testing for your communities.

    It also makes you look more selfless, because you’re a college helping your community. That’s always a great way, when you’re going to throw a party in the middle of the night, for the community to be happy that you’re there. This [fall 2020] was the opportunity for all colleges to get buy-in from their communities, to show why they’re there and why they’re useful. And that’s another thing where it’s like, why are we not doing that? We have that opportunity.

    BL: That’s definitely something that I saw in part at some of the other schools [in my story], but not to the same degree as what Colorado Mesa was doing. I think you answered a lot of my questions already, because I was going to ask you, like, what makes colleges a good place to try out mitigation methods.

    But one more question is, do you have specific parameters that you would think about when you look at, say, cases and testing numbers, of what you would consider a successful fall semester for a campus?

    PS: The thing is, most schools had very unsuccessful semesters… For me, success would be… The bar for me for success is really high, to justify coming back when so many people can’t. It would be not having an outbreak on campus, or not seeding an outbreak in the community. Which could happen—you could not have an outbreak on campus but could have seeded one in the community, if you caught it and were able to quarantine your people but you had already spread it there and the whole thing went on fire. Essentially, [success is] if your surrounding community has lower case rates.

    I always talk about how, when you do something that’s counter to what you should be doing, success is going above and beyond. For me, when I have my students go into someone else’s lab, I’m like, you need to leave that lab better than when you found it. If you’re a guest in someone’s home, if you are treading in a place you shouldn’t tread, your level of success is leaving the college and the community better than when you found it. And having the students learn new skills, be engaged, and feel excited about the future.

    The fact, again, that CMU’s new song—which they just sent me, and it’s a little silly ‘cause it has all these excerpts of me talking—is “The Future is Now.” And that, to me—even though, by the metrics of what I was just talking about, they weren’t successful. They had an outbreak on campus, and it might have spread to the community. But they made big headway, they learned a lot, the students engaged a ton, and they collectively were making the community around them better. I think they had a successful semester in that the students were engaged and they learned, and they attempted to support the community around them. And from that they will learn to be even better and stronger.

    BL: Is there anything in particular that you are expecting to be different this spring, learning from CMU and from the other schools you’ve worked with via the Broad Institute?

    PS: This spring is going to be very… it’s going to be hard to know how it will go. You’re gonna get vaccines coming in, that’s gonna make things better, but you have case numbers that are really high, variants that are more infectious, that are gonna make things worse. And a lot of civil unrest and tensions and all of that.

    It’s one of those things where we really have to double down on our civic engagement, I think that’s going to be really important. And on our public health view of what’s going on.

  • Schools go on winter break but discourse continues

    Rounding out the issue with a couple of updates on school data:

    • CDC issues new estimates for the cost of keeping K-12 schools safe: It would take about $22 billion for all public schools in the country to safely reopen in the spring, according to the CDC. The state-by-state estimates incorporate face masks, desk shields, cleaning supplies, transportation, and more. But these estimates are “significantly lower” than other estimates calculated by education organizations, as the CDC failed to include additional costs for face masks, food service, and contact tracing, according to U.S. News & World Report.
    • Rockefeller Foundation advocates for mass testing in schools: “Altogether, K-12 schools, their students, teachers and staff, will need approximately 300 million Covid-19 tests performed each month from February through June,” write the authors of a new Rockefeller report focused on safely controlling COVID-19 spread while vaccines are rolled out. The report provides detailed guidelines on testing and case studies from which readers can learn.
    • The College COVID-19 Outbreak Watchlist goes on winter break: After 15 weeks of updating his watchlist of colleges with high COVID-19 case numbers, Benjy Renton is taking a couple of weeks off. (From this dashboard, anyway.) Many schools have also suspended their COVID-19 reporting, as few students are on campus.

  • Featured sources, Oct. 11

    As I promised in previous weeks, I’ve compiled all the data sources featured in this newsletter into a resource spreadsheet. The doc includes 56 sources, sorted by category (schools, testing, etc.) with descriptions and notes from past newsletters. I’ll keep adding to it in future weeks!

    (Editor’s note, Jan. 2: The resource list is now a page on this website.)

    • The Human Mortality Database: This database includes detailed population and mortality data for 41 countries. In response to the COVID-19 pandemic, the team behind the database has started compiling weekly death counts, which can be used for excess death calculations; they have compiled counts for 34 countries so far.
    • SARS-CoV-2 Superspreading Events: Superspreading events, or instances in which many people are infected with the novel coronavirus at once, have been identified as a major force behind the spread of COVID-19. This database includes over 1,400 superspreading events from around the world, with information on each event’s timing, location, inside/outside setting, and more.
    • COVID-19 Risk Levels Dashboard: A new map from the Harvard Global Health Institute and other public health institutions allows users to see the COVID-19 risk for every U.S. county. These risk levels are calculated based on daily cases per 100,000 population (7-day rolling average); a toy version of that calculation is sketched after this list.
    • New York Times College and University COVID-19 counts: The NYT is now releasing the data behind its counts of COVID-19 cases reported on college and university campuses, which the paper has been collecting in surveys since July. The survey includes over 1,700 colleges. This initial data release only includes cumulative data as of September 8—and it does not include denominators. NYT reports that the data will be updated “approximately every two weeks.”
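
    As a quick illustration of the metric behind that risk map: it’s a 7-day rolling average of daily new cases per 100,000 residents, bucketed into a color level. The cut-points below follow the widely cited Harvard Global Health Institute scheme (green under 1, yellow 1 to 9, orange 10 to 24, red 25 and up), but check the dashboard itself for the exact definitions; the county and case counts in the example are made up.

    ```python
    # Toy version of the risk-map metric: 7-day average of daily new cases per 100k,
    # bucketed into a color level. Cut-points approximate the HGHI scheme; check the
    # dashboard itself for exact definitions.

    def risk_level(last_7_days_new_cases, population):
        avg_per_100k = (sum(last_7_days_new_cases) / len(last_7_days_new_cases)
                        / population * 100_000)
        if avg_per_100k < 1:
            return avg_per_100k, "green"
        if avg_per_100k < 10:
            return avg_per_100k, "yellow"
        if avg_per_100k < 25:
            return avg_per_100k, "orange"
        return avg_per_100k, "red"

    # Hypothetical county of 500,000 people and a week of reported case counts
    avg, level = risk_level([60, 45, 80, 70, 55, 90, 65], population=500_000)
    print(f"{avg:.1f} average daily cases per 100k -> {level}")
    ```
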
  • School data update, Sept. 6

    Since last week’s issue, four more forms of official state reporting on COVID-19 in schools have come to my attention:

    • New Hampshire is publishing school-associated case data, including active cases, recovered cases, and outbreak status (not clearly defined) on a page of the state’s dashboard, updated daily.
    • Mississippi is publishing a weekly report on cases, quarantines, and outbreaks among students, teachers, and staff, aggregated by county. So far, the state has released reports on the weeks ending August 21 and August 28.
    • Hawaii’s state Department of Education is publishing a page on COVID-19 in the school district, updated weekly. (Did you know that the entire state of Hawaii makes up a single school district?)
    • New York is launching a public dashboard on COVID-19 in schools; this dashboard will be available starting on September 9. So far, the page states that, “New York school districts will be required to provide the Department of Health with daily data on the number of people who have tested positive for COVID-19 beginning Tuesday, September 8th.” Last week, Mayor Bill de Blasio announced that classes in New York City would be delayed by two weeks to allow for more extensive safety precautions.

    In addition, the nonprofit civic data initiative USAFacts has compiled a dataset of reopening plans in America’s 225 largest public school districts. The dataset classifies reopening plans as online, hybrid, in-person, or other, with information as of August 17.

    Meanwhile, on the higher education front:

    • Education reporter (and friend of this newsletter!) Benjy Renton has launched a dashboard keeping track of COVID-19 outbreaks on college and university campuses. The dashboard organizes outbreaks according to their alert level, based on new cases in the past week.
    • I am continuing to monitor the COVID-19 metrics reported by college and university dashboards in my comparison spreadsheet. I haven’t had the chance to expand this analysis much in the past week, but it continues to be an ongoing project.
  • How are colleges and universities reporting COVID-19 data?

    Screenshot of UNC-Chapel Hill’s COVID-19 dashboard. The school has made headlines in the past two weeks for facing outbreaks almost immediately upon students’ return to campus.

    Across the country—and despite the warnings of numerous faculty members and public health leaders—colleges are reopening. Freshmen are returning to campus eager to meet their classmates, and upperclassmen are returning eager to see friends after months in their hometowns.

    Most college students will certainly do their best to wear masks, socially distance, and meet outdoors wherever possible. But it is inevitable that COVID-19 will hit campuses, and when it does, school leaders need to be prepared: not only with testing and tracing systems and room for students to self-isolate, but also with a robust data system that can let their communities know exactly what’s going on.

    For this issue, I surveyed the COVID-19 dashboards of 50 higher education institutions across the country. This analysis expands upon the work of Andy Thomason, an education journalist at the Chronicle of Higher Education who has been compiling dashboards on Twitter for the past two weeks.

    The 50 dashboards I examined include public and private schools in 26 states. The most heavily represented states are North Carolina (nine schools) and Massachusetts (four schools). I plan on expanding this analysis to include all 82 schools currently on Andy Thomason’s list, as well as any other dashboards that I can find, but 50 seemed like a solid number to discuss for today’s issue.

    Rather than attempting to count the number of COVID-19 cases occurring at colleges and universities, I chose to focus on how schools are reporting: what COVID-19 metrics are they making public, and how often are these metrics updated? (If you’d like to see sheer case numbers, I recommend this dashboard by the New York Times, which includes counts from both public sources and additional NYT reporting.)

    Here’s the spreadsheet with my work. I encourage you to check it out for yourself, especially if you attend or have friends or family attending one of the schools on the list. But I’ll include some summary statistics here because, yeah, you’re probably skimming through this on Monday morning and have about 900 other things to do.

    • 41 schools report some form of cumulative COVID-19 cases that have occurred on campus, usually since the beginning of August. Most of these (29) separate out student and faculty/staff cases.
    • 24 schools report COVID-19 cases per day. 20 of these separate out student and faculty/staff daily cases.
    • 18 schools report active COVID-19 cases. Definitions for “active” vary, from cases identified in the past 14 days to all cases minus those which reportedly no longer need to isolate thanks to a negative test.
    • No schools report COVID-19 deaths. Thankfully, no school yet needs to.
    • 29 schools report cumulative COVID-19 tests. The majority of these (21) do not specify whether their test counts are reported in specimens, people, or some other unit.
    • No schools report antibody or antigen tests.
    • 23 schools report some form of test positivity. 4 schools report daily figures, 11 report weekly figures, and 8 others report some form of cumulative figure. (Test positivity over a range greater than 14 days is not particularly useful, as this metric is used to track testing capacity over time.) A toy weekly-positivity calculation is sketched after this list.
    • 19 schools report counts of students and/or staff currently undergoing quarantine or isolation.
    • 19 schools update their dashboards daily, 6 update Monday through Friday, 15 update on a twice- or three times-weekly schedule, and 7 do not clearly state how often updates are made.
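
    For reference, weekly test positivity is a simple ratio of positive results to tests performed; the sketch below shows how a school might compute it from daily counts. The dates, counts, and layout are hypothetical, not pulled from any particular dashboard.

    ```python
    # Toy weekly test positivity calculation from daily counts.
    # Dates and counts are hypothetical, not from any specific school's dashboard.

    daily_results = [
        # (date, tests_performed, positive_tests)
        ("2020-08-24", 410, 3),
        ("2020-08-25", 385, 5),
        ("2020-08-26", 402, 4),
        ("2020-08-27", 397, 6),
        ("2020-08-28", 420, 2),
        ("2020-08-29", 150, 1),
        ("2020-08-30", 140, 2),
    ]

    tests = sum(t for _, t, _ in daily_results)
    positives = sum(p for _, _, p in daily_results)

    print(f"Week of {daily_results[0][0]}: {positives}/{tests} tests positive "
          f"({positives / tests:.1%})")
    ```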

    And here are a couple of examples of notable dashboards:

    • Roger Williams University: This dashboard focuses on testing. It’s the only dashboard in my initial analysis to report tests in both “tests administered” (specimens) and “total participants” (people). It also comprehensively reports test wait times, which few schools are tracking.
    • University of Nevada at Reno: This school has less of a “dashboard” and more of a series of COVID-19 reports. Click into a given month to find one-line statements about new cases, sorted by day. No overall statistics are reported.
    • University of Wisconsin at Madison: This dashboard includes charts and a table that show total tests, positive tests, and percent positive by day. I’m a fan of the dashboard’s Data Notes section, which clearly defines all the dashboard’s terms and encourages users to reach out with questions.
    • University of Connecticut: UConn has five campuses across the state, and each one gets its own section in this dashboard. The organization is very clean; it’s easy to navigate and compare campuses.
    • University of North Carolina at Chapel Hill: UNC-Chapel Hill gets a shout-out for being the only dashboard I analyzed to report COVID-19 cases by residence hall. The university’s overall COVID-19 response may have been a clusterfuck, but at least their dashboard gives students information to help them manage it.