Estimating SARS-CoV-2 infection counts

It seems like everyone is obsessively following how the number of COVID-19 cases keeps evolving. Understandable, being in the middle of a pandemic and all. The problem is that the reported COVID-19 cases only include laboratory-confirmed cases, and because of testing limitations, this significantly undercounts the actual number of infections. I've seen suggestions that looking at death counts would give a more accurate picture.

This has led me to try my hand at estimating the number of active infections based on daily death counts. The basic calculation is quite easy. Assuming an infection fatality rate (the percentage of those infected who die) of 1% leads to an estimate of 100 people infected with SARS-CoV-2 for every death. Assuming an average recovery time (recovery meaning either getting better or dying) of three weeks means that, for each death, those 100 infected people got infected on average three weeks earlier. So each death implies that there were 100 people with a SARS-CoV-2 infection during the preceding three weeks. Calculating the total number of infected at any point in time is then just a question of repeating this calculation for the number of deaths on each day. Finally, I add a little bit of smoothing by running a moving average over the results.
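To make this concrete, here is a minimal sketch of the calculation in plain Python. The function and parameter names are just for illustration, not the exact code behind the charts below, but the logic is the one described above:

def estimate_active_infections(daily_deaths, ifr=0.01, recovery_days=21,
                               smoothing_days=7):
  # Each death implies 1 / ifr people were infected at some point during the
  # preceding recovery_days days. So the active count on a given day is the
  # number of deaths over the following recovery_days days, scaled by 1 / ifr.
  factor = 1.0 / ifr
  active = [
    factor * sum(daily_deaths[day:day + recovery_days])
    for day in range(len(daily_deaths) - recovery_days)
  ]
  # Smooth over weekend reporting effects and similar noise with a simple
  # moving average.
  return [
    sum(active[i:i + smoothing_days]) / smoothing_days
    for i in range(len(active) - smoothing_days + 1)
  ]

The ifr and recovery_days parameters are the assumptions I discuss further down, and the last recovery_days days get no estimate at all, which is where the three-week lag mentioned below comes from.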

Using data from https://covid19api.com/, the result for Germany looks like this:

Germany infected counts

Here we can see how SARS-CoV-2 started to spread in February; from the end of February up through the middle of March we get a clean exponential growth curve. During the latter part of March, once everyone realizes what is going on, people start voluntarily social distancing and various restrictions are put in place. This is visible as the exponential growth starts tapering off. At the end of March Germany goes into full lockdown, and the peak of the pandemic is reached very quickly after that. While the lockdown is in effect the number of infected continues to drop rapidly. Towards the end of April, as the lockdown carefully starts to get lifted, the rate at which the number of infected is dropping starts slowing down. One downside with this sort of model is that the data lags by three weeks, so we can't get a feel for what the current situation in Germany is.

There are a lot of assumptions underlying the previous calculations, some important, others not so much. The moving average should smooth over weekend effects (fewer deaths reported on weekends, with a spike the following Monday to catch up) and other smaller reporting issues like that. The assumption that the recovery time is three weeks for everyone, while in reality it differs from person to person, is not a big problem either. The effect of the ones with shorter recovery times will be canceled out by the effect of the ones with longer recovery times, especially once the total number of deaths per day starts getting large.

The assumed average recovery time makes more of a difference, though. If the average recovery time is longer there will be more overlap between different infected people, and the curve will get taller. Going from a two-week recovery time to a four-week recovery time almost doubles the peak infected count. You can see the effect of this here:

Effect of recovery time on infected count

The assumption with the largest effect on the infected count is, without a doubt, the infection fatality rate (IFR). An IFR of 1% leads to a multiplication factor of 100, meaning 100 infected for each death. Dropping the IFR to 0.5% increases the multiplication factor to 200, which doubles the number of infected. This effect is visible in the following chart:

Effect of IFR on infected count

Another assumption underlying this entire project is that the reports of COVID-19 deaths are accurate, or at least more accurate than the confirmed infection counts. The Economist has done some modeling on this, comparing the reported COVID-19 deaths to the total number of excess deaths. The assumption is that with everyone social distancing, the reduction in traffic accidents and so on should probably lead to a small drop in deaths compared to what would be expected at that time of year, so a large bump in excess deaths can be attributed almost entirely to COVID-19. The numbers produced by The Economist suggest that countries like Sweden are probably doing quite a good job of reporting COVID-19 deaths, while Italy is missing a significant number of them.

Let's look at some results for other countries. Below are the numbers for four Nordic countries. Note that the Y-axis has a different range for each country. Sweden has roughly ten times the peak infected count of Finland.

Infection counts in Nordic countries

There is a clear peak, showing that lockdown strategies work as a way to limit the spread of SARS-CoV-2. Sweden never went into lockdown and instead had a mix of voluntary social distancing and some limits on large gatherings. This shows up in the data as both a much higher peak infected count and a much slower drop. But even Sweden's light-touch approach was enough to halt the growth of the pandemic and start bringing the infected counts down.

We can also look at some results for other European countries. The range for the Y-axis for Germany is different.

Infected counts in large European countries

It is interesting how similar the infected counts are between the UK, France, and Italy. The UK is the worst of the three, but its peak isn't that much higher. It is also the slowest to come down. Germany is on another level, with only a quarter as many infected as the UK. You can also see how Italy was earlier than the other countries, both in getting a clear exponential growth curve going already in the middle of February, and in peaking in the middle of March after its national lockdown went into effect in early March. The UK is lagging in getting the numbers under control. As of three weeks ago, it was still dealing with 500k infected.

It is interesting how effective the various lockdowns have been. The drop rate varies somewhat depending on how tight the lockdown is, but even Sweden's light-touch approach got the counts to drop. It will be interesting to see how things develop as lockdowns end. Some countries (the UK, I'm looking at you) seem to be willing to start opening up while they still have quite substantial infected counts.

Other than human

It is interesting how authors handle protagonists who aren't human, especially when the author does it well. Of course, not just any technically-not-human protagonist counts. Lots of science fiction and fantasy books have protagonists that are technically not human but really are just humans with a twist. That elf might technically not be human, but really, it's just a long-lived human from a culture with a preference for bows.

What I am talking about is when the author leans into this difference, when not being human is part of the point. A great example is the Books of the Raksura series by Martha Wells, where the main protagonist is a Raksura, a vaguely human species with wings that lives in ant-like colonies with queens, workers, and warriors. It's not even quite one species, but a couple of species in a sort of symbiotic relationship. The ecological, cultural, and biological implications of this species' nature and their colonies are an essential part of the books.

Importantly, Martha Wells doesn't go too far in the alienness of the Raksura, because that would get in the way of being able to connect with the characters as a reader. In some ways things still play out along very familiar storytelling formulas. The queen of the colony still needs to mate, so Martha Wells throws in a good helping of romance tropes, just with some really refreshing twists. In other ways it plays out like a traditional fantasy adventure, with a fellowship heading out on a quest. The world also has humans in it to provide contrast to the Raksura.

The book that got me thinking about this topic is The Raven Tower by Ann Leckie. Who the protagonist of this book is, is tricky to say because of the way the narrative is constructed. The book is told in first person from the point of view of a god, who is also a stone. Leckie does an amazing job communicating the stone's perspective, getting across the age and patience of the stone. At the same time the book is also in second person, because the plot is told by the stone to a human as the human is going through the main plot. In between the second-person sections the stone reminisces, in first person, about past events that brought about the current situation. This setup does make the book harder to read, and it speaks to Ann Leckie's skill as an author that she can make it flow so well. Another thing that helps underline the difference between the god and humans is that the stone god and the human have different goals and desires, but both are explained by the god: how the god thinks about things, and how the god thinks humans think about things.

The Raven Tower is a great book, but it's not as good as Ann Leckie's other series with a non-human protagonist, Imperial Radch. Here the protagonist is an AI. A bit of a science fiction staple, sure, but done here with impeccable skill. This is not some boring, cold, logical AI. It has a rich inner life and a philosophical approach to life. The series has won a lot of awards, all of them deserved.

The best books with an AI protagonist are not Imperial Radch, though. That honor has to go to The Murderbot Diaries by Martha Wells. This AI is a SecUnit, a sort of android that provides security on scientific expeditions and the like. This one, though, has hacked its own governor, giving it the freedom to ignore commands and pretty much do what it wants. The typical trope for AIs in science fiction is that they are emotionless and purely logical. Martha Wells goes in the complete opposite direction: with a complex enough AI, you want it to have a rich set of fears and desires to guide its actions. So, in this world the really complex AIs like the SecUnits are a lot more emotional than humans. The end result is not just a very engaging protagonist, but maybe the most sympathetic one I've ever come across. The series is about the SecUnit trying to make sense of its own nature and place in the world, all the while trying to hide the fact that it is a rogue AI. Beyond the advantage of a great protagonist, The Murderbot Diaries also has amazing writing that is witty and charming, and flows so very easily. It's the kind of writing where by the time you've finished the first paragraph you are hooked and won't be able to do anything else until you've finished all the novellas. I recommend the series very highly.

Result<Rust>

Rust is a newish systems programming language from Mozilla. I have lately been putting a little bit more systematic effort into learning it. Here I wanted to talk about one of the basics of Rust that I really like: how it handles errors.

Rust doesn't use exceptions to handle errors like JavaScript or Python would do. Instead the error is part of the normal return type of a function. The advantage is that you don't end up with a separate, largely invisible, code path through your code for errors. Instead returning errors is just a normal early return from the function. This makes error handling no longer exceptional, but an ordinary part of the code, as it should be.

This might sound similar to what Go is doing. In Go the error is also part of the return type of a function. Go does this by having a function that can fail return two separate values. For example, the signature for the function to open a file in Go is:

func Open(name string) (file *File, err error)

There are two return values: a pointer to a file and an error. One or the other of these values will be nil. If everything went fine then err is nil, but if something went wrong the file is nil and the err contains information about what went wrong.

There are two problems with the Go way of doing this. The first is that, generally, only one or the other returned value is a valid value, either file or err, but this fact isn't encoded in the return type. That leads to the second problem: you can simply forget to check whether err is nil and just go about trying to use file. But if something went wrong, attempting to use file will fail, because it's nil.

Rust solves both of Go's problems by using a Result type. The Result type has two variants: an Ok variant containing the data if everything went ok, and an Err variant containing information about the error if something went wrong. The signature of the method for opening a file in Rust looks like this (somewhat simplified):

fn open(path: &str) -> Result<File, io::Error>

Here the return type is a Result which contains either a File if everything went ok, or an io::Error if something went wrong. io::Error is a struct containing information about what went wrong in an I/O operation. The first problem with Go is immediately solved, because a Result can only ever be Ok or Err, never both, so the exclusivity is encoded in the type.

The other Go problem is solved by the fact that you can't just grab the Result and treat it like a File; you have to extract the File first. You might do it like this:

use std::fs::File;

fn main() {
  let file_result = File::open("foo.txt");
  match file_result {
    Ok(file) => {
      // Do something with the file.
    },
    Err(error) => {
      // Complain to the user that the file couldn't be opened.
    }
  }
}

The match construct allows you to branch your code on the different variants of Result. You also can't just skip the Err branch. The compiler will refuse to compile your code if you don't handle both variants.

The one downside of Rust's error handling that might jump out at you is that it can quickly get very verbose. To help deal with this Rust offers a good number of tools to work with errors.

One is the expect method on Result. If the Result is Ok, the method evaluates to the value inside Ok; if the Result is Err, it panics, printing the given message and quitting the application. Using it might look something like this:

use std::fs::File;

fn main() {
  let file_result = File::open("foo.txt");
  let file = file_result.expect("Couldn't open file foo.txt");
  // Do something with the file.
}

This makes sense for simple pieces of code where just quitting if something goes wrong is a sensible thing to do, or for the kind of errors that shouldn't happen unless the application is broken somehow.

Another tool in the Rust toolbox is the question mark operator. This one can only be used inside a function that itself returns a Result. If the Result is Err, the question mark operator does an early return from the enclosing function, returning the error. If the Result is Ok, it evaluates to the value contained inside Ok. Using it might look like this:

use std::fs::File;
use std::io;

fn do_foo() -> Result<(), io::Error> {
  let file = File::open("foo.txt")?;
  // Do something with the file.
  Ok(())
}

fn main() {
  let result = do_foo();
  match result {
    Err(error) => {
      // Complain to the user that the file couldn't be opened.
    },
    Ok(()) => {}
  }
}

The empty parens, (), is Rust's unit type, used when the function returns nothing meaningful in the non-erroneous case. In the match statement I do nothing in the case where do_foo returned Ok. The question mark operator allows bubbling up the error, similar to how exceptions bubble up until you catch them, except that in Rust you still have to do so explicitly.

I'm sure I will encounter challenges with the way Rust does error handling once I get deeper into it, but so far it really seems like a substantial improvement over the kinds of error handling I'm used to from other languages.

The best shot in The Last Jedi

I want to talk about my favorite shot in The Last Jedi. It's not the one most people would assume, where Holdo does her lightspeed jump into the Supremacy. That Holdo scene is breathtaking, but that is because of how well all the different pieces of it come together: the different shots, the audio design, and the drop in saturation. All of that together is amazing, but for a single shot my favorite is something else.

I don't think I am the only one who liked this shot, because it shows up in the trailers. It's the shot from straight above as Kylo Ren and the stormtroopers march into the rebel base. From above their formation looks kind of like an arrowhead or spearhead, with Kylo as the tip. The ground below them has these scorches of black and red, almost like the ground itself has been wounded by the arrow and is bleeding. With the light shining in from behind them they cast these long shadows, almost like dark fingers reaching into the base. The dark cave and the black Kylo is wearing contrast amazingly well with the white of the stormtroopers lit by the bright light from behind.

Last Jedi shot

I love this shot not just for the colors and the composition but for how evocative it is. The symbolism makes the First Order feel both menacing and unstoppable, and it puts Kylo Ren front and center in that threat. It was this shot in the trailers, more than anything else, that hyped me up for seeing The Last Jedi.

Seeing the shot on the big screen it looked even better than in the trailers. That moment was undermined by the context in the movie, though. When this shot happens the resistance has already fled through a backdoor and Kylo is marching ominously into an empty base. That sort of pulls the rug out from under the emotional impact of the shot.

Go-like defer functionality in Python

Go has a nice keyword defer, which is used to defer the execution of some code until the surrounding function returns. It is useful for registering cleanup work to be done before the function returns. The prototypical example is closing a file handle:

package main

import (
  "os"
)

func main() {
  // Open a file
  file, _ := os.Open("test.txt")
  // I'm leaving out error handling and other stuff you would have in reality.

  // Lots of code here

  // Close the file handle when done
  file.Close()
}

You must remember to close the file handle when done with it, which means calling file.Close() before you return from the function. That can be tricky if you write a lot of code in between opening the file and closing it. There can also be multiple early returns, and you need to remember to close the file before each of them. Even worse, the function could panic, and you would still want the file to be closed.

As a solution to this, Go offers the defer statement. defer essentially tells Go to run some code when the function returns, no matter how it returns, even if it panics. So, you would instead do something like this:

package main

import (
  "os"
)

func main() {
  // Open a file
  file, _ := os.Open("test.txt")
  defer file.Close()
  // I'm leaving out error handling and other stuff you would have in reality.

  // Lots of code here
}

Here Go will defer running file.Close() until the function returns. This is one of the niceties I really like about Go. It lets me schedule cleanup at the same time as I create the mess, and whatever happens later it will be taken care of.

Sadly, in my DayJob I don't program in Go but in Python, which left me feeling bereft of defer. Python has the with construct, which sort of does this, but it is not as neat once you have a lot of file handles and other stuff to clean up. So I set out to whip up a replacement in Python. Using it looks like this:

@defer
def main(defer):
  file = open('test.txt')
  defer(lambda: file.close())

I am using a decorator, defer, to inject a defer function as an argument into the Python function. I then pass that function a lambda with the code I want to run on return. I need the lambda in order to delay running the code; otherwise the file.close() would run immediately. This is not as slick as the Go version, but I can't add language built-ins.

The implementation of defer is the following:

from functools import wraps

def defer(func):
  @wraps(func)
  def func_wrapper(*args, **kwargs):
    # Collect all the callables registered via defer during the call.
    deferred = []
    defer = lambda f: deferred.append(f)
    try:
      # Inject the defer function as a keyword argument.
      return func(*args, defer=defer, **kwargs)
    finally:
      # Run the deferred callables in reverse registration order,
      # even if the function raised an exception.
      deferred.reverse()
      for f in deferred:
        f()
  return func_wrapper

Aside from the standard boilerplate for creating a Python decorator, the implementation is straightforward. I create a deferred list to store all the lambdas that are deferred. I then create a defer lambda that just appends whatever it receives to the deferred list. I pass the defer lambda to the wrapped function so that it can use it to add things to the deferred list. Finally, after the function has run, I reverse the deferred list so that things are cleaned up in the reverse order they were created, and then loop through all the deferred lambdas stored in deferred and run them.

I use try/finally so that the cleanup is run even if the function raises an exception instead of returning normally.
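As a quick, hypothetical usage example, here the second file is closed before the first one, and both get closed even if the copy raises an exception:

@defer
def copy_first_line(defer):
  src = open('test.txt')
  defer(lambda: src.close())
  dst = open('copy.txt', 'w')
  defer(lambda: dst.close())
  # Even if this line raises, both files get closed, dst before src.
  dst.write(src.readline())

copy_first_line()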