News from plus.maths.org

Keeping up with COVID-19
Many of us wait for the daily numbers announcing the state of the pandemic: people testing positive for the first time, hospitalisations, and sadly, deaths in the last 24 hours. But you might have wondered why two numbers are missing from the daily government statistics: how many people currently have COVID-19 and how many new COVID-19 infections there have been in the UK.
See here for all our coverage of the COVID-19 pandemic.
Accurately knowing these numbers, the prevalence and incidence of the disease, would seem to be vital during the pandemic, but these numbers aren't announced alongside other daily statistics. Since we can't possibly test everyone in the population all the time, these numbers are hard to come by. This is why the Office for National Statistics (ONS) started the COVID-19 Infection Survey back in April 2020. The results and analysis of the ONS survey are reported weekly to Government and the public.
The ONS Survey
The ONS survey has provided something no other studies have: it follows the same households over time, with the surveyed households forming a sample that approximately represents the whole population. Initially everyone in the household is tested for COVID-19 every week under supervision, moving on to monthly tests after the first four weeks. This study offers the opportunity to observe, in close to real time, the prevalence and incidence of COVID-19 in these households, whether or not people have symptoms. In this way the ONS survey gives a much clearer picture of the state of the pandemic in the community than the data from the NHS Test and Trace programme, which reports one-off tests primarily taken in response to people developing symptoms of COVID-19.
A series of COVID-19 test results for a participant in the ONS survey.
The ONS survey provides a direct measure of prevalence, in a similar way to how polling gives a direct measure of people's voting intentions. Indeed, methods similar to those used in polling – called multilevel regression and poststratification (MRP) – are used to extrapolate from the survey sample to the whole population of the UK. (You can read more about the methods used in the ONS survey here.)
Researchers can also use the data to estimate the incidence and duration of infection, via a more indirect route. Suppose a participant tested negative in week 1, positive for COVID-19 in week 2, positive again in week 3, and negative in week 4. This sequence of test results is the thing the ONS survey captures that isn't captured anywhere else: the survey detects the change from a person not having the virus in week 1, to testing positive for COVID-19, and then recovering at least two weeks later.
"Given the study design, we don't know exactly when they started and stopped testing positive," says Thomas House, a mathematician from the University of Manchester and a member of the JUNIPER consortium. House is one of the team of academic collaborators with the ONS on this ongoing project. "All you can really do is calculate the earliest and latest times for these events." This is an example of a censoring problem in statistics, where all you know about a data point is that it lies within some observed range.
A number of mathematical approaches can be used to try to manage such censored data, but in the case of the ONS survey the first approach used was remarkably straightforward. In essence, the team of academic collaborators on the survey split their bets with something called midpoint imputation. They know the participant was negative on the date tested in week 1, and positive on the date tested in week 2. So they split the time between these known facts in half, and assume the person became infected in the middle of this interval.
Midpoint imputation assumes that the actual date a participant became infected, or recovered, was halfway between the change in their observed test results.
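In code, midpoint imputation is a one-liner. Here is a minimal sketch (the test dates below are invented purely for illustration):

```python
from datetime import datetime

# Midpoint imputation: we only know the participant was negative at one
# visit and positive at the next, so we place the infection date halfway
# between the two test dates.
def impute_midpoint(last_negative: datetime, first_positive: datetime) -> datetime:
    return last_negative + (first_positive - last_negative) / 2

# Negative on 2 November, positive on 9 November:
# assume infection around midday on 5 November.
print(impute_midpoint(datetime(2020, 11, 2), datetime(2020, 11, 9)))
```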
Duration
The initial weekly phase of the ONS survey didn't just provide information on the state of the pandemic in the UK; it also provided vital evidence about the disease itself. In particular, the weekly follow-up provided the opportunity to gather data on how long people had the disease, whether or not they had any symptoms.
As you'd expect, this duration isn't a fixed number – some people will only test positive for a few days while some will test positive for weeks. House explains that the best way to mathematically describe the duration of the disease is in terms of the probability that someone still tests positive at some point in time after they first caught the infection. Mathematically we'd write this as a function over time (with the variable name Dur(t)), giving the conditional probability:

Dur(t) = the probability that someone still tests positive a time t after infection, given that they were infected at time 0.
An approximate sketch of the duration function Dur(t), showing the probability that someone is still positive for COVID-19 as time passes after they were infected.
The ONS survey gave an opportunity to estimate Dur(t) directly, in as close to real time as possible. The researchers now have a much clearer mathematical description of the duration of someone's infection with COVID-19.
From prevalence to incidence
The number of new positive tests is reported as part of the daily COVID-19 statistics. This is solid data that gives a reasonably good picture of the growth and ebb of the disease as we monitor the rise and fall of the number of new positive tests. But the reported number of new positive tests doesn't include people who have the disease but have not been tested: perhaps because they don't have symptoms; they are not able to access a test; or they have some other reason not to take a test. Furthermore, the day someone takes the test, and the day the result of that test is reported, will almost certainly not be the first day they had COVID-19. The number of new positive tests reported each day isn't the same as the true incidence of the disease at that time.
"When we were still following up most of the participants weekly we were trying to work out the incidence and prevalence directly," says House. "But that's stopped now as too many people have moved to monthly follow-ups." Once you are only testing people monthly you'll miss too many people's initial positive tests to have an accurate measurement of incidence. But the ongoing snapshots of the number of positive tests in the sample of the population still provide a good measure of the true prevalence, as does the other significant COVID-19 study, the REACT study, which randomly tests 150,000 people a month.
"Instead we are moving to calculating the incidence from the prevalence, as that's the only way you can do this once you are relying on monthly follow-ups," says House. This is possible thanks to a mathematical relationship between the prevalence, incidence and duration of the disease.
The prevalence of the disease includes the people who caught COVID-19 today: which we'll write as a function of time, Inc(t), where today is time t.
The prevalence also includes the people who caught the disease yesterday (given by Inc(t - 1) if we are measuring time in days) who still have the disease today. We know from our discussion of the duration of the disease above that the people who caught the disease 1 day ago will have a probability of Dur(1) of still testing positive, so the number of them still testing positive today is Inc(t - 1) x Dur(1).
The prevalence of the disease today also includes the people who caught the disease two days ago who are still testing positive today: given by Inc(t - 2) x Dur(2). And we can continue with this train of thought to give the prevalence of the disease to be:

Prev(t) = Inc(t) + Inc(t - 1) x Dur(1) + Inc(t - 2) x Dur(2) + Inc(t - 3) x Dur(3) + ...
The prevalence of a disease today is the sum of the people who caught the disease today, plus those who caught it yesterday and are still infected today, plus those who caught it two days ago and are still infected today... and so on. The light blue rectangles illustrate the incidence on previous days, and the dark blue rectangles illustrate the proportion of those (given by Inc(t - k) x Dur(k), where k is the number of days ago) who are still infected today.
Statistics are announced every day, but this division of time into day-length steps is quite arbitrary. Mathematically, it can be easier to work with smaller and smaller time steps, and if you take this to the limit you benefit from all the powers of calculus. At this point, rather than daily or hourly counts, you can describe the prevalence for any point in time t as an integral over the time, τ, since people caught the disease:

Prev(t) = ∫ Inc(t - τ) Dur(τ) dτ,

where the integral runs over all times τ from zero to infinity.
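The day-by-day sum is easy to check numerically. Here is a minimal sketch with a made-up incidence series and a made-up duration function (neither is fitted to real data): the prevalence is just the duration-weighted sum of past incidence.

```python
# Prev(t) = Inc(t) + Inc(t-1) x Dur(1) + Inc(t-2) x Dur(2) + ...
# Both series below are invented purely for illustration.
DAYS = 30
inc = [100 + 10 * t for t in range(DAYS)]   # hypothetical new infections per day
dur = [0.9 ** k for k in range(DAYS)]       # P(still positive k days after infection)

def prevalence(t):
    # Everyone infected on day t - k contributes with weight Dur(k).
    return sum(inc[t - k] * dur[k] for k in range(t + 1))

print(prevalence(0))   # only today's infections contribute: 100.0
```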
This is a lovely mathematical result, but it is also very useful. House explains that in the weekly phase of the ONS survey they were trying to measure each of these three things – the incidence, the duration and the prevalence – separately. Now that they are measuring the prevalence using the monthly follow-ups, they can still use this mathematical relationship to calculate the incidence of the disease.

House explains that this approach illustrates some of the innovations mathematicians have had to make, moving from their usual experience of working on a complete data set for a past disease, to dealing with a stream of live data from a disease progressing in real time. "Trying to work out directly in real time the incidence of a disease has never been done before. That's really challenging." Thankfully people like Thomas House and the rest of the researchers working on the team are rising to the challenge.
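To see why prevalence plus duration pins down incidence, note that the relationship can be unwound one day at a time: since Dur(0) = 1, today's incidence is today's prevalence minus the contributions of earlier infections. The sketch below is a toy illustration of this idea only; the real survey analysis uses statistical models, not this naive inversion.

```python
# Invert Prev(t) = sum over k of Inc(t-k) x Dur(k), step by step:
#   Inc(t) = Prev(t) - sum over k >= 1 of Inc(t-k) x Dur(k).
dur = [0.9 ** k for k in range(30)]          # made-up duration function

def incidence_from_prevalence(prev):
    inc = []
    for t in range(len(prev)):
        earlier = sum(inc[t - k] * dur[k] for k in range(1, t + 1))
        inc.append(prev[t] - earlier)
    return inc

# Round trip: build a prevalence series from a known incidence, then recover it.
true_inc = [100, 120, 90, 150, 110]
prev = [sum(true_inc[t - k] * dur[k] for k in range(t + 1)) for t in range(5)]
recovered = incidence_from_prevalence(prev)
print([round(x) for x in recovered])   # [100, 120, 90, 150, 110]
```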
About this article
Thomas House
Thomas House is Reader in Mathematical Statistics at the Department of Mathematics at the University of Manchester and a member of the JUNIPER modelling consortium and the modelling group SPIM, and contributes to the Scientific Advisory Group for Emergencies (SAGE).
Rachel Thomas is Editor of Plus.
Thank you to Sarah Walker, Professor of Medical Statistics and Epidemiology at the University of Oxford, for her help with this article. She is one of the team of academic collaborators on the ONS survey.
This article was produced as part of our collaboration with JUNIPER, the Joint UNIversity Pandemic and Epidemic Response modelling consortium. JUNIPER comprises academics from the universities of Cambridge, Warwick, Bristol, Exeter, Oxford, Manchester, and Lancaster, who are using a range of mathematical and statistical techniques to address pressing questions about the control of COVID-19. You can see more content produced with JUNIPER here.

The Quantum Menagerie
Quantum mechanics is weird, and it is hard. One reason it's hard is that the physical phenomena which gave rise to it represent a fundamental contradiction of our everyday experience of the world. A second reason is that there are three equivalent theories of quantum mechanics, developed almost simultaneously in the 1920s, each with its own mathematical notation.
But don't despair. The book The Quantum Menagerie by James V Stone (published December 2020) explains quantum mechanics using a finely balanced combination of words, diagrams and mathematics. The result is a tour of the most intriguing aspects of the theory, including Einstein's "spooky action at a distance", Bell's inequality, Schrödinger's cat, Heisenberg's uncertainty principle, and de Broglie's matter waves.
To get a taster of the book, read Stone's Plus article Using quantum mechanics to edit the past. If you like to marvel at quantum strangeness then this article won't disappoint.

Using quantum mechanics to edit the past
There are many strange things about quantum mechanics, but perhaps the strangest is that it appears to allow us to edit the past. To understand how this might be possible, we first need to know about the double slit experiment. For this, a brief summary is given below, and a more detailed account is given here.
The double slit experiment
The double slit experiment, first performed in 1801 by the physicist Thomas Young, appeared to provide an unequivocal demonstration that light behaves like a wave. But the importance of the double slit experiment extends far beyond that demonstration because, as Richard Feynman said in 1966:
"In reality, it contains the only mystery... In telling you how it works, we will have told you about the basic peculiarities of all quantum mechanics."
In Young's experiment the experimental apparatus consists of a barrier with two vertical slits and a screen, as shown in Figure 1 below. Light emanating from each slit interferes with light from the other slit to produce an interference pattern on the screen. This seems to prove conclusively that light consists of waves.
Figure 1: The double slit experiment. Waves travel from the source (top) until they reach a single slit. A semicircular wave emanates from the slit until it reaches a barrier, which contains two slits. The two semicircular waves emanating from these slits interfere with each other, producing peaks and troughs along radial lines that form an interference pattern on a screen (bottom).
However, with modern equipment, we can also see that the interference pattern consists of individual dots, each of which corresponds to a single particle of light, called a photon. This is, in essence, the famous wave-particle duality.
Remarkably, even when the light is made so dim that only one photon at a time reaches the screen, the same type of interference pattern builds up over time, as shown in Figure 2. With such a low photon rate, it can take several weeks for the interference pattern to emerge. However, the very fact that an interference pattern emerges at all implies that even a single photon behaves like a wave. This, in turn, seems to imply that each photon passes through both slits at the same time, which is clearly nonsense.
Figure 2: Emergence (a to d) of an interference pattern in a double slit experiment, where each dot represents a photon. Image: Dr. Tonomura and Belsazar, CC BY-SA 3.0.
This raises the question: did the photon really pass through only one slit, and if so, which one?
Did the photon pass through one slit or two?
Before we answer this question, let's think about what we'd see if there was no interference. Imagine we closed one slit, and left the other open, so the photons from that slit are allowed to form a pattern on the screen. Next, only the other slit is left open, and the photons from that slit are allowed to add their pattern to the screen. Using this procedure, the image captured by the photographic plate is the sum of photons from each slit, with the guarantee that photons from the two slits could not possibly have interfered with each other. The result is a diffraction envelope, as shown in Figure 3.
Figure 3: Measuring position at the slits (slit identity) results in a broad diffraction envelope. Here, the height of the curve indicates light intensity.
Now, to attempt to answer the question of whether the photons passed through one or both slits, we could place a photodetector at each slit to find out which slit each photon passed through. However, for reasons that will become clear later, we will consider a different setup. Imagine replacing the screen with an array of long tubes, each of which points at just one slit, as shown in Figure 4. At the end of each tube is a photodetector, such that any photon it detects could have come from only one slit. Note that there should be a pair of detectors at every screen position, with each member of the pair pointing at a different slit. Thus, irrespective of where a photon lands on the screen, the slit from which it originated is measured.
Figure 4: An imaginary experiment for demonstrating wave–particle duality.
If we were to use this imaginary apparatus then we would find that the interference pattern is replaced by the diffraction envelope described above, and as shown in Figure 3. Thus, using detectors to measure which slit each photon passed through (its slit identity) prevents any wavelike behaviour, just as if each photon had travelled in complete isolation as a single particle. If both slits are left open (and no photodetectors are used) then the original interference pattern is restored, as if the individual photons behave like waves (see Figure 5). This wave–particle duality seems to suggest that the way the experiment is measured has a mysterious impact on its outcome.
Figure 5: When both slits are left open an interference pattern appears. Here, the height of the curve indicates light intensity.
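Both patterns can be sketched with a little trigonometry. In the idealised (Fraunhofer) picture, the two-slit intensity is a cos² interference term modulated by a sinc² diffraction envelope from each slit's finite width; measuring slit identity leaves only the envelope. All the parameters below are illustrative, not Young's actual apparatus.

```python
import math

WAVELENGTH = 500e-9    # 500 nm light (illustrative value)
SLIT_SEP = 50e-6       # distance between slit centres
SLIT_WIDTH = 10e-6     # width of each slit
SCREEN_DIST = 1.0      # slit-to-screen distance in metres

def envelope(x):
    """Single-slit diffraction envelope at screen position x (as in Figure 3)."""
    sin_theta = x / math.hypot(x, SCREEN_DIST)
    beta = math.pi * SLIT_WIDTH * sin_theta / WAVELENGTH
    return (math.sin(beta) / beta) ** 2 if beta else 1.0

def interference(x):
    """Two-slit pattern: envelope times cos^2 fringes (as in Figure 5)."""
    sin_theta = x / math.hypot(x, SCREEN_DIST)
    alpha = math.pi * SLIT_SEP * sin_theta / WAVELENGTH
    return envelope(x) * math.cos(alpha) ** 2

# Bright fringe at the centre; a dark fringe about 5 mm away, where the waves
# from the two slits cancel even though the envelope is still bright there:
print(interference(0.0), interference(0.005), envelope(0.005))
```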
Heisenberg's uncertainty principle
As counterintuitive as the result above seems, it is consistent with the most famous principle of quantum physics: Heisenberg's uncertainty principle. It states that it is not possible, in principle, to know with absolute certainty both the position and momentum of a particle – the more accurately the position is known, the less accurately momentum can be known, and vice versa (find out more here). This is not because the measuring instruments are imperfect, but because the notions of position and momentum are inherently fuzzy in the quantum world.
Heisenberg's uncertainty principle relies on the fact that, because both light and matter behave like waves, they can be analysed using Fourier analysis (the main mathematical tool for dealing with waves). This is important because Fourier analysis was used by Heisenberg to derive a mathematical theorem, Heisenberg's inequality (as it is now known), which underpins Heisenberg's uncertainty principle.
In the double slit experiment, the slit identity of a photon can be treated as its position at the barrier. Because the momentum of a particle specifies its direction of travel, the particle's final position on the screen is also specified by its momentum (see Figure 6a). Thus, Heisenberg's uncertainty principle, which is usually stated in terms of position and momentum, guarantees that reducing the uncertainty in slit identity (position) must increase uncertainty in screen position (momentum). So position–momentum uncertainty translates to slit identity–screen position uncertainty in the double slit experiment.
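The Fourier trade-off behind Heisenberg's inequality can also be seen numerically: squeeze a wave packet in position and its spread of momenta widens, with the product of the two widths pinned near the Gaussian minimum of 1/2. This sketch (arbitrary units, an arbitrary grid) just checks the trend with a discrete Fourier transform:

```python
import numpy as np

def packet_widths(sigma, n=4096, span=100.0):
    """Position and momentum spreads of a Gaussian wave packet (arbitrary units)."""
    x = np.linspace(-span / 2, span / 2, n)
    psi = np.exp(-x**2 / (2 * sigma**2))              # Gaussian wave packet
    p_x = psi**2 / np.sum(psi**2)                     # position probabilities
    width_x = np.sqrt(np.sum(p_x * x**2))
    spectrum = np.abs(np.fft.fft(psi))**2             # momentum probabilities...
    k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])  # ...at these wavenumbers
    p_k = spectrum / np.sum(spectrum)
    width_k = np.sqrt(np.sum(p_k * k**2))
    return width_x, width_k

narrow_x, narrow_k = packet_widths(sigma=0.5)
wide_x, wide_k = packet_widths(sigma=2.0)
# Narrower in position means broader in momentum; the product stays near 1/2:
print(narrow_x * narrow_k, wide_x * wide_k)
```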
Wheeler's delayed-choice experiment
So far, we have chosen to measure two different aspects of each photon: first, measuring each photon's screen position (which translates into momentum at a slit); and second, using tube detectors to measure slit identity (which translates into position at the barrier).
Measuring photon screen position accurately means that the complementary measurement, slit identity, is uncertain. Because the experimental setup does not change in the time taken for the photon to travel from the barrier to the screen, this uncertainty makes it seem plausible (or at least acceptable) that each photon could have passed through both slits. Similarly, if slit identity is measured accurately, it seems plausible that each photon passes through only one slit.
But there is an alternative experiment, which involves changing the experimental setup while each photon is in transit between the slits and the screen. If the decision about what is measured alters how each photon behaves then it seems reasonable that this decision must be made before each photon starts its journey from the barrier to the screen. However, what if the decision on whether to measure screen position or slit identity is made after each photon has passed through the slit(s), but before it has reached the screen or tube detectors? This is Wheeler's delayed-choice experiment, named after the physicist John Wheeler, and depicted in Figure 6 below.
Such an experiment is conceptually easy to set up. Once a photon is in transit between the slits and the screen, we can decide whether to measure the photon's screen position (by leaving the screen in place, Figure 6b), or slit identity (by removing the screen so that the tube detectors can function, Figure 6a).
Figure 6: An imaginary experiment for demonstrating wave–particle duality. a) If slit identity (photon position at the barrier) is measured using an array of oriented detectors at the screen then momentum (direction) precision is reduced, and a diffraction envelope is observed on the screen, as in Figure 3. b) If slit identity is not measured then the location at which a photon hits the screen effectively measures its direction (photon momentum) with high precision, and an interference pattern is observed. In Wheeler's delayed-choice experiment, the decision to measure either a) slit identity, or b) photon momentum is made after the photon has passed through the slit(s).
In 2007 the physicist Vincent Jacques and his colleagues published the results of an experiment which is conceptually no different from the experiment described above, using measurement devices called interferometers. Translating from the Jacques interferometer experiment, the photons' screen positions are effectively measured by leaving the screen in place, yielding an interference pattern on the screen. This fits with the account above: accurate measurement of screen position corresponds to each photon passing through both slits, which is consistent with wavelike behaviour. In contrast, when the screen is removed, an array of detectors is revealed that detects each photon's slit identity, and the resultant diffraction envelope is consistent with photons behaving like particles.
Crucially, the decision to measure screen position (by leaving the screen in place) or slit identity (by removing the screen) was made (at random, and by a physical random number generator) after each photon had passed through the slit(s); so the behaviour of each photon as it passed through the slit(s) depended on a decision made after that photon had passed through the slit(s). In essence, it is as if a decision made now about whether to measure slit identity or screen position of a photon (that is already in transit from the slits to the screen) retrospectively affects whether that photon passed through just one slit or both slits.
In principle, the slit–screen distance can be made so large that it takes billions of years for each photon to travel from the slit(s) to the screen. In this case, a decision made now about whether to measure the slit identity or screen position of a photon seems to retrospectively affect whether that photon passed through just one slit or both slits billions of years ago.
As we should expect, these results are consistent with Heisenberg's uncertainty principle. Regardless of when the decision is made, if the detectors measure slit identity then this must increase uncertainty regarding the photon's screen position, which causes the interference pattern to be replaced by a simple diffraction pattern. It is a mystery how any physical mechanism could trade uncertainty in slit identity for uncertainty in screen position, and it is even more of a mystery how this trade could work backwards across time. However, the fact remains that if such a mechanism did not exist then Heisenberg's uncertainty principle would be violated.
Editing the past
So, suppose we wanted to edit the past. First, how could we do so, and second, how far back in time could that edit be?
Well, to answer the first question (and as described above), a decision made now about whether or not to leave the screen in place seems to determine how photons behaved in the past. Specifically, if we wish to ensure that each photon passed through just one slit then we should remove the screen (allowing the detectors to measure which slit each photon exited). Conversely, if we wish to ensure that photons passed through both slits then we should leave the screen in place. In both cases, this decision can be made after the photons have passed through the slit(s).
Second, the temporal range of our edit depends on how long the photons have been in transit from the double slits to the screen/detectors. It looks as if we need a double slit experiment that was set up in the distant past. However, there are natural phenomena, like gravitational lensing, which can effectively mimic the double slit experiment, as if the slits are so far away that the photons we measure have been in transit for many years. Thus, in principle, temporal editing can reach back to the big bang, some 14 billion years ago.
Finally, it should be noted that not everyone (including Wheeler) agrees that Wheeler's delayed-choice experiment edits the past. Like most of quantum mechanics, the equations that define the results of the delayed-choice experiment do not have a single unambiguous physical interpretation. Indeed, the difficulty of interpretation can only be fully appreciated by understanding the equations that govern quantum mechanics. Fortunately, those equations are usually less daunting than the underlying physics they seek to describe.
Further reading
This article is based on an extract from the book The Quantum Menagerie by James V Stone (published December 2020). You can see Chapter 1, the table of contents, and book reviews here.
You can also find out more about quantum mechanics in our package Who killed Schrödinger's cat?
About the author
James V Stone is an Honorary Associate Professor at the University of Sheffield, UK.

The hardest logic puzzle ever
The hardest logic puzzle ever is a riddle by the logicians Raymond Smullyan and John McCarthy that became famous in the 1990s.
Who is True, who is False, and who is Random?
Imagine there are three oracles called True, False, and Random. True always tells the truth, False always lies and Random randomly tells the truth or lies. Your task is to find out which oracle is which by asking them some yes/no questions. But there is an additional difficulty: the oracles will reply in their mystic language, either saying "DA" or "BAL" and you don't know which one of these words means yes and which one means no.
Surprisingly, three questions are enough to solve the riddle.
Before you start pondering this, here are three things that are important to know:
 The random oracle will decide whether to lie or tell the truth by secretly tossing a coin.
 You can ask any yes/no question to any oracle of your choice.
 You do not have to ask all questions at once. In fact, you can make use of the previous answers to adapt your questions and choose the oracle that you ask them to.
If you are struggling to find the solution, don't despair. After all, it's called the hardest logic puzzle ever. Here is the general idea behind the solution. With the first question you identify an oracle X which is not Random. You then ask the two further questions of X. The second question identifies whether X is True or False, and the third question identifies the other two oracles.
If and only if
The solution makes use of the logical concept of if and only if, which works as follows: given two statements A and B, the statement A if and only if B is true whenever A and B are both true or both false, otherwise it is false. This means that the statement
2+2=4 if and only if the Moon is made of cheese
is false because one component is true and one is false. On the other hand the statement
2+2=5 if and only if the Moon is made of cheese
is true because both components are false. These examples don't quite chime with our intuitive interpretation of what "if and only if" might mean, but try not to worry about this. Just keep in mind the logical definition given above. (If and only if is the negation of something called XOR, which you can read about here.)
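In code, "A if and only if B" is simply the statement that A and B have the same truth value. A quick sketch of the two Moon examples above in Python:

```python
def iff(a: bool, b: bool) -> bool:
    # "A if and only if B": true exactly when both sides match.
    return a == b

moon_is_cheese = False
print(iff(2 + 2 == 4, moon_is_cheese))  # False: one side true, one side false
print(iff(2 + 2 == 5, moon_is_cheese))  # True: both sides false
```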
Starting simple
Now let's go back to our riddle and start with a simplification. Let's suppose that DA means "yes" and BAL means "no".
First suppose you have already asked the first question to some oracle, which has identified a different oracle X that isn't Random. Can you then think of a simple question that will reveal who X is? Afterwards, can you think of a simple question for X which will reveal who the other two oracles are?
If not, then here are the answers. To find out if X is True or False you simply ask a question to which you already know the answer, such as:
Is 2 plus 2 equal to 4?
To identify the other two oracles you can then ask X:
Is this oracle (pointing at the oracle you asked the first question) Random?
Since you know if X is True or False, you can determine whether the oracle you asked the first question is Random or not. By elimination you identify the third oracle as well.
The first question
Suppose again that DA means "yes" and BAL means "no" (we will take care of the language issue later on) and let us turn to the crucial first question, which is the most difficult one. Pick any oracle O and ask:
Are you True if and only if this oracle (where you point at one oracle P different from O) is Random?
The Moon is not made from cheese, and neither does it have rings.
Suppose the answer you get from O is "yes". If O is True, then the oracle P must be random. That's because the statement
O is True if and only if P is Random
is true whenever both components are true, or both components are false.
If the answer is "yes" and O is False, then the true answer should have been "no, the statement 'O is True if and only if P is Random' is false". Since the first part of the statement, "O is True", is false, the second part of the statement, "P is Random", must be true (from the definition of if and only if).
So whether O is True or False, an answer of "yes" tells us that P is Random, which means that the third oracle, call it Q, is not Random.
What does an answer of "yes" tell us if O is Random? Well, it doesn't tell us with certainty what P is, but that doesn't matter. In this case, because O is Random, the third oracle Q is definitely not Random.
In summary, then, an answer of "yes" tells us with certainty that the third oracle Q is not Random, no matter what O is.
What if O answers "no"? Then if O is True this implies that P is not Random. If O is False, then the true answer should have been "yes, the statement is true". Since the first part of the statement
O is True if and only if P is Random
is false in this case, this means that the second part "P is Random" is also false.
So whether O is True or False, an answer of "no" tells us that P is not Random.
What does an answer of "no" tell us if O is Random? Well, it tells us that P can't be Random, because O already is.
In summary, then, an answer of "no" tells us with certainty that P is not Random, no matter what O is.
In both cases, whether O answers "yes" or "no", we have identified one oracle as not being Random. If O answers "yes", then the third oracle Q is not Random. And if O answers "no", then the oracle P is not Random. This identifies our non-Random oracle X from above. We can now proceed with the other two questions as described to solve the riddle. The diagram below illustrates the answers our three questions will give us.
This diagram illustrates how the various answers to the three questions allow us to identify all three oracles. Here T stands for True, F stands for False, and R stands for Random.
Note that the role of the if and only if here was to get certainty from the answers of O without knowing whether O is True, False or Random.
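The case analysis above can be checked exhaustively in a few lines. This sketch (still assuming DA means "yes") runs the first question against every assignment of roles and both outcomes of Random's coin toss, and confirms that the oracle we identify is never Random:

```python
from itertools import permutations

def answer(role, statement_is_true, coin):
    # True answers honestly, False lies, Random follows its secret coin toss.
    if role == "True":
        return statement_is_true
    if role == "False":
        return not statement_is_true
    return statement_is_true if coin else not statement_is_true

always_safe = True
for o, p, q in permutations(["True", "False", "Random"]):
    for coin in (True, False):
        # Ask O: "Are you True if and only if P is Random?"
        statement = (o == "True") == (p == "Random")
        says_yes = answer(o, statement, coin)
        x = q if says_yes else p        # "yes" points us at Q, "no" at P
        always_safe = always_safe and (x != "Random")
print(always_safe)  # True: the identified oracle X is never Random
```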
DA and BAL
Now consider the original riddle, where you do not know the meaning of DA and BAL. Here we need to introduce another layer of if and only if, which will allow us to elicit the same information as we did before, without knowing which of DA and BAL means "yes" and which means "no". Namely, by adding "if and only if DA means yes" at the end of all three questions, it turns out that we can interpret the answer DA as a positive answer to the original question, and the answer BAL as a negative answer to the original question. We will see an example of this in the next section; first, here is the general solution to the riddle.
You start by picking an oracle O and ask it, while pointing to another oracle P,
Are you True if and only if P is Random, if and only if DA means yes?
This statement is true whenever the two statements
You are True if and only if P is Random
and
DA means yes
are both true or both false, otherwise it is false.
As before, this question will identify an oracle X, different from O, that is not Random.
Then you ask of X:
Is 2 plus 2 equal to 4 if and only if DA means yes?
This will tell you whether X is True or False. Finally you ask of X:
Is this oracle (pointing at the oracle O you asked the first question of) Random if and only if DA means yes?
As before, this will tell you whether O is Random or not, and since you already know what X is, it solves the riddle.
Really?
To convince you that this works, let's look at the last question. You already know at this point whether X is True or False. Let's assume X is True. You are asking X about O, the oracle you asked the first question of.
This painting by John Collier depicts Pythia, the oracle of Delphi (who could say more than just DA and BAL).
Suppose that X answers DA and that DA means "yes". Then O is Random. Now suppose that DA means "no". Then O is also Random. That's because in this case the statement
DA means yes
is false, so since X said DA, which means "no", the statement
O is Random
is true.
Now suppose that X answers BAL and that BAL means "yes". Then O is not Random. That's because in this case the statement
DA means yes
is false, so since X said BAL, which means "yes", the statement
O is Random
must also be false.
Similarly, if BAL means "no" then O is also not Random. That's because in this case the statement
DA means yes
is true, so since X said BAL, which means "no", the statement
O is Random
must be false.
What we have just shown is that, if X is True, then an answer of DA tells us that O is Random and an answer of BAL tells us that O is not Random, even though we don't know what DA and BAL mean.
A very similar argument shows that when X is False, an answer of DA means that O is not Random and an answer of BAL means that O is Random. In summary, we can work out whether O is Random or not from X's answer without knowing which of DA and BAL means "yes" and which means "no".
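This case analysis can be checked mechanically. The sketch below (in Python; all names are ours) simulates X's answer to the third question for every combination of X's type, O's nature, and the meaning of DA, and confirms the interpretation above:

```python
def iff(a, b):
    # "A if and only if B" is true exactly when A and B agree.
    return a == b

def answer(x_is_true, o_is_random, da_means_yes):
    """The word X utters to 'Is O Random if and only if DA means yes?'."""
    truthful = iff(o_is_random, da_means_yes)          # the honest yes/no
    says_yes = truthful if x_is_true else not truthful  # False always lies
    # Translate the yes/no into the word DA or BAL.
    if says_yes:
        return "DA" if da_means_yes else "BAL"
    else:
        return "BAL" if da_means_yes else "DA"

# Check: when X is True, DA always signals "O is Random";
# when X is False, DA always signals "O is not Random".
for da_means_yes in (True, False):
    for o_is_random in (True, False):
        assert (answer(True, o_is_random, da_means_yes) == "DA") == o_is_random
        assert (answer(False, o_is_random, da_means_yes) == "DA") == (not o_is_random)
print("All cases check out.")
```

Running it confirms that X's one-word answer pins down whether O is Random, even though we never learn what DA and BAL mean.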
It is also possible to show that the answer to the first question identifies an oracle X that is not Random, and the answer to the second question identifies whether X is True or False, even though you don't know which of DA and BAL means "yes" and which means "no". We will leave this to you as an exercise.
Another puzzle for you
In a TV show you are shown two envelopes. If you open the correct envelope you win a huge prize, while if you open the wrong one you win nothing. There are two people, both of whom know which envelope contains the prize, and you can ask one of them a single yes/no question. One person always tells the truth and the other always lies (and you don't know who is the truth-teller and who is the liar). Which question can you ask in order to identify the correct envelope?
You can see the solution here.
About the author
Antonella Perucca is Professor for Mathematics and its Didactics at the University of Luxembourg. She is a researcher in number theory and invents mathematical exhibits. To find out more, explore her webpage.

Vaccination: Where do we stand and where are we going?
The rollout of COVID-19 vaccines in the UK is going well, so it seems there are grounds for optimism. But what can we really say about where the vaccines have got us so far and where we are likely to be when the rollout is complete?
See here for all our coverage of the COVID-19 pandemic.
Recent work by a team from the JUNIPER modelling consortium provides some insight. It suggests that vaccination on its own is highly unlikely to eradicate COVID-19. However, it has a key role to play in getting us out of lockdown. Here caution is crucial, the work suggests: the slower the pace of relaxing the rules, the flatter the curves of future infection waves.
The team consists of epidemiologists from the University of Warwick: Sam Moore, Edward Hill, Michael Tildesley, Louise Dyson, and Matt Keeling. Most of the team contribute to the Scientific Pandemic Influenza Group on Modelling (SPI-M) and Keeling is also a member of the Joint Committee on Vaccination and Immunisation.
How good are the vaccines deployed in the UK proving to be?
There's a lot of uncertainty surrounding almost every aspect of the pandemic, and this applies to vaccines too. When they were first approved, clinical trials had shown that the Pfizer/BioNTech and Oxford/AstraZeneca vaccines were safe and effective enough to be put to use, but precise figures on just how well they work can only be obtained once large numbers of people are vaccinated.
The data that has been collected since the rollout started has provided a clearer picture. As Keeling reported in a research talk hosted by the Isaac Newton Institute last week, latest estimates say that the vaccines offer different levels of protection against different aspects of infection.
Ideally a vaccine would stop people from even becoming infected, but it's also possible that a vaccine merely stops you from developing symptoms. The crucial difference here is that a vaccine that only stops symptoms wouldn't block the onward transmission of the disease. The infection would still be able to circulate in the population and those who can't (or won't) be vaccinated wouldn't be protected.
Although there's still not enough data to be sure, the figures Keeling reported in his talk give grounds for optimism. The vaccines appear to be between 50% and 80% effective at stopping people from even catching COVID-19, and therefore in blocking transmission, and around 90% effective in stopping them from developing severe symptoms.
The effect of the vaccines also increases from the first to second dose. In Keeling's talk he assumed that the level of protection against disease was 70% two weeks after the first dose, rising to 88% two weeks after the second dose.
These percentages are high but they're not maximal, and that's one reason why vaccination alone probably won't eradicate COVID-19. "Because the shield [provided by the vaccines to the individual] is partial, it can be overcome by increases in the level of infection in the general population," says Keeling. In other words, some people will become infected with COVID-19 despite being vaccinated, and the more infection there is in the general population, the more we are likely to see such vaccine failures.
Another thing we couldn't possibly know before the vaccines were rolled out was just how many people would agree to be vaccinated. And here the numbers exceed expectations, in a good way: so far uptake in those over 60 is at 95%.
Are we getting it right?
The work of the JUNIPER team helped to inform how the vaccines are being rolled out, both in terms of the order in which groups of the population are being vaccinated, and in terms of the gap that is left between the two jabs. With both aspects there are different approaches that could have been taken: prioritising people with many social contacts over the old and vulnerable, for example, or aiming for fewer people getting both jabs quickly, rather than more people getting the first jab quickly.
The JUNIPER team began looking at vaccine strategies last summer (see here), using mathematical models that predict the number of cases, hospital admissions and deaths we are likely to see under various assumptions. There's always uncertainty of course, so the outputs of such models can't be regarded as sure-fire predictions. Instead they allow us to explore possible scenarios in an if-this-then-that kind of way. In the case of the vaccine rollout, prioritising the old and vulnerable consistently turned out to be the optimal strategy in the modelling, even as the underlying assumptions were varied. This is basically the strategy that has been adopted in the UK. (See here to find out more about mathematical models used in epidemiology.)
As far as the spacing between doses is concerned, recent work by Hill and Keeling suggests that prioritising the first dose — giving as many people as possible one jab, rather than fewer people both jabs — is generally the best strategy for the vaccines and vaccine capacity we have. The policy adopted in the UK depended on a number of practical considerations — not least the fact that the Oxford/AstraZeneca vaccine is more effective after a 12-week delay between jabs. Hill and Keeling's work shows that the strategy also has benefits in terms of reducing the number of deaths.
Where are we going?
Mathematical models can also be used to explore what might happen in the future. The model employed by the JUNIPER team is detailed, taking into account, for example, the age structure of the population, geography, the B.1.1.7 variant, and who has been vaccinated to date. It's one of the models that also helps scientists calculate the value of the famous reproduction number R every week. "Overall [the model] is supposed to give a fairly complete picture of what has been happening to date, and therefore we hope can give reasonable predictions of what is going to happen going forward in time, although there is always uncertainty with human behaviour," says Keeling.
The first question you might ask is what vaccination does to R — and the answer, according to the model, is that vaccination on its own is not going to get R to below 1. "We're in a situation where vaccination can have a significant impact on R, but in these simulations it's not sufficient to drive R below 1. This is really saying that although the vaccine is doing well to control the disease, it's never going to put us in a situation where we can eradicate it without other controls," says Keeling.
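The rough intuition behind this can be seen with a much simpler back-of-the-envelope calculation than the JUNIPER model (the sketch below assumes a homogeneously mixing population and uses illustrative numbers of our own choosing, not the team's actual estimates):

```python
# Toy calculation: the effective reproduction number when a fraction
# `coverage` of the population has received a vaccine that blocks a
# fraction `efficacy` of infections (homogeneous-mixing simplification;
# the real JUNIPER model accounts for age structure, geography, etc.).
def effective_r(r0, coverage, efficacy):
    return r0 * (1 - coverage * efficacy)

r0 = 3.0  # an illustrative value for R in the absence of restrictions

print(round(effective_r(r0, coverage=0.75, efficacy=0.6), 2))  # prints 1.65
print(round(effective_r(r0, coverage=0.95, efficacy=0.8), 2))  # prints 0.72
```

Even with decent coverage and a vaccine that blocks 60% of infections, this crude estimate leaves R well above 1; only with very high uptake combined with very high infection blocking does it dip below 1, which chimes with the conclusion that vaccination alone is unlikely to be enough.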
This figure shows how the value of R is estimated to change as more doses of the vaccine are administered in the population, in the absence of any restrictions. The four different lines correspond to four different assumptions on how much infection the vaccines prevent, as shown in the key. Figure from this paper by the JUNIPER Team, which has appeared in The Lancet.
The team also simulated the effects of relaxing the rules, under various assumptions. "The first thing we [asked] is, what if we completely relaxed everything in April?" says Keeling. "We end up with an absolutely enormous outbreak leading to a large number of deaths and hospitalisations. Even after four months of vaccination, if we stop all controls we get a disastrous result."
This figure explores what might happen to the daily number of COVID-19 deaths if restrictions were suddenly lifted in April. The different curves correspond to different assumptions about the efficacy of the vaccines at preventing infection, as shown in the key. These simulations assume that 2.5 million doses of vaccine are given each week, that the efficacy is 70% after the first dose and 88% after the second dose, and that uptake is 95% in the over 80s, 85% in the 50 to 79 age group, and 75% in the 18 to 49 age group. Figure from this paper by the JUNIPER team, which has appeared in The Lancet.
Even if we wait until December and then lift all measures, the model suggests, we can still end up with a large and sustained outbreak, unless the vaccines are very good at preventing infection. "Any abrupt change is always likely to precipitate some form of future outbreak," says Keeling.
This figure explores what might happen to the daily number of COVID-19 deaths if restrictions were suddenly lifted in December. The different curves correspond to different assumptions about the efficacy of the vaccines at preventing infection, as shown in the key. These simulations assume that 2.5 million doses of vaccine are given each week, that the efficacy is 70% after the first dose and 88% after the second dose, and that uptake is 95% in the over 80s, 85% in the 50 to 79 age group, and 75% in the 18 to 49 age group. Figure from this paper by the JUNIPER team, which has appeared in The Lancet.
The government's roadmap for coming out of lockdown doesn't involve a sudden relaxation of rules, of course, but takes things step by step. The JUNIPER team also looked at such gradual relaxation strategies and found that, according to their model, the slower the pace the smaller the outbreaks. "A five month relaxation still gives a noticeable peak in deaths, even if [the vaccines offer] an 85% protection against infection," says Keeling. As we move to slower and slower relaxation, the outbreaks become less threatening.
This figure explores what might happen to the daily number of COVID-19 deaths if restrictions are relaxed over a 5-month period (top left), a 10-month period (top right) and a 14-month period (bottom). The different curves correspond to different assumptions about the efficacy of the vaccines at preventing infection, as shown in the key. These simulations assume that 2.5 million doses of vaccine are given each week, that the efficacy is 70% after the first dose and 88% after the second dose, and that uptake is 95% in the over 80s, 85% in the 50 to 79 age group, and 75% in the 18 to 49 age group. Figure from this paper by the JUNIPER team, which has appeared in The Lancet.
"The message from all this is that we need to be cautions," says Keeling. "Slower relaxation always works better and higher levels of infection blocking are needed to be able to control the outbreak."
The figures above were produced in January and February with the information that was available then. It's important to remember that things are developing very fast, with new data on the efficacy of the vaccines becoming available all the time. "Everything I tell you this week is likely to be tweaked and modified next week," Keeling says. However, the general message from the simulations remains the same when the model is run on the latest information — although there is more optimism about how well the vaccine might prevent infection.
In summary, then, the modelling shows that vaccination isn't a silver bullet, but does have a key role to play in helping us come out of lockdown. What exactly is going to happen, the model suggests, will depend critically on exactly how good the vaccines are at blocking infection, how slowly restrictions are released, and how many people agree to be vaccinated. In the longer term, waning immunity and new variants that can evade the vaccine will change this simple picture — allowing further future waves unless we have repeated booster vaccine programmes.
About this article
This article is based on a research talk by Matt Keeling hosted by the Isaac Newton Institute and a recent paper by Keeling together with Sam Moore, Edward Hill, Michael Tildesley and Louise Dyson.
Sam Moore is a postdoctoral research associate who has been working on vaccination modelling for COVID-19 after joining the SBIDER group within the University of Warwick at the start of the pandemic last year.
Edward Hill is a postdoctoral research associate in the SBIDER group at the University of Warwick. He has been a participant in the Scientific Pandemic Influenza Group on Modelling (SPI-M) since April 2020.
Michael Tildesley is a Reader at the University of Warwick. He has been a participant in the SPI-M modelling group since April 2020.
Louise Dyson is an Associate Professor in Epidemiology at the University of Warwick. She has participated in the SPI-M modelling group since April 2020.
Matt Keeling is a professor at the University of Warwick, and holds a joint position in Mathematics and Life Sciences. He is the current director of the Zeeman Institute for Systems Biology and Infectious Disease Epidemiology Research (SBIDER). He has been part of the SPI-M modelling group since 2009 and is a member of the Joint Committee on Vaccination and Immunisation.
Marianne Freiberger is Editor of Plus.
This article was produced as part of our collaborations with JUNIPER, the Joint UNIversity Pandemic and Epidemic Response modelling consortium, and the Isaac Newton Institute for Mathematical Sciences (INI).
JUNIPER comprises academics from the universities of Cambridge, Warwick, Bristol, Exeter, Oxford, Manchester, and Lancaster, who are using a range of mathematical and statistical techniques to address pressing questions about the control of COVID-19. You can see more content produced with JUNIPER here.
The INI is an international research centre and our neighbour here on the University of Cambridge's maths campus. It attracts leading mathematical scientists from all over the world, and is open to all. Visit www.newton.ac.uk to find out more.