### Spoiler Warning

**This essay contains spoilers for my novel Quarantine.
If you haven’t read it, and have any intention of doing so, you will probably enjoy it
more if you read it before you read this essay.**

Quantum mechanics was born early in the twentieth century as a way of dealing with some
puzzling aspects of the behaviour of light and matter. In the process of constructing theories to make sense of what seemed at first to be
a few niggling loose ends in classical physics, a revolution was started that has led to powerful, detailed, predictive models for
describing almost every aspect of the microscopic world. However, while quantum mechanics has been an incontrovertible success,
its ultimate implications remain a matter of controversy. We know how to calculate the behaviour of nuclei and
molecules, lasers and logic gates, but there is still no consensus as to what quantum mechanics tells us about the fundamental nature of reality.
Classical mechanics assumed that reality was more or less the way it appeared to be from everyday experience. Quantum mechanics shows
that it isn’t, but there is still no firm consensus as to exactly what should replace the classical view.

My 1992 novel *Quarantine* centred on a tongue-in-cheek, science-fictional resolution of that controversy, with a hypothesis that was
chosen solely for its technological and existential ramifications, not because I considered it plausible. I said as much in interviews at
the time.
However, the world is full of misinformation about quantum mechanics, and while nobody would mistake *Quarantine* for a
textbook on the subject, over the years I’ve often looked back and winced at some scientific flaws in the novel that go beyond the mere
implausibility of its central premise.

So, the purpose of this essay is to try to state as clearly as I can where
*Quarantine* parts company from reality:
both from the interpretations of quantum mechanics that most physicists consider likely, and from some well-established facts that don’t really
depend on which interpretation is correct.

In classical mechanics every object possesses a definite location;
we might be ignorant of it, or only know it approximately, but this single, definite location exists nonetheless.
In quantum mechanics, a particle such as a photon or an electron will generally exist in a state where it might be
found, with various probabilities, anywhere across a range of locations. It’s important to understand that this is *not* the same as a lack of
precision in our ability to measure things; physicists
routinely prepare photons in states where they might be found either in one location, or in another one several centimetres away — whereas
a measurement of the photon’s position could pin it down with a precision thousands of times greater than that.
Nor is it an example of the kind of probability we might talk about in everyday life; a photon prepared
this way is *not* simply like a pea that we’re told might be under either of two thimbles, but in fact is definitely under just one. In the famous
double-slit experiment, a photon emitted by a light source is confronted by a barrier with two slits, followed by a screen bearing a
strip of photographic film
or an array of electronic detectors. If the photon simply passed through one slit or the
other, after many repetitions of the experiment there would be a bright patch opposite each slit. But what is actually seen is a pattern of bright
and dark stripes that can only be successfully explained if each photon passes through *both* slits, and its two “versions”
interact with each other in the manner of two overlapping waves, reinforcing each other in certain places, and cancelling each other out elsewhere.

We are led to a mathematical framework where the state of a single particle, at any moment in time, is described by assigning a number known as an
**amplitude** to every location where that particle might be. This assignment of amplitudes to locations is known as the particle’s
**wave function**.
Where the amplitude is small, the probability of finding the particle is small; where the amplitude is larger, the probability of finding the particle
is larger. But these amplitudes
themselves are not quite the same as probabilities; they can be negative, or even complex-valued (containing multiples of the square root of minus 1). We
find the probability associated with an amplitude by squaring the absolute value of the amplitude; if an electron has an amplitude of 0.6 of being
found in one place and an amplitude of –0.8 of being found in another, the probabilities of detecting it in those locations are 0.6^{2}=0.36
and 0.8^{2}=0.64 respectively.
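The rule for turning amplitudes into probabilities can be sketched in a few lines of Python; the amplitudes here are just the illustrative values from the example above, not data from any real experiment:

```python
# Born rule: the probability is the squared absolute value of the amplitude.
amplitudes = [0.6, -0.8]                      # illustrative values from the text
probabilities = [abs(a) ** 2 for a in amplitudes]

print([round(p, 2) for p in probabilities])   # [0.36, 0.64]
print(round(sum(probabilities), 2))           # 1.0 -- total probability must be 1

# Amplitudes can also be complex; only the absolute value matters.
a = 0.6 + 0.8j
print(round(abs(a) ** 2, 2))                  # 1.0
```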

What is the point of talking about these amplitudes; why not just quote the
probabilities, and banish those awkward negative and complex numbers? The reason we can’t do that is because situations arise where two or more
amplitudes need to be added, and the resulting probability will generally be quite different from the sum of the individual probabilities.
In everyday life, when there are two distinct ways something can happen, the probabilities *are* added; for example, if
you buy two tickets in a lottery, the probability that you’ll win is doubled.
But in the double-slit experiment, the amplitude for finding a photon at a certain place on the screen might be 0.1 if it passed through the
left slit, and –0.1 if it passed through the right slit; this leads to a total amplitude of zero, and a probability of zero, rather than the
probability
of 0.1^{2}+0.1^{2}=0.02 that we’d get by adding probabilities. In quantum mechanics, alternative ways for the same outcome to arise
are said to “interfere” with
each other: the possibility of something happening can just as easily be diminished as increased by the fact that it can happen in
different ways. We won’t go into the detailed mechanics of amplitudes here, but there is a beautiful book on the subject
by the late Richard Feynman — written for a lay audience, but scrupulously accurate — called
*QED: The Strange Theory of Light and Matter*, which I recommend highly.
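The arithmetic in the double-slit example above is easy to check directly; this is a minimal sketch in Python, using the illustrative amplitudes 0.1 and –0.1:

```python
# Amplitudes for the photon to reach one point on the screen via each slit.
left, right = 0.1, -0.1

# Quantum rule: add the amplitudes first, then square the absolute value.
p_quantum = abs(left + right) ** 2            # complete destructive interference

# "Everyday" rule: square each amplitude, then add the probabilities.
p_everyday = abs(left) ** 2 + abs(right) ** 2

print(p_quantum)              # 0.0
print(round(p_everyday, 2))   # 0.02
```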

Now suppose we know how to calculate the wave function for a quantum particle prepared in a certain way.
The wave function describes a particle whose location is spread out over several centimetres, but when we make a measurement to see where the particle
is, we find it in *just one place*.
What’s more, if we check again (very quickly, so as not to give the particle time to move or spread out), we will find it in the same place again.
Like a tossed coin that’s finally hit the ground head up, or a pea that’s been revealed beneath the first thimble from the left,
the alternative possibilities that we couldn’t
rule out until the measurement was made seem to have gone away. For the coin and the pea this doesn’t surprise us; they could only ever do one thing at a time. But the
whole point of quantum mechanics is that multiple possibilities *must* have co-existed before we made the measurement; if we try to get rid of
that assumption, the theory no longer agrees with our experiments, and all of its spectacular successes in chemistry, electronics, optics and so on are forfeit.

So, what happened to the other possibilities — the other “versions” of the particle? How does the co-existence of
many different values for a particle’s location get to be replaced by a state with a single location?
When, how, and why does the wave function **collapse**?
Here are some of the answers that have been given to this question:

1. Don’t ask stupid questions. Quantum mechanics lets you calculate the probabilities for the various outcomes of any experiment you can describe. What more do you want from a scientific theory? Just shut up and calculate!
2. The wave function is not a real *thing*, the way an ocean wave or a wave on a string is a *thing*. Rather, it describes what we know about a quantum system. When we make a measurement, we get to know something new, so the wave function has to be changed to reflect that. But there is no “objective collapse” — any more than a pea beneath one of three thimbles undergoes some kind of physical change when we see where it is, and we change our probability for its presence there from one in three to 100%.
3. When the particle interacts with the measuring device (or any other macroscopic object on which it has a significant effect) there is an objective collapse of the particle’s wave function.
4. When the measuring device interacts with *an observer*, there is an objective collapse of the combined wave function of the particle and the measuring device.
5. The wave function doesn’t collapse, and all its possibilities continue to exist; we are simply unable to detect them because of the effects of decoherence.
6. Conventional quantum mechanics is incomplete. We need a better theory to truly understand what’s going on.

As readers of *Quarantine* will know, the novel posits a variation on position 4: human beings alone have a special
structure in their brains that actively causes the collapse of the wave function. Only once *a human* has interacted with the quantum system and
perceived a definite outcome are all the other possibilities obliterated.

As I’ve already said, though this hypothesis leads to some entertaining consequences, I don’t actually believe it for a moment. What’s more, I’m not sure how seriously any physicist has ever really taken position 4 at all. With a less anthropocentric definition of “an observer”, this seems to be suggesting that until someone or something possessing the powers of perception and memory has witnessed the measurement, no measurement has really taken place. But the physics of perception and memory really aren’t very different from the physics of other interactions with macroscopic objects, so (except for people advocating some mystical, dualist form of consciousness) this tends to blur into answer 3. If an electron leaves a trail of droplets in a cloud chamber, it has already made its mark on a macroscopic object, and at the level of statistical physics there is really nothing added to the picture when an observer comes along and notices the trail.

In any case, the point I want to stress is that positing a role for *an observer* in the collapse — as opposed to a role for
*any* macroscopic system that is affected by the quantum system — is a marginal view. I suspect part of the reason that it lives
on even now in some
popular accounts of quantum mechanics is actually due to proponents of position 2 being mistakenly lumped in with position 4. People who insist
that the wave function is just a mathematical tool for coping with imperfect knowledge, much like probabilities,
are actually diametrically opposed to those who believe that the
act of observation affects something real out there in the world, but since in both cases the definitive moment of “collapse”
involves a person making an observation, the two positions have sometimes been conflated.

That *Quarantine*’s central premise is far from any mainstream view
of quantum mechanics is excusable; every science fiction novel is entitled to one outrageous hypothesis. But *even if*
that premise were granted, there are events in the novel that would still be impossible.

Broadly speaking, there are two main problems. The first involves a process known as **decoherence**. In the double-slit
experiment, the interference pattern of bright and dark stripes arises because the quantum amplitude for a photon to arrive at
each point on the screen is the sum of two different amplitudes: one associated with the photon reaching that point via
the left slit, and another associated with the photon coming through the right slit. Each of those amplitudes has a **phase**
that depends on the length of the path it has travelled since leaving the light source, which in turn will depend on the
exact location on the screen we are considering. When the phases for the two ways the photon could arrive are the same, the
amplitudes add together to become stronger, a phenomenon known as **constructive interference**. When the two phases
are opposite, the amplitudes cancel, which is known as **destructive interference**. As we move across the screen, the relationship
cycles between those extremes, giving rise to the characteristic pattern.
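As a sketch of how the path-length phases generate the striped pattern — the wavelength, slit separation and screen distance below are hypothetical, chosen only to make the numbers easy to check:

```python
import cmath
import math

wavelength = 500e-9   # hypothetical: 500 nm light
slit_sep   = 50e-6    # hypothetical: 50 micron slit separation
distance   = 1.0      # hypothetical: 1 m from the slits to the screen

def intensity(x):
    """Relative brightness at position x (metres from the centre of the screen)."""
    # Path length from each slit to the point x on the screen.
    r_left  = math.hypot(distance, x - slit_sep / 2)
    r_right = math.hypot(distance, x + slit_sep / 2)
    # Each path contributes a unit amplitude whose phase is set by its length.
    a_left  = cmath.exp(2j * math.pi * r_left / wavelength)
    a_right = cmath.exp(2j * math.pi * r_right / wavelength)
    # Add the amplitudes, then square the absolute value.
    return abs(a_left + a_right) ** 2

# Centre of the screen: equal paths, equal phases, constructive interference.
print(round(intensity(0.0), 6))        # 4.0 (a bright stripe)

# Paths differing by half a wavelength: opposite phases, destructive interference.
x_dark = wavelength * distance / (2 * slit_sep)
print(round(intensity(x_dark), 6))     # 0.0 (a dark stripe)
```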

However, suppose that the photon interacts with some other “stray” particle before it reaches the screen,
in a way that causes that particle’s
behaviour to depend strongly on the direction the photon was travelling. It then becomes necessary to
consider a more complicated system: the *pair* of particles, not just the photon. When the photon reaches a certain point on the screen,
if the stray will be travelling left or right depending on whether the photon went through the left slit or the
right slit, we can no longer add the amplitudes associated with the photon reaching the screen by the two different paths.
Instead, we have to consider two completely separate events: the photon hit the screen and the stray went left, or
the photon hit the screen and the stray went right. We then add the probabilities for those distinct events, not the amplitudes.
(This might sound *ad hoc*, but it all arises naturally out of the basic mathematics of quantum mechanics.)

The result is that the interaction with the stray particle makes the interference pattern vanish, just as if the photon passed solely through the left slit or solely through the right. If we could get hold of the stray then it would be possible to extract information from it to allow us to recover the interference pattern, but if we can’t — if it escapes into the wider environment — then (depending on how strongly it interacted with the photon), the quantum behaviour of the photon itself, in respect to the two possible paths, is lost.
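That bookkeeping can be sketched in a few lines of Python. The amplitudes are the illustrative 0.1 and –0.1 from before, and the “overlap” parameter is my shorthand for how similar the stray particle’s two possible states are, from 1 (identical, no path information carried away) down to 0 (fully distinguishable):

```python
# Amplitudes for the photon to reach one point on the screen via each slit.
a_left, a_right = 0.1, -0.1

def prob_at_screen(overlap):
    """Probability at the screen, for real amplitudes and a real overlap
    between the two possible states of the stray particle."""
    # The cross (interference) term is scaled by the overlap; at overlap = 0
    # it vanishes and we are left adding probabilities, not amplitudes.
    return a_left ** 2 + a_right ** 2 + 2 * a_left * a_right * overlap

print(prob_at_screen(1.0))             # 0.0  -- stray carries no path information
print(round(prob_at_screen(0.0), 2))   # 0.02 -- stray records the path taken
```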

This effect, which is known as *decoherence*, looks just like a collapse of the wave function. (This is why the supporters of
position 5 in our list above believe there is no actual collapse; all it takes to produce the *appearance* of a collapse
is for a quantum system to interact strongly enough with something in its environment.) Decoherence is a major problem for people trying
to build quantum computers, and although there are ingenious strategies for correcting the errors it introduces into computations,
I suspect the scenarios in *Quarantine* are well beyond the reach of those methods. When Nick is wandering about “smeared”,
the fact that he manages to avoid people observing and collapsing him only gets him so far; he is also faced with the problem
that every molecule of air, every particle of dust, that he disturbs is going to respond in a way that is correlated with his actions.
Without the ability to gather up, measure and manipulate all those “stray” parts of the complete quantum system, the different
alternatives for his own actions will no longer exhibit quantum interference, and he will have no way to shift amplitude into his preferred outcome
in order for the collapse, when it comes, to choose that outcome over the others.

The second problem is one I was alerted to by the complexity theorist
Scott Aaronson,
who pointed me to a 1997 paper, “Strengths and Weaknesses of Quantum Computing” by Charles H. Bennett, Ethan Bernstein, Gilles Brassard
and Umesh Vazirani ^{[1]}. The upshot of this paper is that the “naive” notion of a quantum computer as being
effectively equivalent to a large number of classical computers running in parallel is false.

One crucial aspect of quantum computers is that they will be able to manipulate representations of numbers, and other data, in a form that
consists of many different values simultaneously.
In a quantum computer, there is a wave function associated with the hardware that stores and manipulates
data; this wave function assigns an amplitude to every value that the hardware is capable of representing — and there is no reason why the
amplitude can’t be non-zero for many different values at once.
The basic unit of data in a classical computer is one bit, or binary
digit, which takes a definite value of 0 or 1. In a quantum computer, the basic unit of data is a quantum bit, or **qubit**, which can be given any
amplitude you like for having the value 1; the probability for it having the value 0 will then be one minus the probability for it having the value 1.

If a quantum computer works with whole strings, or words, of qubits in the same way that a classical computer works with words composed
of, say, 16 bits, then (in principle, assuming the technical challenges are eventually overcome), it will be able to put those 16-qubit words
into superpositions of all 2^{16} different binary numbers with 16 bits. For example, one 16-qubit word could represent, simultaneously,
all the integers from 0 to 65,535. If this quantum word were then manipulated to perform various arithmetical operations on it, the result would
consist of a superposition of all 2^{16} results of performing that arithmetic on the 2^{16} original numbers.
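A toy state-vector simulation makes that bookkeeping concrete. This is ordinary Python simulating the mathematics, not anything a classical computer could do efficiently for large registers, and the function `f` is just an arbitrary invertible example of mine:

```python
N = 2 ** 16   # number of values a 16-qubit word can represent

# Uniform superposition: the same amplitude for every integer from 0 to 65535.
amp = 1 / N ** 0.5
state = {x: amp for x in range(N)}

# Arithmetic on the superposition: a reversible classical function applied to
# every basis value at once. f(x) = (3x + 1) mod N is invertible, so no two
# branches collide.
def f(x):
    return (3 * x + 1) % N

new_state = {f(x): a for x, a in state.items()}

print(len(new_state))                                     # 65536 branches, all still present
print(round(sum(a * a for a in new_state.values()), 6))   # 1.0 -- probabilities still sum to 1
```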

This sounds like a number cruncher’s dream come true: if you need to find, say, a unique integer less than 2^{16} that satisfies
some complicated equation, then instead of performing thousands of tests on different numbers, one after the other, you simply put your
quantum computer in a superposition of all 2^{16} numbers you need to test, run the test just once, and then ... look to see if any of the
results satisfy the equation.

That last part is where the dream collides with reality. There is no general-purpose method for instantly discovering which,
if any, of the “branches” of the calculation yielded the desired result.
All you have at the end of the calculation is a quantum system in a superposition of thousands of states,
and if you simply measure the state of that system, the probability of observing the one result that tells you something useful is vanishingly small.
You might just as well have run a single classical computer on a randomly chosen input!
There *are* ingenious things that can be done *for particular problems*: approaches that exploit the detailed structure of the
problem to enable a quantum computer to reach a state where it has a high probability of telling you something useful
(Peter Shor’s algorithm
for factoring numbers is the most celebrated example of that). But what the 1997 “BBBV” paper showed was that the naive idea of taking a completely
general problem and expecting a quantum computer to give the answer in the same
manner, and just as rapidly, as if you were dealing with as many classical computers as there are branches to the quantum calculation, is
untenable.
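The point can be simulated directly. In this toy model (my own construction, with an arbitrary target value), the “computation” tests all 2^{16} inputs in superposition with equal amplitudes, and a naive final measurement simply samples one branch at random:

```python
import random

N = 2 ** 16
target = 12345   # hypothetical: the unique input satisfying our equation

def measure_naively():
    """Measure a uniform superposition over all N branches: each branch is
    seen with probability 1/N, so the useful branch almost never turns up."""
    return random.randrange(N) == target

trials = 100_000
hits = sum(measure_naively() for _ in range(trials))
print(hits, "useful outcomes in", trials, "runs")   # roughly one or two, i.e. about 1/65536
```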

I suppose I can’t be blamed for failing to know this result five years before it was proved, but this is fatal for most of Nick’s
quantum feats, which amount to him “smearing”, simultaneously trying every alternative among thousands or millions,
then choosing to collapse to the branch that happened to succeed.
There is one small potential escape clause here: arguably, the assumption that something
in the brain (or indeed any device at all) is capable of objectively collapsing the wave function implies some new physics beyond
conventional quantum mechanics; in that view, positions 3 and 4 in our list are implicitly calling on position 6 as well. Some deviation from
standard quantum mechanics, such as a small non-linearity in the equations, could invalidate the “BBBV” result.
But the bottom line is, Nick’s exploits — even viewed as merely a metaphor for quantum processes — violate this important result,
which almost certainly *does* hold in the real world.

I hope that other aspects of the novel are still worthwhile, and that even the impossible quantum feats are entertaining. But understanding
the reality of quantum mechanics is important, and while we are still a long way from grasping that in its entirety, *Quarantine* definitely
strays too far from things that we know to be true.

### References

[1] “Strengths and Weaknesses of Quantum Computing” by Charles H. Bennett,
Ethan Bernstein, Gilles Brassard and Umesh Vazirani; *SIAM Journal on Computing*, Volume 26, Issue 5 (October 1997), Pages: 1510 – 1523.