Planet of Giants

We have been reduced roughly to the size of an inch.

Atoms are mostly empty space. We’ve known this since 1909, when Ernest Rutherford’s researchers Hans Geiger and Ernest Marsden fired alpha particles at gold foil in an attempt to probe the structure of the atom. To their surprise, most of them went straight through. To their even greater surprise, a few of them bounced straight back. This led in short order to the basic model of the atom that we still use today: a tiny, positively charged nucleus containing almost all the atom’s mass, orbited by negatively charged electrons.

It’s this structure of the atom that makes things the size they are. The direction of the electric force between two charged particles depends on whether the charges are of the same sign, or opposite signs. If they are opposite, the force attracts the particles together: if they are the same, the force pushes them apart. Whether the force is attractive or repulsive, it increases the closer the particles get to one another. So the harder you try to push two electrons together, the more strongly they will try to push each other apart again.

Since the outside of the atom is made of electrons, the same thing happens as you try to push atoms together. And that’s what determines the size of objects in the world around us: the balance between the forces that keep atoms together, and the forces that push them apart.

If you’re going to shrink something, then, you have to change that balance. The obvious way is to simply squeeze the atoms closer together, applying enough pressure to overcome the repulsive electric force between the electrons. That’ll work, provided you don’t mind squishing whatever you’re trying to shrink.

If you want to shrink a person, and not turn them into a compressed pellet of dense goo, you’re going to have to be a bit more sophisticated. Instead of increasing the force pushing the atoms together, you could try to reduce the repulsive electric force, making the atoms naturally huddle up closer to each other.

The strength of the electric force depends on a physical constant called the permittivity of free space, usually labelled ε₀ (epsilon-nought). The permittivity of a material is a number that tells you how it is affected by an electric field, and ε₀ tells you the same thing about space itself. The higher ε₀, the lower the force between charges. So, make ε₀ bigger, electric fields get weaker, and atoms sit closer together. Shrinking accomplished!
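To see the scaling at work, here's a quick Python sketch of Coulomb's law – ε₀ sits in the denominator, so scaling it up scales the force between two charges down in exact proportion:

```python
import math

EPSILON_0 = 8.854e-12  # permittivity of free space, farads per metre
E_CHARGE = 1.602e-19   # charge of an electron, coulombs

def coulomb_force(q1, q2, r, epsilon=EPSILON_0):
    """Magnitude of the force between two point charges, in newtons."""
    return abs(q1 * q2) / (4 * math.pi * epsilon * r**2)

# Two electrons one typical atomic spacing (~0.1 nm) apart
normal = coulomb_force(E_CHARGE, E_CHARGE, 1e-10)
doubled = coulomb_force(E_CHARGE, E_CHARGE, 1e-10, epsilon=2 * EPSILON_0)

print(f"normal permittivity:  {normal:.2e} N")
print(f"doubled permittivity: {doubled:.2e} N")  # exactly half the force
```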

Except… how exactly do you increase ε₀? OK, use some mysterious property of the Tardis, fair enough, but what actually would have to be happening?

Well, the reason why ε₀ is the size it is, the reason why space even has this property of permittivity at all, is still not entirely understood, but it probably goes something like this. In a normal material, made up of protons and electrons, an electric field makes these particles line up to a greater or lesser extent. Every material has its own molecular structure, its own configuration of protons and electrons, and this determines its permittivity. Empty space doesn’t have protons and electrons in it – that’s why it’s called empty space – but it does have something altogether stranger.

According to the theory of quantum fields, particles can pop in and out of existence for microscopic fractions of time. How long these virtual particles can exist for is governed by Planck’s constant, which is a very small number indeed – and that’s why we’re generally unaware of it happening. But it does mean that what we think of as empty space is in fact a boiling sea of virtual particles, appearing and disappearing in the blink of a quantum eye.

When an electric field passes through some part of empty space, these virtual particles line up with it just like the protons and electrons in a real material. If you want to increase the permittivity of free space, you need to have fewer virtual particles about – and that means reducing Planck’s constant.

So there you have it. Something goes wrong with the Tardis, it somehow reduces Planck’s constant in the bodies and clothes of our time travellers, and everybody shrinks. Job done.

Except… there are some complications. Mucking about with these constants doesn’t just bring atoms closer together. It also changes a lot of other things, including the energy levels of the electrons within the atoms. All of chemistry, and therefore all of biology, is determined by these energy levels. Change these, and you screw up every single process within our bodies. The consequences would at least be mercifully brief.

But let’s say you somehow manage to do this in such a way that everything still works, albeit in a miniaturised form. You still have problems. Breathing, for a start. The oxygen atoms in the air will be much too big for the now-miniaturised alveoli in the lungs, which allow the oxygen to pass into the blood. Seeing will be interesting, too. The light receptors in the eyes will now respond to much shorter wavelengths. For a shrinkage factor of 100, which would roughly shrink an adult down to about an inch, the eyes would see, not visible light, but X-rays. This would be pretty cool, except that there are not many X-rays around at the surface of the Earth – the X-rays from the Sun being absorbed by the atmosphere, which is just as well otherwise we’d all die – and so it would be like wandering about in the dark all the time.
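A quick sanity check on that claim, assuming the receptors' response simply scales down with the shrinkage factor:

```python
# If the eye's receptors shrink by the same factor as everything else,
# the wavelengths they respond to shrink with them.
SHRINK = 100
visible_nm = (380, 700)  # rough range of normal human vision

shrunk = [wl / SHRINK for wl in visible_nm]
print(f"Normal vision: {visible_nm[0]}-{visible_nm[1]} nm")
print(f"Shrunk vision: {shrunk[0]:.1f}-{shrunk[1]:.1f} nm")
# 3.8-7.0 nm: the extreme-ultraviolet/soft-X-ray region
```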

And, even more fundamental and even more inescapable than all of these, there is the matter of weight. Pushing all these atoms together doesn’t change their mass. A person who weighs 70 kg normally will still weigh 70 kg after shrinking. Four such people standing on a table would make it collapse, and crossing soft ground would be impossible as every step would just sink deep into the earth.
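The numbers here are easy to run – the foot area is a rough assumed figure:

```python
# Weight is unchanged by the shrinking, but the area under each foot
# scales with length squared. Foot area here is an assumed round number.
G = 9.81           # m/s²
MASS = 70.0        # kg
FOOT_AREA = 0.025  # m² per normal-sized foot (assumed)
SHRINK = 100       # linear shrinkage factor

weight = MASS * G
p_normal = weight / (2 * FOOT_AREA)
p_shrunk = p_normal * SHRINK**2  # area down by 100², pressure up by 100²

print(f"normal: {p_normal / 1e3:.0f} kPa, shrunk: {p_shrunk / 1e6:.0f} MPa")
# ~14 kPa becomes ~137 MPa - ten thousand times the pressure underfoot
```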

All in all, this notion of shrinking people by bringing their atoms closer together doesn’t seem all that clever after all. What happens if we take some of the atoms away instead?

Certainly, removing atoms would shrink a person – though since volume scales with the cube of length, taking away 99 atoms out of every hundred only makes you shorter by a factor of about four and a half; getting all the way down to an inch means keeping just one atom in a million. Either way, you’d have to be a bit careful about it. Removing atoms blindly would end up destroying the delicate internal machinery of the body’s cells, with fatal consequences. The only way this approach might work is if you take away 99% of each type of cell, like shrinking a wall by removing some of the bricks. The bricks will still work and the wall will still stand, just a little shorter.

Bodies, however, are more complicated than walls. I dare say a length of bone or gut might more or less continue to function with 99% fewer cells. Even the retina might be OK, although the shrunken eyeball will not be able to focus, so one way or another the person ends up blind. But what about the brain? The complex connections between neurons are the basis of all our thoughts and memories, not to mention the unconscious processes that keep our bodies working. Remove 99% of these cells, and you destroy virtually all of the brain, right down to the basic functions that regulate breathing and circulation.

And even if all these problems could be overcome, there is still the fundamental issue of mass. It can’t just vanish – the conservation laws won’t allow it. And evidently it doesn’t just hang around, or our miniature heroes would find themselves drowning in great puddles of organic goo. No, the only way to get rid of mass entirely is to convert it to energy.

How much energy? That’s easy to calculate. Einstein’s famous equation E=mc² tells us that mass and energy can be turned into one another, with the conversion factor given by the speed of light squared. This factor of c² works out at just over 20 megatons per kilogram. Shrinking a 70 kg person, therefore, releases just over 1400 megatons of energy. The whole Tardis crew probably amounts to roughly 250 kg, so that’s an energy release of 5000 megatons. (Let’s not even try to figure out the numbers for the Tardis.) This is nearly ten times the size of the entire US nuclear arsenal, and a hundred times greater than Tsar Bomba, the largest bomb ever detonated.
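If you want to check the arithmetic (the text rounds to 20 megatons per kilogram; the exact factor is a shade under 21.5):

```python
# Checking the arithmetic on E = mc². One megaton of TNT is defined
# as 4.184e15 joules.
C = 2.998e8         # speed of light, m/s
MEGATON = 4.184e15  # joules

def mass_to_megatons(mass_kg):
    return mass_kg * C**2 / MEGATON

for m in (1, 70, 250):
    print(f"{m:>4} kg -> {mass_to_megatons(m):,.0f} Mt")
# 1 kg -> ~21 Mt; 70 kg -> ~1,500 Mt; 250 kg -> ~5,400 Mt
# (the round figures in the text use ~20 Mt per kilogram)
```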

To get an idea of what would happen if all this energy were released in, say, London, we can use one of my favourite web apps: Alex Wellerstein’s NUKEMAP.


Blast and heat radii from 5000 MT explosion in London. (Plotted with NUKEMAP.)

The fireball stretches from Croydon to Edgware. Buildings from Dover to Coventry are blasted to rubble. And everything from Newcastle to Paris is on fire.

Admittedly, this is a crude model. NUKEMAP is designed to deal with man-made nuclear weapons, and doesn’t necessarily scale up accurately to such colossal power. It also doesn’t take into account the curvature of the Earth, which would be significant on this scale. Finally, using this model means assuming that the energy is released all at once. Just as loose gunpowder will burn rather than explode, a more gradual energy release would not create such a gargantuan blast. However, it would still dump the same amount of heat into a very small volume of the atmosphere, so the firestorm effects at least would be broadly similar. It certainly puts the environmental threat from Forrester’s dastardly insecticide plot into perspective.

So, to conclude, there doesn’t seem to be any way to make miniaturisation work. The best case scenario has the miniature person blindly choking to death in seconds. The worst case scenario incinerates half of Britain. Perhaps it’s best if we never speak of this again.


The Reign of Terror

You will be guillotined as soon as it can be arranged.

Beheading someone is harder than it looks. Just ask Mary, Queen of Scots. Well, you can’t – she got her head chopped off – but by all accounts it was a messy business. The first axe blow missed her neck, cutting instead into the back of her head. The second stroke was more successful, cutting through most of the neck, but the executioner had to have a third go at it to grind through the last bits of gristle.

This kind of thing was not at all uncommon. Anyone who’s ever had to chop firewood knows that it’s an inexact business at the best of times, and the neck is a pretty thick, solid object to try to cut through in a single stroke.

So it’s not surprising that people developed machines to do the job more efficiently (the option of just not cutting heads off at all evidently being regarded as folly).

The basic idea of a guillotine is simple enough. The amount of force a human can impart to an axe or sword is limited by their musculature. If you attach the blade to a great big weight and drop it, the speed, energy and momentum of the blade are only limited by how high you can drop it from. Add some vertical guide rails and a neck-holding device at the bottom to make sure it hits its target, and away you go.
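To get a feel for the numbers, here's a back-of-envelope sketch – the blade mass and drop height are illustrative guesses, not the specs of any historical machine:

```python
import math

G = 9.81  # m/s²

def falling_blade(mass_kg, drop_m):
    """Energy (J) and impact speed (m/s) of a freely falling blade."""
    return mass_kg * G * drop_m, math.sqrt(2 * G * drop_m)

# Illustrative guesses, not historical measurements:
energy, speed = falling_blade(mass_kg=40, drop_m=2.3)
print(f"40 kg dropped 2.3 m: {energy:.0f} J, hitting at {speed:.1f} m/s")
# ~900 J - a hard two-handed axe blow might manage a few hundred joules,
# and the falling weight wins simply because you can add mass and height
```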

The guillotine is indelibly associated with Revolutionary France, but similar devices precede it by hundreds of years. The Halifax Gibbet was built by the good burghers of that town in response to the growing problem of cloth theft, cloth being a major trade good of the time. No one knows exactly when it was built, though there is apparently a reference to it dating from around 1280, but we know it stayed in use till 1650, when Oliver Cromwell banned the practice. (And when Cromwell thinks your punishments are excessive, it’s time to take a good hard look at yourself.)

It was a simple enough device: two tall vertical runners holding a large, heavy block of wood with an axe head attached to the underside. The hapless thief would be placed with his neck on the block beneath the blade, the executioner would tug on a rope to release the securing pin, and the blade would fall. In a macabre twist, if the conviction was for theft of an animal, the animal in question would be tethered to the gibbet’s rope and then driven off, pulling out the pin. In this way the animal would execute the person who tried to steal it.

The Earl of Morton, a Scottish nobleman and leading opponent of Mary, Queen of Scots, was so impressed with the Halifax Gibbet that he brought the design to Edinburgh, where a more portable version was constructed in 1564. Unlike the English model, this one was flat-packed and stored away in between beheadings. The Scottish Maiden, as it was called, did merry business, taking 150 heads in as many years – including that of Morton himself.

These early devices are basically the same as the classic French guillotine, apart from one technological advance. Both the Halifax Gibbet and the Scottish Maiden have horizontal blades, either straight or slightly curved. The blade on the French version is steeply angled.

To see why, take a knife and cut through something reasonably solid from the fridge – some cheese or meat, say. If you try to cut by pushing the blade straight down, it’s hard – you have to apply quite a lot of force. But if you move the blade horizontally as you cut down, it’s a lot easier. To put it slightly more formally, if the direction of motion of the blade is perpendicular to the blade edge, then the required cutting force is at a maximum: as the angle increases away from the perpendicular, the force required decreases. When Mary’s executioner struck her neck with his axe, the blade would have been nearly perpendicular to its direction of motion. (The kind of axe he used has sadly gone unrecorded. A curved axe blade would have been better than a straight blade in this respect, but would not have made a dramatic difference.) In the end, he had to resort to a sawing motion to finish the job, giving him the benefit of horizontal slicing.

The angled blade of a guillotine achieves this automatically. The larger the angle, the greater the ratio of horizontal to vertical slicing, and the less force needed to achieve the cut. In practice, the angle can only get so large before the whole mechanism becomes unwieldy: more difficult and expensive to manufacture, and requiring an increasingly high vertical drop. So the final design is a compromise between theoretical cutting efficiency and practical engineering.
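Here's a toy geometric model of that compromise – for a blade falling vertically, the velocity component along the angled edge does the slicing and the component perpendicular to it does the chopping, while the price of a steeper angle is extra drop height. The 0.3 m blade width is just an illustrative figure:

```python
import math

def slice_push_ratio(angle_deg):
    """Toy model: for a straight edge angled up from the horizontal and
    falling vertically, the velocity component along the edge (slicing)
    divided by the component perpendicular to it (chopping)."""
    return math.tan(math.radians(angle_deg))

def extra_drop(angle_deg, blade_width_m):
    """Extra vertical travel needed for the whole angled edge to pass."""
    return blade_width_m * math.tan(math.radians(angle_deg))

for angle in (0, 20, 45, 60):
    print(f"{angle:2d} deg: slice/push = {slice_push_ratio(angle):.2f}, "
          f"extra drop = {extra_drop(angle, 0.3):.2f} m")
# 0 deg is a pure chop; steeper angles slice more but need a taller frame
```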

So by the eighteenth century, the French had perfected a machine to carry out swift, efficient beheadings. To see why this would become such an icon of the Revolution, we have to look at the man the machine would eventually be named after: Joseph-Ignace Guillotin. A physician turned politician, Guillotin devoted most of his revolutionary career to medical reform. He proposed that a single method of execution should be used in France, and that it should be the beheading machine. There were two reasons for this: one humanitarian, one political. He thought mechanical decapitation would be a virtually painless form of death, and he wanted all citizens to be treated equally, regardless of class. Hitherto, beheading had been reserved for the nobility, while peasants were generally hanged – or worse.

His proposals were accepted, and the classlessness of the new death penalty was amply demonstrated in the Reign of Terror that followed the Revolution. Tens of thousands were executed for political crimes regardless of class, and often regardless of evidence.

The guillotine remained the standard method of state execution in France right through to the twentieth century. It was also adopted in some parts of Germany, but only saw intensive use there once the Nazis came to power. Hitler, it turned out, was a big fan. The improved German models were shorter, for use indoors, and came with all kinds of handy features: a metal bucket for the head, a spout to direct the blood downwards into a drain, and a forehead strap to keep the victim’s head steady. Over 11,000 people died in the Nazi guillotines, approaching but not matching the figures for the Reign of Terror in France.

It was the swift efficiency of the guillotine that attracted the Nazis to the device, with turn-around times between executions of just a few minutes. Achieving a more humane execution was the last thing on their minds.

And maybe it’s just as well. The guillotine purports to offer an instant death, but from the time of its mass deployment in the French Revolution questions were being asked about how long a decapitated head can live.

There are various tales of guillotined heads remaining apparently alive for many minutes post-execution, but these are mostly apocryphal. The most famous of these is told of the great chemist Antoine Lavoisier, who was sent to the guillotine on charges for which he was posthumously exonerated. It is said that he asked a student to observe his severed head, and that he would continue blinking for as long as he could in order to establish how long consciousness would last. The student watched Lavoisier’s eyes blink for fifteen seconds before they finally closed for good. It is a stirring tale of scientific dedication in the face of terror and injustice, rendered only slightly less compelling by the fact that it appears to have been made up some time in the last twenty years.

The most widely credited testimony comes from Dr Gabriel Beaurieux, who described the experiment he was allowed to perform at the guillotining of Henri Languille. In his account, the severed head remained capable of full eye contact and responsive to Languille’s name being shouted for 25-30 seconds. Even this, though, is under some cloud of doubt, as contemporary photographs of the event are inconsistent with the doctor’s account, and he is not mentioned in the official report.

It would be difficult these days to devise an ethical experiment to sort the matter out once and for all. However, we do know enough about anatomy to be fairly sure that a severed head would have at most a few seconds of consciousness, as the intracranial blood pressure rapidly falls. This is consistent with the more pragmatic observations of British commando pioneers Fairbairn and Sykes, whose table of the effects of severing various arteries in the enemy indicates that, when the carotid artery is cut, unconsciousness occurs in five seconds, and death in twelve. This sets an upper limit on how long a completely severed head could live.

It would be a painful few seconds, and in that respect probably less humane than the long drop hanging technique perfected by Albert Pierrepoint, or the Russian method of shooting in the back of the neck. However, this search for a painless method of execution is a rather artificial exercise. It may be possible to deliver a near-painless death to an animal, but a person condemned to die understands what is to happen to them, and experiences the terror and anguish of death long before the sentence is carried out. Simply to wait for execution is the most wracking torture, regardless of the method of death that is ultimately employed.

There is no such thing as a humane method of execution, and the attempt to create one is really about enabling the people doing the killing to feel better about it. An execution machine like a guillotine is about more than an efficient decapitation: it is about distancing the executioner, the onlookers and the whole of society from the reality of judicial murder. This reaches its ultimate form in the present-day US, with its ritual of killing by lethal injection. The drugs mandated for use in execution are chosen not to minimise the pain of the condemned person, but to give the outward appearance of a gentle slipping away at the cost of actual agony, while the entire process mimics as closely as possible a genuine medical treatment, right down to the redundant swabbing of the skin before the needle is inserted.

The guillotine itself is no longer in use. France executed its last prisoner in 1977. With the death penalty now abolished throughout the EU, and falling out of favour in most of the world outside the US and China, this device of terror is now confined to the museum, as a reminder of less civilised times. Perhaps, in time, the other paraphernalia of state killing will join it.

The Sensorites

We can read the misery in her mind.

It’s all about death. Most things are, when you get down to it.

It’s also something of a historical accident.

The nineteenth century spiritualism craze hit Britain just when science was reshaping itself. Some of this reshaping was institutional. Professional scientific institutions were being established that would transform scientific research from a hobby for learned gentlemen into a career for smart professionals. And some of it was conceptual. The strange phenomena of electricity and magnetism were being systematically investigated and codified, the inner workings of the nervous system were beginning to be exposed, and the full spectrum of light from radio waves to X-rays was opening up.

This created an intellectual environment of strange new forces acting between disconnected bodies as if by magic, of mysterious transmissions through unexplained media, of thoughts and feelings carried by electrical forces. The apparent world became a small circle of light in a darkened vastness, into which the lanterns of science were only beginning to penetrate. Just as geology and evolution opened up great vistas of unknown time, so did physics, chemistry and anatomy reveal that the apparent world is but a small sliver of the full breadth and depth of nature.

So when mediums showed up claiming to be able to speak to the dead, or when thoughts seemed to pass from one mind to another without conscious communication, the intellectual world was primed to conceptualise these phenomena in a new way: not as the work of gods or devils, but as the results of the same unknown forces that enabled electric currents to pass between disconnected wires, radio waves to travel great distances, nerve impulses to cross the synaptic gap.

The pioneering chemist William Crookes was the first to attempt scientific measurements of spiritualist phenomena. When the medium Daniel Dunglas Home appeared to be able to levitate, Crookes carefully measured the force per square inch with a pressure gauge, proving to his own satisfaction at least that there was such a thing as psychic force. The physicist William Barrett conducted experiments on thought transference, and along with scientific colleagues investigated the mind-reading abilities of the five children in the Creery family.

These scientists developed the idea that there was some new force, hitherto unknown to science, that mediated the mental and spiritual realm. This force allowed communication between living minds and between the living and the dead, and could move objects in the physical world. This idea became institutionalised: alongside such respectable establishments as the Royal Society, the Institute of Chemistry and the Society of Telegraph Engineers there was established the Society for Psychical Research.

Make no mistake, these researchers were a minority. The bulk of the scientific establishment dismissed spiritualism as the work of charlatans and mountebanks, and the psychic force as a product of self-delusion and sloppiness. In this, they were entirely right. The fashionable mediums of the day were unmasked or confessed to fraud, Daniel Dunglas Home’s conjuring tricks were exposed, while the Creery children eventually revealed the code-systems they used to communicate.

However, something was lost when these psychic investigations were discredited. To get an idea of what that was, we can look to more recent times, and what was possibly the greatest act of telepathy in human history.

When Queen were on stage at the Live Aid concert in 1985, there was an idea within the mind of Freddie Mercury. That idea was “Freddie Mercury is awesome”. Mercury managed to transmit that idea into the minds of the tens of thousands of people in the audience at Wembley Stadium. That is in itself an impressive feat of telepathy. But thanks to the global satellite broadcast of the event, Mercury was able to transfer this mental construct from his own mind directly into the minds of an estimated one and a half billion people worldwide. And thanks to the video recording being readily available on the internet, Freddie Mercury’s ghost can continue to implant this idea into the minds of millions, long after his death.

The fact that this telepathic influence can be mediated by video recordings tells us something very significant. Whatever it is, it can be encoded in audiovisual data. In other words, it requires no novel or mysterious physical medium, just sound waves and photons. Telepathy, whatever it is, is explicable without any new laws of physics.

Now you may be objecting at this point, saying “That’s not telepathy. That’s just charisma”. Well, yes, it is charisma, and Freddie Mercury was undoubtedly one of the most charismatic men who ever lived. But charisma is just a label for a kind of mental influence that is not at all well understood. We could just as well call it telepathy.

But of course telepathy requires a receiver as well as a transmitter. That’s where the other half of the telepathic equation comes in – empathy. Whether it’s being able to share in another person’s emotions, sense the interpersonal atmosphere in a room, or guess at hidden concerns, empathy involves reaching out to other people and absorbing some part of their thoughts and emotions.

The linking of minds that is at the heart of telepathy happens, in this view, when charisma and empathy both reach out and connect with one another. The more powerful one of these is, the less powerful the other needs to be. The preternatural charisma of Freddie Mercury can reach a vast audience with no particular talent for empathy, while a natural empath can gauge the feelings of people of indifferent charisma.

And it’s empathy and charisma that are the vital components of apparently supernatural cases of mental contact. The stage illusionist Derren Brown repeatedly cites charisma as a vital characteristic for anyone trying to simulate Victorian-style mediumship and spiritualism, whether for entertaining conjuring shows like his own performances or for cruelly fleecing bereaved people out of money by purporting to actually speak to their dead loved ones. Meanwhile the well-meaning souls who attend psychic training schools are effectively given courses in developing their empathic abilities: close listening and sensitivity.

So when we strip away the charlatanry and self-delusion, the phenomena that were investigated by psychical researchers make sense as a combination of charisma and empathy. It is unfortunate that, as official science became established and demarcated, these phenomena ended up in the institutions of psychical research rather than psychology.

The investigation of these phenomena is still geared towards finding some extra force in nature, just as it was back in the nineteenth century. The only difference is that, following trends in physics, the purported mechanisms invoke quantum mechanics rather than electromagnetism – and lest you think that is any more plausible, take a look at the entry for The Keys of Marinus to see how subtle the actual physical processes of quantum action at a distance really are. Meanwhile, rigorous study of thought transference as a mundane psychological phenomenon seems mainly to be done by stage illusionists, who for understandable reasons tend not to write up their investigations in peer-reviewed journals.

We’ve seen how thought transference – telepathy – became separated from mainstream science, but why does that separation exist to this day? What is the resistance to bringing it back into the fold as a mundane, if sometimes baffling, psychological phenomenon? Well, the general air of disreputability that has always hung around this field would explain why the scientific establishment would resist, but there’s a deeper reason. The established psychical investigators are determined to find proof of something beyond normal psychology in these processes, and have been ever since the birth of psychical research, for a profound and powerful reason.

They want to find proof of life after death.

If the human mind can exist in some medium unknown to mainstream science, if it can communicate in some way unbounded by any physiological basis, then there would be some hope that the mind could continue to exist, to experience, to communicate after the physical destruction of the body.

This is why the original psychical investigators got so interested in spiritualism in the first place. It’s what continued to motivate them even as they sought to put some respectable distance between their researches and the charlatanry of mediumship. And it’s why established psychical research still draws in some brilliant and respected scientists at the ends of their careers, as they face the cruel inexorability of old age. The study of telepathy might provide psychologists with new insight into how charisma and empathy work, but it will do nothing to banish the fear of death, or to bring back lost loved ones. And as long as there is some activity with the trappings of science that promises to do just that, there will always be enthusiasts who refuse to let go of the possibility that their experiments might just open the path to a world beyond the grave.

The Aztecs

Three days from today. The moon will pass before the sun and then all will be in darkness.


Total eclipse of the Sun

Astronomy is the oldest science. The remains left behind by ancient civilisations show that they paid close attention to the celestial motions of stars and planets – and with good reason. The celestial sphere is a precise instrument for time-keeping and direction-finding, more constant and reliable than any earthly mechanism. They may have lacked telescopes, but the peoples of ancient Babylon, Egypt, China, Greece and Mesoamerica kept careful records of precise naked-eye observations that gave them a reliable dataset that stretched back centuries.

With so much detailed information on the rising and setting of the Sun and stars, of the phases of the Moon and its motion against the stellar background, of the peculiar wandering stars called “planets”, early astronomers could establish calendars, predict agricultural seasons, devise systems for navigation at sea. Astronomy today may seem remote from practical concerns, but in those days it was a matter of life and death for rulers, their subjects, and whole kingdoms.

Into this serene heavenly clockwork, disruptive influences would suddenly protrude. Meteors, that seemed like falling stars and occasionally brought chunks of heavenly iron to Earth. Comets, signs of ghostly foreboding that presaged great turmoil. But none were more closely studied, and none more feared, than eclipses.

To understand why, we first have to look at the two different kinds of eclipse: lunar and solar. This diagram shows the basic idea – not to scale, of course.


Lunar Eclipse

The Sun is in yellow, and the blue Earth orbits around it. The Moon, shown in white, orbits round the Earth. When the Sun, the Earth and the Moon are all lined up so that the Moon passes through the Earth’s shadow (shown in grey), we have a lunar eclipse. If you were sitting on the near side of the Moon, you would see the Earth pass in front of the Sun, blotting it out for a while. If you’re on Earth, looking up at the Moon, you see something more remarkable: the Moon turning a dark, ominous red.

Why red? Well, as the Sun’s rays pass through the Earth’s atmosphere, they bend a bit. This is refraction – the same effect that makes a stick seem to bend when you put it in water. These light rays illuminate the Moon so it doesn’t completely disappear even though it is cut off from the Sun’s direct light. But these light rays aren’t just bent: they’re reddened. As the light passes through the atmosphere, it scatters off the molecules of nitrogen, oxygen and other gases, and blue light scatters much more than red light. So the blue light tends to scatter away, while the red light keeps going in more or less the same direction. This is why sunsets are red, and it’s why the light striking the Moon during a lunar eclipse is also red. Indeed, the Moon is being bathed in the light from every sunset on Earth.
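The scattering strength goes as the inverse fourth power of the wavelength (Rayleigh scattering), which is why the blue loses out so badly:

```python
# Rayleigh scattering strength scales as 1/wavelength**4.
BLUE_NM, RED_NM = 450, 700

ratio = (RED_NM / BLUE_NM) ** 4
print(f"Blue light is scattered about {ratio:.1f} times more than red")
# ~5.8x: the blue is stripped out, the red carries on to the Moon
```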

If we now switch the alignment round, so that the Earth is in the Moon’s shadow as shown in the next diagram, we have a solar eclipse.


Solar Eclipse

From a suitable vantage point on Earth, the Moon blocks out the light of the Sun. In a curious cosmic coincidence, the apparent size of the Moon as seen from the Earth is almost exactly the same as the apparent size of the Sun. This means the Moon can exactly block out the whole solar disc. When it does so the hot outer atmosphere of the Sun – the corona – that we normally can’t see due to the Sun’s glare suddenly appears in the darkened sky. It’s quite a sight. Meanwhile the Earth appears to darken and cool, until the Moon moves past the Sun and daylight is restored.

But notice that the Moon’s shadow is much smaller than the Earth’s. In the case of a lunar eclipse, the whole Moon could easily fit inside the shadow of the Earth, while in a solar eclipse only a small part of the Earth is covered by the Moon’s shadow. This fact was crucially important for the ancient astronomers, as we shall see in a moment.

Now these noddy diagrams I’ve drawn are rather too simplistic. They show the Sun, Moon and Earth tracing out perfectly circular orbits in a single plane. If that were the case, eclipse prediction would be easy: every Full Moon would be a lunar eclipse, and every New Moon would be a solar eclipse. One of each, every month. Of course, it’s more complicated than that.

The most important factor is that the orbit of the Moon around the Earth is not in quite the same plane as the orbit of the Earth around the Sun: there’s an angle of about five degrees between them. That may not sound like much, but it’s enough that perfect alignments of all three bodies are rare.

Nowadays we can predict eclipses accurately, thanks to Newton’s laws of motion and the theory of celestial mechanics that is built on them. But the ancient astronomers didn’t have that kind of understanding. If they were going to predict eclipses, they would have to do it by detailed, long-term observation of the motions of the Sun and Moon, analysing these long sequences of data to find any regularities that would hold a clue as to when these special alignments would take place.

And that’s just what they did.

The most important of these regularities is the Saros Cycle. This is a period of 223 lunar months: about 18 years, 11 days and 8 hours, and it is the time it takes the Earth, Moon and Sun to orbit around and come back into approximately the same alignment relative to each other. So, if there’s an eclipse today, then there will be another in just over 18 years. Now, eclipses aren’t simply 18 years apart – there are all kinds of other cycles going on as well, which mean more eclipses within that period – but if you observe enough eclipses and use this 18-year trick for each one you can start to build up a reasonable set of predictions.

A big problem with this technique is that the Saros is not a whole number of days. For a lunar eclipse that’s not such a big deal – it may happen eight hours late, but you should still get it on the right night. For solar eclipses, though, it’s more of an issue. Remember that the Moon’s shadow on the Earth is relatively small, only about 100 km across. An eclipse arriving eight hours late falls on some distant part of the Earth, which has rotated a third of the way round in the meantime – so as far as you can tell, there has been no eclipse at all. It’s not all hopeless, though: three times eight is 24, and so three Saros cycles add up to a whole number of days, bringing the solar eclipse back to roughly your neighbourhood.
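The arithmetic is easy enough to check:

```python
# Saros arithmetic: 223 lunar (new-moon-to-new-moon) months.
SYNODIC_MONTH = 29.530589  # mean length of a lunar month, days
saros = 223 * SYNODIC_MONTH

base = 18 * 365 + 4  # 18 calendar years, including 4 leap days
days, frac = divmod(saros - base, 1)
print(f"One Saros   = {saros:.2f} days "
      f"= 18 years, {days:.0f} days, {frac * 24:.0f} hours")
print(f"Three Saros = {3 * saros:.2f} days")
# 6585.32 days = 18 years, 11 days, 8 hours;
# 19755.96 days: the three 8-hour offsets add up to a whole day
```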

In the Western world, it was the Babylonians who first discovered this 223-month cycle, and we have records of their meticulous astronomical observations that survive from the 17th century BCE. By the time we get to the classical Greeks, eclipse prediction has become much more sophisticated. The 223-month cycle is built into the clockwork computer known as the Antikythera Mechanism, and the most famous Greek astronomer, Ptolemy, had a sophisticated method for predicting both lunar and solar eclipses. The Chinese, developing in parallel, also figured out how to predict eclipses and by the third century CE knew how to predict solar eclipses by analysing the motion of the Moon.

It seems a lot of trouble to go to for something that may be interesting, but isn’t obviously useful. Predicting eclipses won’t tell you when to sow your seeds or when to harvest your crop. So why bother?

In the West and in China, it comes down to this idea of omens: that heavenly occurrences foretell earthly events. Which is a load of rubbish, of course, but they didn’t know that. If the regular motions of the stars and planets predict the regular cycles of the seasons, then doesn’t it seem reasonable that irregular celestial events like eclipses foretell irregular events on Earth – sudden calamities and the like? Ancient rulers in particular took this possibility very seriously. This was good news for astronomers seeking funding, but not so great if they didn’t produce the goods, as two ancient Chinese court astrologers discovered when they were beheaded following an unexpected solar eclipse.

But if we want to see sheer cosmic terror in action we have to leave behind the Chinese, set aside the Greeks and go even further west, to Central America – and the Aztecs.

Like all ancient peoples, the Aztecs had a complex cosmology, explaining in mythic terms how the world came to be, how the gods set the Sun and Moon in the sky, the divine purpose behind creation, and so on. What really distinguishes the Aztec cosmology is the sheer amount of blood involved. Blood made the Sun rise in the heavens. Blood made the crops grow in the fields. Blood was the very fuel of the engine of creation, and without an endless cycle of blood sacrifices the Universe would grind to a halt and catastrophe would come to all humanity. It was the place of humans to play their part in this natural cycle, and the ritual killing of the appointed victims was recognised as a supreme moral duty.

Again, like the other great civilisations of the ancient world, the Aztecs were dedicated astronomers, with a calendar based on celestial observations that ordered their society. In fact, they had two. The first was a solar calendar spanning the familiar 365 days, divided into 18 months of 20 days, each with its own set of rituals – bloodletting, sacrifice, flaying of prisoners and so on – plus a special five-day period at the end of the year. Running alongside this was the ritual calendar of 260 days, comprising 20 periods of 13 days, each dedicated to a different god.

These two calendars would march along out of step with each other for the most part, but every 52 years they would coincide and then the cycle would start up again. The Aztecs believed that, at the end of each 52-year period, the gods might decide to end the world. To stave off this disaster, they performed the New Fire ceremony, in which all fires throughout the Aztec realm were extinguished, a man was sacrificed atop the extinct volcano of Huixachtlan, and new fire was kindled on his chest and passed out to all the people. This ceremony was always successful.
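The 52 years drop straight out of the arithmetic – it's the least common multiple of the two calendar lengths:

```python
from math import gcd

SOLAR, RITUAL = 365, 260  # lengths of the two Aztec calendars, days

round_days = SOLAR * RITUAL // gcd(SOLAR, RITUAL)  # least common multiple
print(f"{round_days} days = {round_days // SOLAR} solar years")
# 18980 days = 52 years
```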

Every bit as dangerous were the solar eclipses, which the Aztecs understood as the Moon – depicted in their art as a monstrous deity – attacking the Sun. If this attack were not repelled by suitable rituals of bloody sacrifice, the Sun could disappear for ever and the world come to an end. A solar eclipse at the time of the New Fire ceremony would be particularly terrifying, and even a New Moon around this time, with its potential to turn out to be an eclipse, would be a source of great anxiety.

Whether the Aztecs could have predicted these calamitous events with any reliability is not known. Alas, much of the intellectual material of their empire was destroyed in the Spanish conquest, when the Aztecs were overthrown by a more technologically advanced bunch of blood-soaked religious fanatics. What we do have, from detailed records in surviving codices to precise astronomical alignments of key buildings, suggests a remarkably precise degree of astronomical measurement and clever ways of using alternating whole numbers to express fractions of a day in orbital motions. They certainly had some knowledge of the cycles underlying lunar eclipses, and it is entirely possible that they could have matched or even surpassed the Greeks and Chinese in eclipse prediction. It is unlikely that we shall ever know.

The Keys of Marinus

I wouldn’t think of asking you to travel in such an absurd way.

Teleportation – moving from place to place near-instantly, without having to travel through the intervening space – has long had a hold on the human imagination. From the Arabian Nights to the Ring Cycle, it appears as a magical ability to disappear here and reappear there, and is still invoked in this way by various mystics to this day (as well as, bizarrely, being studied seriously by US military intelligence). Even when it comes into science fiction, it is at first as a mystical or psychic power, whether as John Carter’s sudden trip to Mars or Gully Foyle’s jaunting.

But science fiction inevitably seeks to translate mystical marvels into technological devices, and teleportation is no exception. It’s most famous from Star Trek, of course, and apart from a few sad fans no one knows or cares that Doctor Who got there first.

So how could the technology of teleportation work?

Naively, you could imagine doing teleportation by measuring the position and all the other properties of every particle in the body, then transmitting that information to somewhere else where the body is reassembled. This is the usual explanation of Star Trek-style teleportation. It is, unfortunately, impossible. It’s generally said that this impossibility is due to the Heisenberg Uncertainty Principle, which says that physical variables at the quantum level come in matched pairs, such as position and momentum, and the more accurately you measure one the less accurately you can know the other. This is quite true, and in itself a fatal blow to this model of teleportation (one which later Trek series hilariously handwaved away by invoking “Heisenberg compensators”), but there’s a deeper version of this idea that we need to understand before going on to see how quantum teleportation can work.

In quantum mechanics, systems of particles exist in quantum states, which cannot be measured directly. A single measurement only gives us partial information, and it destroys the quantum state in the process. If you have a load of systems in the same quantum state you can measure all of them, and build up an approximate description of the underlying state – the more of these systems you measure, the more accurate the description. What you can’t do is directly measure the complete quantum state of a single system, such that you could then transmit that information somewhere else and recreate the system.

Quantum teleportation solves these problems, but with some restrictions and subtleties. It involves the use of particles that have been made to interact in some way so that they are each part of the same quantum system, then separated such that they are still part of the same quantum state even though they are some distance apart. This is called entanglement.

Imagine a setup where two people, let’s call them Arbitan and Barbara, share in advance a pair of particles that have been put into an entangled state. Now Arbitan has a third particle, that is in some quantum state of its own. This is the particle he wishes to teleport. By making certain cunningly-contrived measurements on this third particle in conjunction with his entangled particle, Arbitan manages to extract a set of information about his combination of particles, which he sends to Barbara by conventional means. Barbara can then use this information to put her half of the entangled pair into the same state as the particle that Arbitan wanted to teleport. So the net effect is that the quantum state of Arbitan’s particle is destroyed, and transferred to Barbara’s particle. Crucially, it is the complete quantum state that is transferred, not just the partial information that Arbitan could glean by measuring his particle’s quantum state directly. That’s really the clever bit.
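For a single qubit, the whole protocol can be simulated on a classical computer in a few dozen lines. Here's a minimal numpy sketch of the standard textbook scheme, with Arbitan and Barbara in their appointed roles:

```python
import numpy as np

rng = np.random.default_rng(1)

# Single-qubit gates
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    out = np.eye(1, dtype=complex)
    for op in ops:
        out = np.kron(out, op)
    return out

# The state Arbitan wants to teleport: an arbitrary qubit a|0> + b|1>
psi = np.array([0.6, 0.8j])

# Qubit ordering |q0 q1 q2>: q0 = Arbitan's mystery qubit,
# q1 = his half of the entangled pair, q2 = Barbara's half.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)

# CNOT with control q0 and target q1, as an 8x8 permutation matrix
cnot = np.zeros((8, 8), dtype=complex)
for i in range(8):
    q0, q1, q2 = (i >> 2) & 1, (i >> 1) & 1, i & 1
    cnot[(q0 << 2) | ((q1 ^ q0) << 1) | q2, i] = 1

# Arbitan's Bell measurement: CNOT, Hadamard, then measure q0 and q1
state = kron(H, I, I) @ (cnot @ state)
probs = np.array([np.sum(np.abs(state[m << 1:(m << 1) + 2]) ** 2)
                  for m in range(4)])
outcome = rng.choice(4, p=probs)  # two classical bits, (m0 m1)
m0, m1 = (outcome >> 1) & 1, outcome & 1

# Collapse onto the outcome and read off Barbara's qubit
barbara = state[outcome << 1:(outcome << 1) + 2].copy()
barbara /= np.linalg.norm(barbara)

# Barbara's corrections, chosen by the bits Arbitan sends her
if m1:
    barbara = X @ barbara
if m0:
    barbara = Z @ barbara

print("original:  ", psi)
print("teleported:", barbara)  # identical, up to a global phase
```

Note that the two classical bits really do have to make the trip by ordinary means – without them, Barbara's qubit is just random noise.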

An interesting philosophical wrinkle here is that it is not quite right to say that a quantum state is transferred from one particle in Arbitan’s possession to a different particle in Barbara’s possession. Elementary particles are indistinguishable from one another. Electrons aren’t like cars. Even though cars are mass-produced and come in production runs of apparently identical cars, there is a real sense in which my dark blue Vauxhall Astra is not the same as your dark blue Vauxhall Astra, even before they get scratched and grimy and the passenger sides covered in the muddy footprints of our respective spouses. Electrons are different. They don’t have number plates or identifying marks. If we each have an electron, and we swap them, the electrons remain in the same physical state: as far as the laws of physics are concerned, nothing has changed. This is really, really important. The behaviour of matter depends on how electrons and other particles behave as a statistical aggregate, and those statistics become very different if this isn’t true. Among the many, many things that depend on this are semiconductors, such as the chips that drive the computer or phone or whatever device you’re using to read this blog.

The upshot of this, as far as teleportation is concerned, is that there’s no sense in saying “you haven’t teleported the particle, you’ve just transferred its quantum state to another particle far away”. These two things are identical.

We also don’t need to worry about this apparent duplication process giving rise to multiple identical copies. Arbitan’s measurement destroys the quantum information in his version of the system – there is only ever one copy at a time.

There is still the question of – assuming we can scale this up from the spin state of one particle to the entire quantum ensemble of 10²⁹ particles that make up your typical living, breathing human in such a way that the teleported person is still living and breathing at the end of the process – whether the teleported person (let’s call her Susan) is copied as a single, continuous entity or whether she is killed by Arbitan and resurrected by Barbara as a new person with only the memories of the original Susan. The argument that the particles are indistinguishable, so she should just chill out, might not seem so compelling to the Susan in Arbitan’s clutches, as she experiences her quantum information being destroyed. It’s as much a question of philosophy as physics, and it’s philosophers we turn to for an answer.

In a recent survey of 931 philosophers, one of the questions they were asked was precisely this: does teleporting Susan result in her death and the creation of a copy, or her survival in Barbara’s far-off location? The results were as follows:

Survival: 36.2%

Death: 31.1%

Other: 32.7%

I guess that’s why they get paid the big bucks.

Now there are three big restrictions on this kind of teleportation. The first is that Arbitan still has to send the results of his measurements to Barbara before she can perform the teleportation at her end. That’s maybe not such a big deal, but it does mean that you can’t use this to travel faster than light. The second is that Barbara has to have a suitable supply of appropriate particles to complete the teleportation. Easy enough if we’re talking about individual electrons, but quite how you would store and use the raw material for a complete Susan is a trickier question.

The biggest problem of all, though, is that this can only work if Arbitan and Barbara have previously shared between them enough particles in entangled quantum states to do the teleport at all. And each entangled pair is a one-use, disposable item – when they’re gone, they’re gone, and Barbara has to go back to Arbitan the slow way so they can share another batch. This means you can only teleport between pre-arranged locations that have been visited by someone carrying entangled particles from the home station, and these locations need to be resupplied or else they will run out of entangled particles and become useless.

Let’s be honest, it’s starting to sound a bit shit.

Could there be another way?

In the post for An Unearthly Child, we talked about distorting spacetime with the use of exotic matter. We can do something similar for teleportation.

The idea, basically, is to cut out a small region of spacetime at the departure point, and an identical region of spacetime at the arrival point, and join them together so that they become one. You then have a portal in spacetime through which you can simply step from one region into another.

Physicist Matt Visser has done a lot of work on these sorts of traversable wormholes. In one of his papers he lays out a simple design: a cuboidal frame into which the traveller can step and be instantly transported to another place. The edges of the frame are made of exotic matter, and the clever bit is that all the immense stress-energy needed to rupture spacetime in this way is concentrated along these edges: as long as you just step through the faces of the cuboid, you should feel no ill effects.

This is a crucial piece of progress. Most wormholes, such as those that may be created by rotating black holes, subject anyone who comes near them to such overpowering tidal forces that the hapless traveller becomes, in general relativity jargon, spaghettified. Which is about as pleasant as it sounds. If any wormhole is to be actually useful for travel, it must be set up so as to avoid this danger.

That said, it’s still not something that we have any idea how to set up in practice. How to manufacture exotic matter with negative mass is still an open question (though one that we may return to for The Evil of the Daleks), as is the amount of such matter that would be needed to create this frame. Visser’s earlier calculations suggest that making a human-sized frame would require a quantity of exotic matter roughly comparable to the mass of Jupiter, though he reckons he has since come up with a way to do it with much less.

These niggling technical details aside, this kind of travel through wormholes – let’s call it “classical teleportation” – has real advantages over the trendier quantum teleportation. There are no questions of whether you are killed in the process, for a start: you simply step through the portal as if you were stepping through a door, and any philosophical questions about whether you are the same person on the other side of the teleporter become no more pressing than the question of whether the you that gets off a bus is the same as the you that got on it. (In other words, actually quite a tricky philosophical problem if you think about it, but not one that keeps most people awake at night.) Also, we don’t have to worry about continually replenishing the supply of entangled particles to keep the process going: once the wormhole is set up, you can go back and forth as much as you please, and if you want to close it and reopen it somewhere else you just need your original supply of exotic matter.

So perhaps we should assume that the travel dials that Arbitan provides to our time travellers somehow generate a frame of exotic matter that punches a hole in spacetime that opens out onto the destination. To my mind it’s a more pleasing solution: having teleportation work along similar scientific principles to the Tardis gives a pleasing sense of coherence to this science-fictional world. Which, let’s face it, is more than can be said for Terry Nation’s plots.

Marco Polo

We shall all die of thirst.

A body lies in the desert sands. A desiccated corpse, stretched out in the vast, baking emptiness. A lost traveller, found by chance. Found too late.

The body has scant clothing and few possessions. Everything that was not essential long since discarded in the exhausting struggle against the desert heat. Only one precious object remains – a water bottle.

It’s still half full.

This is more common than you might think. People often die of thirst in the desert long before they run out of water. This is because they make the mistake of rationing their water supply. It seems like common sense: you only have so much water, and you want it to last as long as possible. But if you’re sweating water out and not replacing it, you will get more and more dehydrated, and eventually die.

Water isn’t like food. If you ration out your food, you’ll feel hungry, sure, but you can keep going for a very long time while taking in fewer calories than you are expending. Your body just starts using up its reserves, extracting energy from stored fat to make up the difference. You lose weight, but you stay alive. Even when all the fat is gone, your body will keep going by cannibalising its own muscle tissue. In the end, of course, you will die if you don’t get enough food, but if you carefully eke out your remaining supplies you can put that day a long way off.

When it comes to water, you have much less room for manoeuvre. Your body temperature must be kept within a fairly narrow band, within about half a degree of 37 °C. If it gets much higher than this, you begin to suffer heat exhaustion and eventually, if it gets past 40 °C, heatstroke. At this point, you either get emergency medical treatment to cool you down rapidly, or you die. (Getting too cold can be just as dangerous, but we won’t deal with that here.)

There are three main ways a body can lose heat: radiation, convection and evaporation. Of these, there’s not much your body can do about the first two. The rate at which a body radiates heat is (to a good approximation) simply a function of its surface temperature and surface area, and there’s not a lot you can do to change those. Convection is a little more hopeful. This is when your body transfers heat to the air next to the skin, and as the air moves the heat is carried away. A good breeze will help with this, if you can find one, or a fan – although fanning yourself will generate more heat than it carries off. When you’re in the desert, your best bet to maximise convection is to wear loose clothing and hope for the best.

That leaves evaporation. Your body emits droplets of water from the skin, and as these evaporate they carry away heat. Crucially, this is something your body is able to control directly, increasing the rate of water emission in response to heat, so as to keep its core temperature within that narrow band of safety.

In other words, when it’s boiling hot, you sweat buckets.

This brings us to the crucial point. You need to sweat a certain amount to prevent heatstroke, and if you deprive your body of water you deprive it of the means to regulate its temperature. There is no sweat reserve that your body can draw on in an emergency, the way it draws on fat reserves when food is scarce. If you sweat out more water than you drink, you will die pretty quickly. And before you die you’ll suffer the early symptoms of heatstroke, including confusion and disorientation, making it all the harder for you to correct this mistake in time.
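To see how little room for manoeuvre there is, here's a toy heat-balance calculation – the heat loads are illustrative figures, and it assumes all the cooling is done by evaporation:

```python
# Toy heat balance: evaporating sweat is the body's main dump for excess
# heat, and water's latent heat of vaporisation is about 2.4 MJ/kg.
LATENT_HEAT = 2.4e6  # joules per kilogram

def sweat_litres_per_hour(heat_watts):
    """Water that must evaporate per hour to shed a given heat load."""
    return heat_watts * 3600 / LATENT_HEAT

# Illustrative heat loads, not physiological measurements:
for label, watts in [("resting in shade", 100),
                     ("walking in desert heat", 500)]:
    print(f"{label} (~{watts} W): {sweat_litres_per_hour(watts):.2f} L/hour")
# Drink less than you sweat, and the deficit comes straight out of your
# temperature regulation.
```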

So you shouldn’t ration your water, but when you’re out in the desert and you use up your water you’re going to die anyway, so what should you do? Apart from “be somewhere else”, which in fairness is the obvious solution.

The answer is to reduce your body’s need to sweat. That way, you can keep going longer with less water, because your body isn’t using so much to keep its temperature down.

The single simplest way to do this is to rest and sleep during the hot day, in as much shade as you can find or contrive, and do your travelling in the cooler periods of early morning, late evening and night. Keeping your mouth closed as much as possible will help you to retain moisture – one traditional trick is to suck on a small, smooth round pebble. It also helps if you can avoid eating: digestion requires water, and you need to save as much of your water as possible for sweating.

You should certainly avoid the temptation to drink your own urine. Your body will just use up even more water trying to flush out all the excess salts you’ve just consumed. That’s not to say your piss is useless, however. If you can save it up until you are ready to rest for the day, then pee into some small depression and rest on top of it, the damp ground will help to keep you a little cooler.

We don’t see these techniques in use when Marco Polo is dragging our time travellers through the Gobi Desert, and in some ways that’s just as well. The sight of the Doctor settling down for the day in a bed of his own piss might have been educational, but it is unlikely to have been welcomed. Instead, our intrepid heroes manage to survive by extracting water from their surroundings using the phenomenon of condensation.

There is a way you can do this in the real world. It’s called a condensation trap, and it works like this. Dig a decent-sized hole, about a metre across, deep enough that it goes down into damp ground. You can even pee into the hole for extra moisture. Pop a cup down at the bottom of the hole, somewhere near the middle, and cover the hole with a clear plastic sheet. Make sure the sheet is weighted down with stones all around its circumference so as to seal the hole, and place a rock on top of the sheet above the cup. Then wait.

As the sun heats the damp earth, water will evaporate, then condense on the underside of the plastic sheet. It will drip down from the low point created by the rock, and be caught in the cup. At the end of the day, uncover the hole and have a good drink.

It’s a sound enough theory, and popular in survivalist circles, but unfortunately it’s not all it’s cracked up to be. It generates water, sure, but you’ll be doing well to get more than 100 ml or so out of it – and you’ll sweat out more than that digging the damn thing in the first place.
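
The arithmetic of that complaint is easy to check. Taking the 100 ml yield above, and guessing at how much you’d sweat digging a metre-wide hole in desert heat (the digging figures below are assumptions, and generous ones):

```python
# Does a condensation trap pay for itself in water? Illustrative figures.
yield_ml = 100            # optimistic daily yield, as quoted above
digging_hours = 1.0       # assumed time to dig and seal the hole
sweat_ml_per_hour = 500   # assumed sweat rate for hard labour in the heat

cost_ml = digging_hours * sweat_ml_per_hour
print(f"Spent digging: {cost_ml:.0f} ml, recovered: {yield_ml} ml, "
      f"net: {yield_ml - cost_ml:+.0f} ml")
# Net: -400 ml. The trap leaves you worse off than if you'd just sat still.
```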

Still, this seems to have provided the inspiration for the Doctor’s life-saving discovery of condensation in the Tardis. And it also gives us some indication of why that doesn’t seem to make a whole lot of sense. For a start, you need a source of moisture, and it’s not clear where that is coming from in the Tardis. (Viewers of later series might suggest the Tardis swimming pool, but if the Tardis has a swimming pool at this point then why not just drink directly from that?) Secondly, how do you collect this condensation from the Tardis walls? Mop it up with J-cloths and wring it out into a pint mug? All suggestions gratefully accepted.

So if you must head out into the desert, plan ahead to avoid having to resort to these desperate measures. Take enough water for your daily consumption, and enough transport to carry it all. And avoid travelling with sinister villains if you can at all help it. That never goes well.

The Edge of Destruction

What is inside, madam, is most important at the moment

Image

The Belgica, marooned in Antarctic ice

In 1898, the Belgian Antarctic Expedition ship, the Belgica, spent eight desperate months trapped in polar ice. The entire crew became depressed, demotivated, hardly able to work or even to sleep. One man became convinced his crewmates were trying to kill him, and would sleep wedged into a small recess in the ship so as to remain hidden. Another became deaf and mute through psychosomatic illness. Only through the unstinting efforts of the ship’s doctor, Frederick Cook, were the crew able to shake off their maladies enough to blast the ship free of the ice and escape their terrible frozen prison.

Antarctic science is now a well-established part of national research institutions across the globe, and yet with all this professionalism things still go wrong. A study of Antarctic researchers in 1957-8 found that several experienced fugue states, leaving their quarters then coming back to consciousness some time later in another part of the station with no idea how they had got there or what they had been doing. In 1979, one crew member at South Pole station burst into the galley wreaking havoc with a two-by-four, smashing up crockery and his apparent rival for the affections of a female colleague, before charging out berserk into the freezing polar darkness. And there are many more tales that are not in the public record, as you’ll find out if you go for a few beers with an Antarctic scientist.

With the advent of space flight, these breakdowns took on a new importance. The psychological challenges faced by Antarctic researchers, and people in other confined environments such as nuclear submarine crews, have long been used as models for the stresses to be expected in long-term space travel. Since the advent of long-duration space missions on the Russian space station Mir, followed by the International Space Station, psychologists have real data from astronauts and cosmonauts to add to their insights from terrestrial observations about how human beings can cope with extreme isolation.

To be cooped up in a tiny space with a small number of other people, who you may not know well and certainly might not like very much, is bound to be tricky, as even a cursory viewing of the Big Brother franchise will indicate. Really, the remarkable thing is not that people in these environments sometimes crack up – it’s that so few of them do.

Simply being stuck inside a glorified tin can is bad enough. In the early days of the US space programme, the astronauts who were due to fly the Mercury missions insisted that the capsules should have windows. This developed into an almighty tussle with the engineers, who quite sensibly pointed out that windows would weaken the structure and the astronauts didn’t actually have anything to do in flight that would involve seeing outside. But the astronauts won, and became the first Americans to see Earth from orbit. Window time remains a valued necessity on the ISS, and even on submarines crew members are given scheduled periscope time to catch a precious glimpse of the world outside. We humans have a deep need to see the wide world: in one experiment, it was found that even paintings can have psychological benefits to isolated crews, provided they are realistic depictions of spacious landscapes. Antarctic research stations are at least well supplied with windows, but the frequent white-outs at Halley, the British station on the Brunt ice shelf, gave rise to the blank, distant gaze known as the “Halley Stare”.

It’s how people get on in small, isolated groups, though, that really interests the psychologists, and that’s where the biggest problems can lie. Whether at the poles or in space, living and working for months on end with the same few colleagues can foster intense solidarity and friendship – or resentment, bitterness and misery.

The International Biomedical Expedition to the Antarctic was a comprehensive study of how human beings cope in Antarctica, both physically and mentally. It followed twelve men on a 72-day traverse of the polar plateau in French Antarctic territory, with laboratory studies before and after the expedition. On the trip, serious group conflicts and tensions arose: some individuals found themselves ostracised due to nationality, and the observers even had to step in and intervene when the resentments got to the stage of scientists threatening to disrupt their rivals’ experiments. The mutual animosity persisted for many years after the study.

As you may have noticed, this was an all-male group. There were understandable reasons for that at the time – the study required experienced polar researchers, and in those days that was an overwhelmingly male activity – but these days we would expect a mixed-sex crew by default. Whether the presence of females increases or reduces the conflict level within the group depends on one major factor: whether or not the men are sexist arseholes. In one notorious case, a female cosmonaut boarding the Mir space station was greeted by her male colleagues presenting her with a dustpan and brush, with an announcement that she would be doing all the cleaning. As far as I can tell, her response is not recorded.

In less misogynistic teams, female members often play a positive role as mediators and peacemakers within the group, helping to reduce tensions and improving the group’s performance. Indeed, studies in isolation experiments have shown that all-female teams perform at least as well as, and often better than, all-male teams, with more sensitivity to individual concerns and less macho bullshit. Having settled the argument about whether women should be on long-term isolation missions, perhaps we should start asking whether men should.

The size of the crew is also important. A larger group is generally better than a smaller one, as individuals are less likely to find themselves isolated or singled out, and an odd number of members is better than an even number, as it reduces the potential for deadlock in joint decision making. Clear leadership makes a big difference: the leader’s role must be well-defined, with no confusion as to who is in charge, and he or she must make decisions that the group can understand and go along with. Above all, there must be only one leader: one consistent finding is that there are problems if two crew members have a high need for dominance.

All this matters, not only because these people are stuck with each other for an extended period, but because they are in a dangerous environment in which they have to perform complex technical tasks. Individual psychological problems or toxic group dynamics only serve to increase stress. This can cause acute psychological reactions, psychosomatic illness such as fatigue or apparently inexplicable pain, and may end up with people making mistakes under pressure, with serious or even fatal consequences. Keeping busy helps, provided it is meaningful work: it’s when you’re bored that you begin to notice your colleagues’ annoying habits and irritating mannerisms.

Having said all this, severe emotional or behavioural problems are uncommon in astronauts. This is probably because they are highly screened before being allowed to go into space, and those who are unlikely to get on with others don’t make it onto the launch pad. In less highly screened isolated populations, such as Antarctic winterers, severe emotional problems have occurred at a higher rate than in the general population.

But all these isolated environments are still at least within sight of Earth. People are still in touch with home in some fashion, however distant. The psychological impact of being totally cut off is still not understood – but it could be devastating. According to astronauts, the direct visual link to Earth is of immense importance. It is not known what the psychological effect will be of this link being broken for extended periods, such as on a human mission to Mars. In Space Psychology and Psychiatry, Kanas and Manzey speculate: “At a minimum, this experience will add to the feelings of isolation and loneliness within the crew. Beyond that, it seems possible it will induce some state of internal uncoupling from the Earth. Such a state might be associated with a broad range of individual maladaptive responses, including anxiety and depressive reactions, suicidal intention, or even psychotic symptoms such as hallucinations or delusions. In addition, a partial or complete loss of commitment to the usual (Earth-bound) system of values and behavioural norms may occur. This can result in unforeseeable changes in individual behaviour and crew interactions.”

So in the light of all this, how does our Tardis crew stack up in terms of psychological risk?

We have a small, even-numbered group. There are cultural divisions – the mix of males and females is a positive thing, but there are profound differences between the mysterious time travellers and the two school teachers. They have had no training, preparation or screening for their roles, and no testing for compatibility between crew members. They are cut off completely from home, with no way of knowing when they might return. Leadership is erratic, unreliable and untrustworthy, when it is not being actively contested. Only one crew member has any work to do on board, though how much of that is meaningful as opposed to fussing and busywork we don’t know. The ship keeps malfunctioning, and although they are not always confined on board, whenever they do go outside people try to kill them.

It’s a wonder they don’t all crack up.

The Daleks

We know that there are survivors. They must be disgustingly mutated.

Flowers began to grow back in Hiroshima less than a month after Little Boy incinerated the city. But this was no comforting return of nature after humanity’s terrible flash of technological sorcery. The distorted and malformed blooms were a haunting sign that the world would never be the same again.

Both the wielders and the victims of the atom bomb knew about the lethal potential of radiation. Survivors of the blast told lurid tales of the black rain that brought radioactive sludge from the atmosphere back down to earth, and doctors recognised the low white blood cell counts of their dying patients as a symptom of something similar to an X-ray overdose. Babies who were in their mothers’ wombs at the time of the explosion were born with cruel deformities and genetic maladies.

The Americans were keen to play down the radiation story. To be fair, the radiation levels dropped rapidly after the explosion, and fears that Hiroshima might be uninhabitable for decades were swiftly proved to be unfounded. Seizing on this, the US military spin machine presented their atom bomb as just another high explosive, certainly more powerful than any yet created, but not fundamentally different from a stick of dynamite.

They maintained this stance for the best part of nine years, and for all the vague fears among the general public, radiation was mostly seen by solid, no-nonsense types as a relatively minor hazard of warfare in the atomic age. Nuclear fallout was recognised and studied, but with most atomic test explosions taking place high enough off the ground to avoid drawing radiation-blasted soil up into the mushroom cloud, it didn’t seem like a major worry.

Castle Bravo changed all that. Operation Castle was the US attempt to develop a hydrogen bomb that could be practically delivered to the enemy. The preceding programme, Operation Ivy, saw the first ever explosion of a hydrogen bomb in the Ivy Mike detonation. At over ten megatons, this was more than six hundred times more powerful than the Hiroshima bomb, but as an experimental setup – a huge, cryogenically-cooled storage tank – it wasn’t something you could readily drop on Moscow. Castle Bravo swapped the cumbersome liquid deuterium that fuelled Ivy Mike’s fusion explosion for solid lithium deuteride, creating a bomb that could be readily transported – and dropped.

Image

Mushroom cloud from the Castle Bravo detonation

It was detonated on Bikini Atoll on 1 March 1954. The explosive yield was 15 megatons – three times higher than expected, thanks to an incomplete model of the fusion process. The wind had shifted eastward, blowing the radioactive fallout outside the designated zone. The fallout plume spread out over a hundred miles, shrouding inhabited islands in radioactive dust. Most famously, the Japanese fishing vessel Daigo Fukuryu Maru was caught in the plume, radioactive coral debris raining down as white ash. Its 23 crewmen all became seriously ill, and one died. This was too big a calamity for the official US denial machine to brush aside. Along with the dreadful effects on the many islanders and fishermen caught in the fallout, and the strain the event put on US diplomatic relations with Japan and across the wider Pacific, Castle Bravo showed, publicly and undeniably, the far-reaching lethality of nuclear fallout from the new hydrogen bombs.

Not only would nuclear warfare devastate cities and destroy countries, turning nations to rubble in a few hours or days like World War II on fast-forward. It would also poison the soil, contaminate the sea and fill the air with lethal dust, covering the world in a deadly shroud that would linger for years – centuries – millennia. The Earth would become a dead planet.

Which brings us to Skaro.

The dead planet with its petrified forest is Terry Nation’s surreal vision of a planet long since ravaged by nuclear war. If we’re going to understand what happens to our four time travellers once they step out onto the ruined surface, we have to look at exactly why radioactivity is so bad for you.

When we talk about radiation, as in the intangible killer that blighted the survivors of Hiroshima, Castle Bravo and Skaro, we’re really talking about ionising radiation. That is, rays of light or subatomic particles that have enough energy to knock electrons out of atoms when they collide with them. This matters, because chemical processes are all about the interactions between electrons belonging to different atoms, and ionising radiation is radiation that is powerful enough to screw up chemistry. The more complicated the chemistry, the more ways there are to screw it up, and the most complicated chemical phenomenon we know about is life. So, ionising radiation is particularly relevant if you are alive, especially if you plan to stay that way.

Your body is made up of many different kinds of cells, each performing its own specialised function. The effects of ionising radiation depend not just on what kind of cell it hits, but also on whether the cell is killed outright or merely damaged. Large doses of radiation will kill a load of cells at once, leading to radiation sickness, while lower doses can damage the reproductive mechanisms of cells, causing cancers or genetic mutations. The radiation levels that we encounter on Skaro are high enough to give the time travellers acute radiation sickness, while the natives seem to only be suffering the chronic effects of mutations. Evidently cells on Skaro are made of sterner stuff than on Earth.

How susceptible a cell is to radiation damage depends mainly on how quickly it reproduces: the higher the reproduction rate, the greater the chance that the cell will be screwed up by radiation. In our bodies, blood cells reproduce quickly, nervous system cells reproduce slowly, and the cells in your gut are somewhere in between. And right enough, at low (but still damaging) levels of radiation exposure it’s the blood cells that show the first sign of damage. At this stage you just feel fatigued, though if the radiation has affected the skin there may also be sunburn, and hair loss as the hair follicles are damaged. As the dose gets higher, the damage increases and the gastrointestinal cells begin to suffer. First nausea, then vomiting and diarrhoea as the dose level increases. At the highest levels, the central nervous system crumbles, leading to loss of coordination, confusion, coma, shock, convulsions – the sort of symptoms that make vomiting and diarrhoea seem like a blessed condition.
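
As a very rough guide, the progression with whole-body dose looks something like the sketch below. The bands are approximate textbook values in grays, and they vary from source to source, so treat this as illustration rather than medicine:

```python
# Approximate acute radiation syndrome bands (whole-body dose in grays).
# Thresholds are rough and vary between sources; for illustration only.
def acute_symptoms(dose_gy):
    if dose_gy < 0.5:
        return "no acute symptoms likely (long-term cancer risk only)"
    if dose_gy < 2:
        return "fatigue and falling blood counts"
    if dose_gy < 6:
        return "nausea, vomiting, hair loss; survivable with treatment"
    if dose_gy < 10:
        return "severe gastrointestinal damage; death likely"
    return "central nervous system collapse; death certain"

for dose in [0.2, 1, 4, 8, 30]:
    print(f"{dose:>4} Gy: {acute_symptoms(dose)}")
```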

The lower levels of damage can be treated – blood transfusions or bone marrow transplants can provide for a full recovery from blood disorders. If the gut is too badly damaged, however, death is inevitable, and pretty nasty. And if the radiation dose is high enough to take out the central nervous system, there’s not much in the way of medical treatment beyond one last, heavy dose of morphine.

Older people will tend to be more susceptible to radiation sickness, so it’s no surprise that the Doctor is the first to succumb. We can be thankful his symptoms do not progress beyond the first stages of fatigue: the sight of Billy Hartnell shitting his guts out all over Lime Grove Studio D is not one that anyone wants to see on a Saturday teatime.

If you get your radiation dose from fallout, rather than the direct radiation blast from the explosion itself, how much damage it does depends on the precise chemical makeup of the fallout that you breathe in or ingest with your food, as well as the level of radiation it gives off. In the aftermath of a nuclear war, a wide range of radioactive isotopes would be present in the fallout. Project Gabriel, a US Atomic Energy Commission study in the 1950s, determined that the most dangerous isotope would be strontium-90. This isotope emits beta radiation – fast-moving electrons – but what makes it really nasty is where it sits while it’s doing the emitting. Strontium is chemically similar to calcium – it’s directly beneath it in the periodic table – and because of this it is readily absorbed into bones, where it hangs around giving the unfortunate victim bone cancer or leukaemia. It was evidence that levels of strontium-90 in children’s teeth had massively increased due to nuclear testing that convinced President Kennedy to sign the partial test-ban treaty that put an end to above-ground nuclear test explosions.

But one of the major horrors of radiation that we haven’t touched on much yet is mutation. Whether by damaging DNA molecules directly, or by upsetting the mechanisms within the cell that enable DNA to replicate, radiation can make cells and even whole organs develop in strange and unexpected ways. There is ample evidence of this kind of mutation happening in human fetuses, from Hiroshima onwards. Whether a single radiation dose can cause mutations in subsequent generations is a more vexed question. Studies of survivors of the Hiroshima and Nagasaki bombs suggest not, but laboratory studies on mice and fruitflies have found second-generation effects. In any case, to be sure of getting mutations that continue down the generations, you really need the radiation to stick around as a long-lasting environmental feature. The nastiest isotopes of fallout, like strontium-90 or caesium-137, decay with half-lives of the order of tens of years, so after a few generations they would be practically gone. However, there are some fallout isotopes like plutonium-239 and carbon-14 that hang around for thousands, or tens of thousands, of years, and are readily taken up in food and absorbed into the body. These could raise the mutation rate for a very long time indeed.
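
Radioactive decay follows a simple halving rule – after each half-life, half of what was there is gone – so the difference between the tens-of-years isotopes and the long-haul ones is easy to see directly. A quick sketch using the standard published half-lives:

```python
# Fraction of an isotope remaining after t years: (1/2) ** (t / half_life)
HALF_LIVES_YEARS = {
    "strontium-90":    28.8,
    "caesium-137":     30.2,
    "carbon-14":     5730,
    "plutonium-239": 24100,
}

years = 100  # roughly four human generations
for isotope, t_half in HALF_LIVES_YEARS.items():
    remaining = 0.5 ** (years / t_half)
    print(f"{isotope:>14}: {remaining:6.1%} left after {years} years")
# strontium-90 and caesium-137 are ~90% gone after a century;
# plutonium-239 has barely noticed the time passing.
```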

Even so, it is unlikely to produce a race of Aryan supermen in kinky pants. Most mutations are trivial, and most of the non-trivial ones are harmful. These mutations lead to cancers, genetic diseases, disabilities and severely shortened lifespans. So, although we need some mutations to drive natural selection, producing new variations that may be better suited to their environment, too high a mutation rate does not simply give us evolution on fast forward. Rather, it results in the entire population dying out before it has much chance to adapt to anything. Unless, of course, these poor, crippled mutations have the technological capability to build themselves protective cocoons with mobility and manipulation devices that allow them to survive the debilitating effects of genetic degradation. Yeah, that sounds feasible.

But there’s one last twist in the tale, when the Daleks realise they need radiation to survive. This seems an odd notion – we’ve seen how damaging ionising radiation can be to biological tissues. It is not, however, wholly without foundation. At high doses, radiation is just a bad idea and best avoided. The evidence for what effect, if any, radiation has on us at very low dose levels is sparse. If you drop a nuclear bomb on some people, and they all either die or get cancer, that’s a big effect that’s easy to measure. If you give someone a small radiation dose, and they get cancer thirty years later, separating out the effects of the dose from background radiation, passive smoking, pollution and various other carcinogens is pretty hard. So for now we have to extrapolate, and there are two main theories. One is the straightforward linear extrapolation: draw a straight line through the graph, all the way down to zero. The other is the threshold theory, which is that below a certain level of radiation there is no harm done. People involved in radiological protection argue about this a lot: the linear theory is the standard one, and official radiation dose limits are based on it, but if the threshold theory is true then those limits are too conservative and we are throwing away money on over-cautious protection.
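
The two extrapolations are simple enough to state in code. Both are anchored by the same high-dose data; they only disagree where the evidence runs out. The risk slope and threshold below are arbitrary illustrative numbers, not official figures:

```python
# Two competing extrapolations of radiation risk at low doses.
RISK_PER_SV = 0.05   # assumed excess risk per sievert, fixed by high-dose data
THRESHOLD_SV = 0.1   # hypothetical dose below which no harm is done

def linear_no_threshold(dose_sv):
    """Straight line through the graph, all the way down to zero."""
    return RISK_PER_SV * dose_sv

def threshold_model(dose_sv):
    """No harm at all below the threshold."""
    return RISK_PER_SV * max(0.0, dose_sv - THRESHOLD_SV)

for dose in [0.01, 0.05, 0.1, 0.5, 1.0]:
    print(f"{dose:5.2f} Sv  linear: {linear_no_threshold(dose):.4f}"
          f"  threshold: {threshold_model(dose):.4f}")
# The models agree at high doses and disagree exactly where the data runs out.
```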

There is a third, rather more interesting theory. Hormesis is the phenomenon whereby a small amount of something is beneficial, while large doses are harmful. Take any household painkiller, for example – but only as directed on the packet. Since the eighties, there has been something of a cottage industry of scientists trying to establish that radiation might have a hormetic effect, through aiding DNA repair, reducing free radicals or stimulating the immune system. This theory is not widely accepted, and the opinion of official bodies ranges from cautiously interested (France) to patently unconvinced (US). So it’s possible there’s something in it, but there’s a pretty good chance it’s bollocks.

So if you do wind up as the desperate survivor of a nuclear apocalypse, horribly mutated beyond recognition, don’t count on the radiation ever doing you any good, and certainly don’t count on being able to sit around waiting to evolve into something prettier. Instead, get to work building yourself an electric wheelchair with a grabby arm and an eyestalk, and make the best of things. And it’s probably worth sticking some kind of gun on it as well. Just in case.

The Tribe of Gum

The tribe say you are from Orb and when you are returned to him on the stone of death, we will have fire again.

A stone age tribe, struggling for survival. A cave of skulls, imbued with supernatural power. Human sacrifice to the Sun God. A society where political power goes to the man who can make fire. It’s a perilous situation that our time travellers find themselves pitched into after their abrupt flight from London, forced to contend with the superstitious fanaticism of a stone age people.

Human beings are the only animals to have religion. The origins of this idiosyncratic phenomenon are obscure. Although other apes are not religious, they do have social rituals that help to bind their tribe together and create peaceful relationships with other tribes. These range from the ceremonial scrotum-grab of male baboons, to the ritualised group-greeting behaviours of chimpanzees, to the notorious bonobo gang-bangs.

It seems – and we’re never going to get definitive answers on this, so informed speculation is the best we can do – that early humans had ecstatic group rituals of their own, and that these were the first steps towards religion.

Any kind of shared activity can promote group cohesion and bonding. Music, chanting and rhythmic movement all help to build the group identity in the course of the ritual – and these all predated the development of speech. Drugs help, too. There doesn’t need to be any supernatural element. Rock concerts and football matches will do just fine.

These elements persist in modern religions. I still remember the full-on Catholic masses of my youth in St Aloysius Chapel, the great organ resounding around the cavernous, mosaic-covered church, the choir singing, the incense wafting across the congregation as they stepped through the ritual dance of kneeling, standing, genuflecting. And the rituals remain potent even without the theological content: even Richard Dawkins goes carol-singing.

Bonding rituals would certainly have been important in the Paleolithic era, that vast panorama of time that stretches off into the partially-glimpsed origins of our species some hundreds of millennia ago, and which ends around ten thousand years ago with the domestication of plants and animals. Early humanity consisted of small family tribes, thinly scattered across the east of Africa, and their need for rituals to bind their own tribe together and establish peaceful relations with other tribes would have been just as strong as it is for our ape cousins.

But something changed. These rituals became something darker, deeper, more profound. The earliest signs of this are hundreds of thousands of years old – collections of skulls, cracked open in ways that match more recent practices of ritual cannibalism. By eating the brains of the dead, their kin would seek to absorb some of their power and spirit. These skulls also bear the marks of flint knives that show that the flesh was thoroughly removed from them. Later defleshed skulls show signs of staining with red ochre. This naturally-occurring iron oxide is found in the form of a soft rock that can be made into a powder or used directly to make marks like a pencil. It became increasingly used by our ancestors for marking sacred objects and buried corpses, and remains popular to this day in some tribes who use it for body painting.

These early rituals indicate some kind of spiritual attitude concerning the dead, but they are rudimentary compared to the elaborate religious practices found in every human society in the present day. At some point between then and now something changed in human consciousness, and we became a species with the full panoply of supernatural beliefs.

It’s generally reckoned that this change took place around 50,000 years ago. Even with just the fragmentary evidence we have, it seems like a switch suddenly flips in people’s heads, and immediately we have music and art of a recognisably modern form. Indeed, there are etchings on ice-age animal bones showing artistic techniques that seemed revolutionary when Picasso reinvented them in the last century.

The cause of this change is still a matter for speculation. It isn’t linked to any physical change that we can see in fossilised bones: our ancestors were anatomically modern, indistinguishable from ourselves, well before this cultural revolution.

However it happened, we can see in the art our ancestors left behind, in the location of their sacred spaces and in their careful burials of the dead, a new religious sensibility. We can also fill in the gaps by looking at the religious beliefs and practices of modern-day people who live in isolated, tribal societies. You can’t blithely assume that religion has somehow been transmitted unaltered down fifty millennia, but where the practices of people living the closest thing anyone has these days to a Paleolithic lifestyle match up with the fragmentary evidence from the lives of our ancient common ancestors, it would be perverse not to let that inform our speculation.

Figurative art from this period is largely concerned with animals – horses, bison, birds of prey. There are also sculptures of humans – some realistic, some stylised. But some of the most striking art depicts human/animal combinations, such as a man with the head of a lion. This blurring of the boundaries between human and animal is typical of shamanistic religion, and the shaman might well have appeared as a lion-man during rituals, wearing a lion’s head as a head-dress. There is also cave art showing human figures in animal hides, apparently dancing and playing musical instruments, similar to shamanistic rituals that have been observed in Siberia and North America. Bears feature prominently too, and it may be that these animals, which seem so close to human when they walk upright, were considered to be the spirits of dead people.

So we have a picture of a shamanistic religion, with important rituals involving communing with animals and with the realm of the dead. Ritual healing would also be a key part of this. Faith healing is, of course, nothing more than the placebo effect – but when the placebo effect is all you have, it starts to look more attractive. The shaman would use his magic to cure or alleviate pain, from injured limbs to gastric infections to childbirth – pain is a phenomenon of the mind, and thus susceptible to the deployment of placebos. Rituals are a vital part of making the placebo effect work – the patient must believe that the magic will help them, and the ritual sells that belief. In the modern world, while old rituals like acupuncture can still deliver an effective placebo, we also find that many patients will feel their symptoms alleviated by a sugar pill if delivered in a suitably earnest medical context. It’s even been found that the colour of the pill influences the mental effect of the placebo, and a more extreme-looking treatment like an injection with saline solution is a more effective placebo than a benign-seeming pill. We can be sure that the stone age shamans were as expert in enhancing the placebo effect through impressive ritual as our own modern charlatans are today. And in a world without any more effective medicine, the man who could accomplish even that much healing would be powerful indeed. Just don’t go bothering him when a lion’s taken your hand off – in that case, you’re pretty much on your own.

As the millennia pass, ancestor worship becomes the dominant aspect of religion. And with this comes a shift in political power. By analogy with present-day tribes, we can presume that Paleolithic societies were not just egalitarian, but aggressively so. Every man was equal, and any who tried to set himself up above the rest would be cut down – literally.

But ancestor worship provided the means to change this. The man with the greatest ancestors had access to the most powerful spirits, and could lay claim to more temporal power on that basis. This became the foundation for hereditary rule, and hierarchical societies.

So how does the Tribe of Gum fit into this picture? I’m afraid the answer is not very well. The Cave of Skulls does bring to mind the shattered skulls left behind by ritual brain-eaters, but there is little sign of shamanism, let alone the reverence and awe with which our forebears regarded animals. Aggressive egalitarianism has been replaced by the dictatorship of the fire-maker, and ancestor worship is nowhere in sight.

To an extent this is fair enough. We only have physical evidence from a few of our ancestors, and this sorry lot don’t look as though they’re going to be around long enough to leave much. But there’s one thing that really doesn’t fit with any of our understanding of prehistoric religion, and it literally couldn’t be any more glaring.

Sun worship.

It is perhaps surprising how rare sun worship actually is in ancient cultures. The Sun is the most powerful and impressive object in human experience, responsible not only for the cycles of day and night but also for all the processes of growth and development that sustain human life. And yet actual solar religions only appear in a few cultures – Egyptian, Meso-American and Indo-European – and only when these had developed urban civilisations governed by holy kings. In these cases, the Sun as a singular, unapproachable, dominant higher power fits with the ruling ideology – and we might speculate that a greater emphasis on agriculture as the primary occupation of the people led to a greater appreciation of the Sun’s overwhelming power and importance. Paleolithic people never worshipped the Sun – indeed, there is no sign of any attention to astronomy in any of their extant remains. Our Paleolithic ancestors instead venerated – and carved beautiful images of – migratory birds like swans and wild geese, whose comings and goings marked the seasons. It was not until agricultural settlement gave rise to the need to predict and understand the seasons in detail that our ancestors became astronomers, from the Egyptians predicting the flooding of the Nile to the ancient inhabitants of Britain creating Stonehenge.

So if you should find yourself dragged off to Paleolithic times by a silver-haired git in checked trousers, don’t panic. They’re more likely to invite you to a night of dancing and drugs than to attack you for the secret of fire, and they will certainly not strap you to a rock and sacrifice you to the Sun. Just don’t try to explain how they are really your ancestors – that could cause a religious debate that would make the Council of Nicaea look like a parish church tombola.

An Unearthly Child

I thought you’d both understand when you saw the different dimensions inside from those outside


How can a box be bigger on the inside than on the outside?

Let’s look at it another way. Why shouldn’t a box be bigger on the inside than on the outside? And why is Ian Chesterton so agitated about it?

We carry around with us an intuitive model of space and time, a model so apparently obvious and reasonable that no one until the nineteenth century seriously questioned it, and which turned out to be utterly wrong.

In our inbuilt mental model, space is three dimensional. That is to say, the position of anything in space can be specified by three numbers. In your room, you could pinpoint every object by giving the shortest distance to the front wall, the left hand wall, and the floor. Describing geography, you might give latitude, longitude and height above sea level. For the positions of stars in deep space, you might state the right ascension, declination and parallax. Whatever system of coordinates you use, you always need to give three numbers to determine a position. That’s what “three dimensional” means. (If you only need two numbers – a chessboard, say, or a graph on a sheet of paper – then your space is two-dimensional, while a line has only one dimension.)
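
The choice of three numbers is just a labelling convention: any sensible coordinate system can be converted into any other, and the count of three never changes. A small sketch converting latitude, longitude and altitude into Cartesian x, y, z, assuming for simplicity a perfectly spherical Earth:

```python
import math

EARTH_RADIUS_KM = 6371  # mean radius; we pretend the Earth is a sphere

def to_cartesian(lat_deg, lon_deg, alt_km):
    """Three numbers in, three numbers out: same point, different labels."""
    r = EARTH_RADIUS_KM + alt_km
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

# Central London, roughly 51.5 N, 0.1 W, at sea level:
print(to_cartesian(51.5, -0.1, 0.0))
```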

Time, in our intuitive model, is a separate thing. It ticks along at a steady rate, quite independent of where objects are in space. In principle, we could describe the entire history of the Universe as a series of three dimensional snapshots, each taken at a different instant of time.

People believed this without question, without even thinking about it, without even realising they were making assumptions that might be questioned. Then Einstein came along, and proved that this was all bollocks.

Einstein showed that our Universe has four dimensions, not three. Time does not tick along independent of space: different people moving at different speeds will measure distances and durations differently, seeing time turning into space or space into time.

This was a pretty staggering revelation at the time, but there was more to come. During the nineteenth century, various mathematicians had been playing around with alternative forms of geometry. The standard rules of geometry had been laid down by Euclid around 300 BCE, and schoolchildren were still being taught from translations of his Elements, more than two thousand years after that text had been written. This ancient text collected the mathematical knowledge of classical Greece in the form of basic definitions, axioms and postulates, from which all the laws of geometry and other mathematical fields were derived by strict logical reasoning. Most of these basic postulates – the starting points for the whole business – seemed self-evident, but there was one that had been niggling at mathematicians for a while: the Parallel Postulate. This states that two lines which start off parallel will never meet, and it seemed a bit arbitrary. Various people had tried to prove it from the other postulates, without success, and in the nineteenth century some mathematicians decided to try something new and radical: they worked out what geometry would look like without the Parallel Postulate, and came up with the idea of curved space.

It’s not that hard to imagine a curved space, as long as it’s a two dimensional space, like the surface of the Earth. If you’re standing at the equator, and you draw two parallel lines some distance apart, both pointing due north, and then extend these two lines northward for thousands of miles, they will come closer and closer together and eventually, at the North Pole, they will meet. Put that way, it seems kind of obvious, but the conceptual leap required to treat curved spaces as an alternative form of geometry was profound. New terminology developed for these new ideas. A space, of however many dimensions, where parallel lines never meet is a flat, or Euclidean, space. A space where parallel lines meet is positively curved, and one where they diverge is negatively curved: in both of these non-Euclidean geometries the rate at which the parallel lines meet or diverge tells you the degree of curvature.
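
You can even put a number on that convergence. On a sphere, two northward lines that start a given distance apart on the equator are separated at latitude φ by that distance times cos φ. A minimal check, again assuming a spherical Earth:

```python
import math

def separation_km(equator_km, lat_deg):
    """Separation at latitude lat_deg of two due-north lines
    that start equator_km apart on the equator."""
    return equator_km * math.cos(math.radians(lat_deg))

for lat in [0, 30, 60, 89, 90]:
    print(f"latitude {lat:>2}: {separation_km(100, lat):7.2f} km apart")
# 100 km apart at the equator, touching at the pole:
# "parallel" lines that meet.
```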

Now all this had been kicking around for a while before Einstein, but no one thought it had any connection to reality. The Universe was clearly described by the geometry of Euclid, and that was that. But Einstein, in his crowning intellectual achievement, showed that this was all wrong. He had already shown that we live in a four-dimensional world. Now he showed that the four-dimensional geometry of spacetime is not Euclidean, but curved. The degree of curvature depends on how much mass there is in the vicinity: the greater the mass, the greater the curvature. When spacetime curves, objects moving through it follow curved paths, not straight lines. People had observed this weird phenomenon for all of history, but had misunderstood it. Even the great genius Newton, the most brilliant scientist humanity has ever produced, misidentified it as the result of some weird force that acts at a distance. He called it “gravity”. But now we knew better. There is no force of gravity. There is only the curvature of spacetime, which makes objects move together as they move through four dimensions. This is the General Theory of Relativity, and it remains the fundamental theory of space, time, gravity and cosmology in modern physics. It’s been tested, too. Indeed, the GPS system on your phone makes calculations that depend on general relativity, as it determines your position based on signals from satellites moving through the curved spacetime around Earth. If general relativity wasn’t true, your GPS wouldn’t work.
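
The GPS claim is quantitative, and you can check it with the standard weak-field estimates: satellite clocks run fast because they sit higher in Earth’s gravity well, and slow because they’re moving. A sketch of the arithmetic, using round-number constants:

```python
# Daily drift of a GPS satellite clock relative to a clock on the ground
GM = 3.986e14       # Earth's gravitational parameter, m^3 s^-2
C = 2.998e8         # speed of light, m/s
R_EARTH = 6.371e6   # radius of the Earth, m
R_ORBIT = 2.656e7   # GPS orbital radius, m
DAY = 86400         # seconds

# General relativity: clocks higher in the gravity well tick faster
grav = GM * (1 / R_EARTH - 1 / R_ORBIT) / C**2
# Special relativity: a moving clock ticks slower (orbital speed^2 = GM/r)
vel = -(GM / R_ORBIT) / (2 * C**2)

for label, rate in [("gravity", grav), ("velocity", vel), ("net", grav + vel)]:
    print(f"{label:>8}: {rate * DAY * 1e6:+6.1f} microseconds per day")
# Net drift is about +38 microseconds a day. Left uncorrected, GPS position
# errors would grow by several kilometres every day.
```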

So the world is four dimensional. Time is not independent, but is just another coordinate in four dimensional spacetime, like height or latitude. This spacetime is curved by the masses that inhabit it, and there is no such thing as the force of gravity. So much for intuition.

Imagining the real curved, four-dimensional manifold in which we live is basically impossible. Our brains aren’t built for it. That’s why we need all the hard maths. But we can imagine a curved two-dimensional surface embedded in our intuitive three-dimensional space, and that can give us all sorts of insights into the real curved spacetime that we inhabit.

So, imagine a two-dimensional Ian Chesterton (all right, an even more two-dimensional Ian Chesterton). He lives in a two-dimensional world, like the surface of a sheet of paper, and can never leave it.

One day he comes across a box, guarded by an irascible two-dimensional old gentleman. This being a 2-d world, the box is just a square, one side of which can swing open or closed to allow 2-d people to move in and out. 2-d Ian assumes the interior of the square is slightly smaller than its exterior, and is astonished to discover that it is in fact much bigger!

What he doesn’t realise is that someone has carefully cut out the surface on the inside of the square and replaced it with a little tube leading through the third dimension to another sheet, much larger than the area inside the original square. To us, used to the third dimension, it is easy to see what has happened, but to poor Ian it all seems impossible. He walked all round the outside of the square and it just wasn’t this large.

You can do a similar thing in our four-dimensional spacetime. A nice paper by Arvind Borde shows how you can connect two separate surfaces in spacetime via a manifold that acts like a tunnel between them. The focus of Borde’s paper is on 3-d surfaces within 4-d spacetime, but there’s no reason why the same equations shouldn’t work for connecting up two separate 4-d spacetimes. Intriguingly, in that case the connecting manifold would be five-dimensional, perhaps explaining Susan’s odd fixation on the fifth dimension in her science class.

The fact that you can do this in general relativity perhaps isn’t so surprising. The fundamental equation of general relativity, the Einstein equation, can be written in its simplest form as

G = 8πT

where G describes the shape of spacetime, and T describes how matter and energy are distributed in spacetime. 8π is just a constant, so the equation simply says that spacetime curvature depends on mass-energy. (As is often the case, it looks simple because the complexity is hidden. G and T are both tensors – mathematical objects that generalise the concept of a vector – and actually solving this equation is a massive pain in the tits, even in simple cases.)
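
For the curious, here it is with the indices and constants restored – the G = 8πT above is the geometrized-units version, with Newton’s constant and the speed of light both set to 1 (and, in an unfortunate clash of notation, the G on the left is the curvature tensor while the G in 8πG/c⁴ is Newton’s constant):

```latex
% Einstein field equation in conventional units
G_{\mu\nu} \;=\; R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\,T_{\mu\nu}
```

The indices μ and ν each run over the four dimensions of spacetime, so this tidy line is really sixteen coupled equations – ten of them independent, thanks to symmetry.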

Now there are two ways of using this equation. One, the traditional and sensible way, is to pick some realistic mass-energy distribution, plug it into T, then crank the handle, calculate G and use that to describe the spacetime curvature. That’s how you get descriptions of black holes, gravitational lenses, the big bang and all that sort of thing. The other way, less sensible but more fun, is to come up with whatever bizarre shape you would like to contort spacetime into, plug it into G, then calculate from T the mass-energy distribution you need to create your wacky universe. This is how you get fun stuff like time machines and warp drives. The problem with this is that there is no reason for your T to end up being at all physically reasonable, and it usually won’t be. You generally end up needing things like negative-mass particles, which would be less of a problem if anyone had ever observed a negative-mass particle, or had any idea what one might look like, or had any theoretical reason for believing they might exist.

But this is science fiction, and we can safely assume that whatever exotic, dangerous or downright unhealthy kinds of matter and energy you need to create your Borde tunnel and transition to a new spacetime, the Doctor’s people have it by the sackful.

So that’s all fine, but there’s a deeper issue that we’ve so far only touched on implicitly. It’s all very well coming up with scientific models for a box that’s bigger on the inside, but it’s still a jumped-up parlour trick. The real point of the Tardis is that it can travel freely in spacetime. Why should a time machine have weird geometry?

We’ve already seen that there are deep links between space, time and geometry. Einstein’s theory of curved spacetime is a geometric theory of gravity, and as we shall see in more detail when we get to The Space Museum, travelling freely throughout spacetime requires manipulating the geometry of the Universe in very particular ways. So it’s no surprise that a civilisation that can build machines for spacetime travel can also make them bigger on the inside than the outside. Indeed, it would be quite odd if they couldn’t.

And this, perhaps, is what has got Mr Chesterton so worked up. He can’t imagine how anyone could have achieved this incredible feat of spacetime engineering, but he knows that, if it isn’t just a conjuring trick, the implications go far beyond revolutionising interior design. If you can make a box bigger on the inside than the outside, you have technology that gives you complete power over space and time.

Assuming, of course, it doesn’t break down.