I would say that he is elderly. I think that—obviously he’s not English. He has a rounder face than most of the English people. And I should say he’s probably continental, if not eastern continental. The lines in his face would be lines of possible agony. I thought at first there were scars. It’s not a happy face.
One aim of the physical sciences has been to give an exact picture of the material world. One achievement of physics in the twentieth century has been to prove that that aim is unattainable. Take a good concrete object: the human face. This is the face of Stefan Bor-Grajewicz, who, like me, was born in Poland. Here it is seen by the Polish artist Feliks Topolski. We are aware that these pictures do not so much fix the face as explore it, that the artist is tracing the detail almost as if by touch, and that each line that is added strengthens the picture but never makes it final. We accept that as the method of the artist. But what physics has now done is to show that that is the only method to knowledge.
There is no absolute knowledge. And those who claim it, whether they’re scientists or dogmatists, open the door to tragedy. All information is imperfect. We have to treat it with humility. That’s the human condition, and that’s what quantum physics says. I mean that literally. Look at the face across the whole spectrum of electromagnetic information.
The question I’m going to ask is: how fine and how exact is the detail that we can see with the best instruments in the world—even with a perfect instrument, if we can conceive one? And seeing the detail need not be confined to seeing with visible light. The spectrum of visible light from red to violet is only an octave or so in the range of invisible radiations. There is a whole keyboard of information, all the way from the longest wavelengths of radio waves, the low notes, to the shortest wavelengths of X-rays and beyond, the highest notes. We will shine it all, turn by turn, on the human face.
The longest of the invisible waves are the radio waves, whose existence Heinrich Hertz proved nearly a hundred years ago. Because they are the longest, they’re also the crudest. A radar scanner, working at a wavelength of a few meters, will not see the face at all unless we make the face also some meters across. Only when we shorten the wavelength does any detail appear on the giant head. At a fraction of a meter, the ears. And at the limit of radio waves, a few centimeters, the first trace of the man beside the statue.
We are now looking at the face, the man’s face, with a camera which is sensitive to the next range of radiations, less than a millimeter: infrared. The astronomer William Herschel discovered that in 1800 by noticing the warmth when he focused his telescope beyond red light. The infrared rays are heat rays. The camera plate translates them into light, making the hottest look blue and the coolest look red or dark. We see the rough features of the face: the eyes, the mouth, the nose. See the heat stream from the nostrils. We learn something new about the human face, yes, but what we learn has no detail.
At its shortest wavelength, some hundredths of a millimeter, infrared shades gently into visible red. The film is sensitive to both, and the face springs to life. It’s no longer a man, it’s the man we know. White light reveals him to the eye in detail. The small hairs, the pores in the skin, a blemish here, a broken blood vessel there.
White light is a mixture of wavelengths from red to orange to yellow to green to blue and to violet, the shortest visible waves. We ought to see more exact details with the short violet waves than the long red waves, but in practice a difference of an octave or so doesn’t help much.
The painter analyzes the face, takes the features apart, separates the colors, enlarges the image. It’s natural to ask: shouldn’t the scientist use a microscope? Yes, he should. But we ought to understand that the microscope enlarges the image, but cannot improve it. The sharpness or detail is fixed by the wavelength of the light. Here’s an enlargement of over 200 times, and it can single out an individual cell in the skin. But to get more detail, we still need a shorter wavelength.
The next step, then, is ultraviolet light, which has a wavelength of a ten-thousandth of a millimeter and less, shorter than visible light by a factor of ten or more. If our eyes were able to see into the ultraviolet, they would see this ghostly landscape of fluorescence. The fact is that, at any wavelength, we can intercept a ray only with objects about as large as the wavelength itself. A smaller object simply will not cast a shadow. The ultraviolet microscope looks into the cell, enlarged 3,500 times to the level of single chromosomes, but that’s the limit. No light will see the human genes within a chromosome.
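The rule that runs through this whole survey, that a wave can resolve only features about as large as its own wavelength, can be sketched numerically. The wavelengths below are representative round figures, not the film’s exact values:

```python
# Rough resolution limits across the electromagnetic spectrum.
# Rule of thumb from the text: a wave resolves features no smaller
# than roughly its own wavelength. Values are representative.
bands = {
    "radar (radio)":    1e-1,   # wavelengths in meters
    "infrared":         1e-4,
    "visible (violet)": 4e-7,
    "ultraviolet":      1e-7,
    "X-ray":            1e-10,
}

for name, wavelength_m in bands.items():
    print(f"{name:18s} ~{wavelength_m:8.1e} m  ->  "
          f"smallest visible detail ~{wavelength_m:8.1e} m")

# The visible-light limit (~4e-7 m) is why an optical microscope can
# single out a skin cell (~1e-5 m) but can never see a gene.
assert bands["visible (violet)"] < 1e-5   # a cell is resolvable
assert bands["visible (violet)"] > 1e-8   # molecular detail is not
```

This is why the radar scanner needs a giant head, and why the story has to move step by step down the keyboard toward X-rays and electrons.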
Next, the X-rays. They can’t be focused; we cannot build an X-ray microscope, so we must be content to fire them at the face and get a sort of shadow. The detail depends now on their penetration. We see the skull beneath the skin—for example, that the man has lost his teeth. This probing of the body made X-rays exciting as soon as Röntgen discovered them. He was the hero who won the first Nobel Prize in Physics, in 1901. A lucky chance in nature will sometimes let us do more. We can map the atoms in a crystal because their spacing is regular. This is the pattern of atoms in the DNA spiral. This is what a gene is like. The method was invented in 1912 by von Laue and was the first proof that atoms are real.
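Why regular spacing helps can be put in one line, the condition written down by the Braggs in 1913, just after von Laue’s discovery: X-rays reflected from successive planes of atoms a distance $d$ apart reinforce one another only at angles $\theta$ satisfying

```latex
n\lambda = 2d\sin\theta, \qquad n = 1, 2, 3, \dots
```

Because the spacing $d$ repeats across the whole crystal, these reinforcements show up as sharp spots, and from the spots the spacing, and so the arrangement of the atoms, can be read back.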
We have one step more, to the electron microscope, where the rays are so concentrated that we no longer know whether to call them waves or particles. Electrons are fired at an object and they trace its outline like a knife thrower at a fair. This is the smallest object that’s ever been seen: a single atom of thorium. It’s spectacular and yet even the hardest electrons do not give a hard outline. The perfect image is still as remote as the distant stars.
We are here face to face with the crucial paradox of knowledge. Year by year we devise more precise instruments with which to observe nature with more fineness, and when we look at the observations they are as uncertain as ever. We seem to be running after a goal which lurches away from us to infinity every time we come within sight of it.
Let me put it in the context of an astronomical observatory. This is the observatory that was built for Carl Friedrich Gauss in Göttingen. Throughout his lifetime, and ever since, astronomical instruments have been improved. We look at the position of a star and it seems to us that we are closer and closer to finding it precisely. But when we compare our individual observations, we are astonished and chagrined to find them as scattered within themselves as ever. We had hoped that the human errors would disappear and that we ourselves would have God’s view. But it turns out that the errors can’t be taken out of the observations. And that’s true of stars or atoms, or just looking at somebody’s picture, or hearing the report of somebody’s speech.
Gauss recognized that, with that marvelous boyish genius that he had, right up to the age of nearly eighty, at which he died. When he was only eighteen years old, when he came here to Göttingen to enter the university, he had already solved the problem of the best estimate of a series of observations which have internal errors. When an observer looks at a star, he knows that there’s a multitude of causes for error. So he takes several readings, and he hopes, naturally, that the best estimate of the star’s position is the average, the center of the scatter. So far so obvious. But Gauss pushed on to ask what the scatter of the errors tells us. He devised the Gaussian curve, in which the scatter is summarized by the deviation or spread of the curve. And from this came a far-reaching idea. The scatter marks an area of uncertainty. We are not sure that the true position is the center, all we can say is that it lies in an area of uncertainty. Gauss was particularly bitter about philosophers who claimed that they had a road to knowledge more perfect than that of observation.
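Gauss’s procedure, take the average as the best estimate and the spread of the scatter as the area of uncertainty, can be shown in a few lines. The readings below are hypothetical, invented for illustration:

```python
import statistics

# Hypothetical repeated readings of a star's position (arbitrary units);
# each observation carries its own error, as Gauss assumed.
readings = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03, 9.99, 9.95]

best_estimate = statistics.mean(readings)   # center of the scatter
spread = statistics.stdev(readings)         # deviation of the Gaussian curve

print(f"best estimate: {best_estimate:.3f}")
print(f"area of uncertainty: +/- {spread:.3f}")

# All we may claim is that the true position lies somewhere within
# this band, not that it sits exactly at the center.
```

The point of the Gaussian curve is precisely that the spread is not a nuisance to be wished away: it is part of what the observations tell us.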
Of many examples I will choose one. It happens that there is a philosopher called Friedrich Hegel, whom I must confess I specifically detest. And I’m happy to share that profound feeling with a far greater man—Gauss. In 1800 Hegel published a thesis, if you please, proving that, although the definition of planets had changed since the ancients, there still could only be philosophically seven planets. Well, not only Gauss knew how to answer that, Shakespeare had answered that long before. There is a marvelous passage in King Lear in which who else but the fool says to the king, “The reason why the seven stars are no more than seven is a pretty reason.” And the king wags sagely and says, “Because they are not eight.” And the fool says, “Yes, indeed, thou wouldst make a good fool!” And so did Hegel. On the first of January 1801, punctually before the ink was dry on Hegel’s dissertation, the eighth planet—Ceres—was discovered.
History has many ironies. The time bomb in Gauss’s curve is that after his death we discover that there is no god’s eye view. The errors are inextricably bound up with the nature of human knowledge. And the irony is that the discovery comes here, in Göttingen.
Ancient university towns are wonderfully alike. Göttingen is like Cambridge in England or Yale in America: very provincial, not on the way to anywhere. No one comes to these backwaters except for the company of professors. And the professors are sure that this is the center of the world. There’s an inscription in the Ratskeller here: Extra Gottingam non est vita—“Outside Göttingen there is no life.” The symbol of the university is the barefoot goose girl that every student kisses at graduation. The university is a mecca to which students come with something less than perfect faith. It’s important that students bring a certain ragamuffin irreverence to their studies. They’re not here to worship what is known, but to question it. Like every university, the Göttingen landscape is crisscrossed with long walks that professors take after lunch. And the research students are ecstatic if they’re asked along.
Perhaps Göttingen in the past had been rather sleepy. The small German university towns go back to a time before the country was united. And this gives them a flavor of local bureaucracy. Even after 1918 they were more conformist than universities outside Germany. The link between Göttingen and the outside world was the railway. That was the way the visitors came from Berlin and abroad, eager to exchange the new ideas that were racing ahead in physics. It was a byword in Göttingen that science came to life in the train to Berlin. Because that’s where people argued, and contradicted, and had new ideas, and had them challenged, too.
In the years of the First World War, science was dominated—at Göttingen, as elsewhere—by relativity. But in 1921 there was appointed here Max Born, who began a series of seminars that brought everyone interested in atomic physics here. It was rather strange. Max Born was almost forty when he was appointed. By and large, physicists have done their best work before they’re thirty, mathematicians much earlier, biologists perhaps a little later. But Born had a remarkable personal Socratic gift. He drew young men to him. He got the best out of them, and the ideas that he and they exchanged and challenged also produced his best work. Out of that wealth of names, whom am I to choose? Obviously Werner Heisenberg, who did his finest work here with Born. Then, when Erwin Schrödinger published a different form of basic atomic physics, here is where the arguments took place. And from all over the world people came here.
It’s rather strange to talk in these terms about a subject which, after all, is done by midnight oil. Did physics in the 1920s really consist of argument, seminar, discussion, dispute? Yes it did, yes it still does. The people who met here, the people who meet in the laboratory still, only end their work with the mathematical formulation. They begin it by trying to solve the riddles of the subatomic particles—of the electrons and the rest. Think of the puzzles that the electron was setting just at that time. On Mondays, Wednesdays, and Fridays it would behave like a particle. On Tuesdays, Thursdays, and Saturdays it would behave like a wave. How could you match those two aspects, brought from the large-scale world and pushed into a single entity, into this Lilliput, Gulliver’s Travels world of the inside of the atom? That’s what it was about. And that requires not calculation, but insight, imagination—if you like, metaphysics. I remember a phrase that Max Born used when he came to England many years after, and that still stands in his autobiography. He said, “I am now convinced that theoretical physics is actual philosophy.”
Max Born meant that the new ideas in physics amount to a different view of reality. The world is not a fixed solid array of objects out there. It shifts under our gaze, it interacts with us, and the knowledge that it yields has to be interpreted by us. There is no way of exchanging information that does not demand an act of judgment. Is the electron a particle? It behaves like one in the Bohr atom. But de Broglie, in 1924, produced a beautiful wave model in which the orbits are the places where an exact whole number of waves closes around the nucleus.
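De Broglie’s closure condition can be written in one line: an orbit of radius $r$ is allowed only when a whole number $n$ of wavelengths $\lambda = h/mv$ fits around it, which recovers Bohr’s quantized angular momentum:

```latex
n\lambda = 2\pi r, \qquad \lambda = \frac{h}{mv}
\quad\Longrightarrow\quad mvr = n\,\frac{h}{2\pi}, \qquad n = 1, 2, 3, \dots
```

The whole numbers $n$ label the permitted orbits, and nothing in between is allowed: the wave must close on itself.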
Max Born thought of a train of electrons as if each were riding on a crankshaft, so that collectively they constitute a series of Gaussian curves, a wave of probability. A new conception was being made—on the train to Berlin, and the professorial walk in the woods of Göttingen—that whatever fundamental units the world is put together from, they are more delicate, more fugitive, more startling than we catch in the butterfly net of our senses.
All those woodland walks and conversations came to a brilliant climax in 1927. Early that year, Werner Heisenberg gave a new characterization of the electron. “Yes, it is a particle,” he said, “but a particle which yields only limited information.” That is: you can specify where it is at this instant, but then you cannot impose on it a specific speed and direction of setting off. Or conversely: if you insist that you’re going to fire it at a certain speed in a certain direction, then you cannot specify exactly what its starting point is, or, of course, its end point.
That sounds like a very crude characterization. It is not. Heisenberg gave it depth by making it precise. The information that the electron carries is limited in its totality. That is, for instance, its speed and its position fit together in such a way that they are confined by the tolerance of the quantum. That’s a profound idea—one of the great scientific ideas, not only of the twentieth century, but of the history of science. Heisenberg called this the principle of uncertainty. In one sense, it’s a robust principle of the everyday. We know that we cannot ask the world to be exact. If an object, a face, had to be exactly the same before we recognized it, we should never recognize it from one day to the next. In the act of recognition a judgment is built in: an area of tolerance or uncertainty. So Heisenberg’s principle says that no events, not even atomic events, can be described with certainty, with zero tolerance. What makes the principle profound is that Heisenberg specifies the tolerance that can be reached. The measuring rod is Max Planck’s quantum. In the world of the atom the area of uncertainty is always mapped out by the quantum.
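Heisenberg’s tolerance is usually stated as the inequality $\Delta x \,\Delta p \ge \hbar/2$. A quick numerical sketch, with rounded constants and illustrative numbers of my own choosing, shows how Planck’s quantum sets the scale:

```python
import math

H = 6.626e-34             # Planck's constant, J*s (rounded)
HBAR = H / (2 * math.pi)  # reduced Planck constant

# Suppose we pin an electron's position down to the size of an atom:
delta_x = 1e-10           # meters (about one angstrom)

# The tolerance of the quantum then forces a minimum spread in momentum:
delta_p_min = HBAR / (2 * delta_x)
print(f"electron: minimum momentum spread {delta_p_min:.2e} kg*m/s")

# For an everyday object the same bound is utterly negligible,
# which is why the principle only bites in the world of the atom.
delta_x_ball = 1e-3       # locate a ball to within a millimeter
print(f"ball: minimum momentum spread {HBAR / (2 * delta_x_ball):.2e} kg*m/s")
```

For the electron the forced momentum spread is comparable to the momenta inside the atom itself; for the ball it is some twenty orders of magnitude below anything measurable.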
Yet the “principle of uncertainty” is a bad name. In science or outside of it, we are not uncertain. Our knowledge is merely confined within a certain tolerance. We should call it the “principle of tolerance.” First, in the engineering sense: science has progressed step by step—the most successful enterprise in the ascent of man because it has understood that the exchange of information between man and nature, and man and man, can only take place with a certain tolerance. But I also use the word, passionately, about the real world. All knowledge, all information between human beings, can only be exchanged within a play of tolerance. And that’s whether it’s in science, or in literature, or in religion, or in politics, or in any form of thought that aspires to dogma. It’s a major tragedy of my lifetime and yours that, here in Göttingen, scientists were refining to the most exquisite precision the principle of tolerance, and turning their backs on the fact that all around them tolerance was crashing to the ground beyond repair.
The sky was darkening all over Europe, but there was one particular cloud which had been hanging over Göttingen for a hundred years. Early in the 1800s, Johann Friedrich Blumenbach had put together a collection of skulls that he got from distinguished gentlemen with whom he corresponded all over Europe. There was no suggestion in Blumenbach’s work that the skulls were to support a racist division of humanity. All the same, from the time of his death, the collection was added to and added to, and became a core of racist pan-Germanic theory which was officially sanctioned by the National Socialist Party when it came to power.
When Hitler arrived in 1933, the tradition of scholarship in Germany was destroyed almost overnight. Now the train to Berlin was the symbol of flight. Europe was no longer hospitable to the imagination—and not just the scientific imagination. A whole conception of culture was in retreat: the conception that human knowledge is personal and responsible; an unending adventure at the edge of uncertainty. Silence fell as after the trial of Galileo. The great men went out into a threatened world. Max Born, Erwin Schrödinger, Albert Einstein, Sigmund Freud, Thomas Mann, Bertolt Brecht, Toscanini, Bruno Walter, Chagall, Enrico Fermi, Leo Szilard coming finally after many years to the Salk Institute in California.
The principle of uncertainty fixed once for all the realization that all knowledge is limited. It’s an irony of history that, at the very time when this was being worked out, there should rise—under Hitler in Germany and tyrants elsewhere—a counter-conception; a principle of monstrous certainty. When the future looks back on the 1930s it will think of them as a crucial confrontation of culture as I have been expounding it: the ascent of man against the throwback of despotic belief to the notion that they have absolute certainty.
I must put all these abstractions into concrete terms, and I want to do so in one personality: Leo Szilard, who was greatly engaged in them, and with whom I spent the last year or so of his life and many afternoons talking. Leo Szilard was a Hungarian whose university life was spent in Germany. In 1929 he published an important and pioneer paper on what would now be called information theory—the relation between knowledge, nature, and man. But by then Szilard was certain that Hitler would come to power, that war was inevitable. He kept two bags packed in his room, and by 1933 he’d locked them and taken them to England.
It happened that, in September of 1933, Lord Rutherford at the British Association meeting made some remark about atomic energy never becoming real. Leo Szilard was the kind of scientist—perhaps just the kind of good-humored cranky man—who disliked any statement that contained the word “never,” particularly when made by a distinguished colleague. So he set his mind to think about the problem. He tells the story as all of us who knew him would picture it. He was living at the Strand Palace Hotel—he loved living in hotels. He was walking to work at Bart’s Hospital, and as he came to Southampton Row he was stopped by a red light. That’s the only part of the story I find improbable: I never knew Szilard to stop for a red light. However, before the light turned to green, he had realized that if you hit an atom with one neutron and it happens to break up and release two, then you would have a chain reaction. He wrote a patent specification containing the words “chain reaction,” which was filed in 1934.
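Szilard’s traffic-light insight, one neutron in and two neutrons out, is just doubling at each generation. A toy sketch (idealized: every neutron goes on to cause a fission) shows how fast it runs away:

```python
# Idealized chain reaction: each fission absorbs one neutron and
# releases two, so the neutron population doubles every generation.
neutrons = 1
for generation in range(1, 11):
    neutrons *= 2
    print(f"generation {generation:2d}: {neutrons} neutrons")

# Within about 80 generations, a tiny fraction of a second in a real
# assembly, the count exceeds 10**24: enough neutrons to fission a
# macroscopic quantity of material.
print(f"after 80 generations: {2 ** 80:e}")
```

In a real assembly fewer than two neutrons survive per fission, so the doubling is slower, but the exponential character, and the conclusion Szilard drew from it, is the same.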
And now we come to a part of Szilard’s personality which was characteristic of scientists at that time, but which he expressed most clearly and loudly. He wanted to keep the patent secret. He wanted to prevent science from being misused, and in fact he assigned the patent to the British Admiralty so that it was not published until after the war. But meanwhile, war was becoming more and more inevitable. The march of progress in nuclear physics and the march of Hitler went step by step, pace by pace, in a way that we forget now. Early in 1939, Szilard wrote to Joliot-Curie, asking him if one could make a prohibition on publication. He tried to get Fermi not to publish. But finally, in August of 1939, he wrote a letter which Einstein signed and sent to President Roosevelt, saying: nuclear energy is here, war is inevitable, it is for the president to decide what scientists should do about it.
But he didn’t stop. When in 1945 the war had been won, and he realized that the bomb was now about to be made and to be used on the Japanese, he marshaled protests everywhere he could. He wrote memorandum after memorandum. One memorandum to President Roosevelt only failed because Roosevelt died during the very days that he was transmitting it to him. Always he wanted the bomb to be tested before the Japanese and an international audience, so the Japanese should know and should surrender before people died. As you know, Szilard failed, and with him the community of scientists failed. The first atomic bomb was dropped on Hiroshima in Japan on the 6th of August, 1945, at 8:15 in the morning.
I had not been long back from Hiroshima when I heard someone say in Szilard’s presence that it was the tragedy of scientists that their discoveries were used for destruction. Szilard replied—as he, more than anyone else, had the right to reply—that it was not the tragedy of scientists, it is the tragedy of mankind.
There are two parts to the human dilemma. One is the belief that the end justifies the means. That push button philosophy, that deliberate deafness to suffering, has become the monster in the war machine. The other is the betrayal of the human spirit. The assertion of dogma that closes the mind and turns a nation, a civilization, into a regiment of ghosts—obedient ghosts or tortured ghosts.
It’s said that science will dehumanize people and turn them into numbers. That’s false—tragically false. Look for yourself. This is the concentration camp and crematorium at Auschwitz. This is where people were turned into numbers. Into this pond were flushed the ashes of some four million people. And that was not done by gas. It was done by arrogance, it was done by dogma, it was done by ignorance. When people believe that they have absolute knowledge with no test in reality, this is how they behave. This is what men do when they aspire to the knowledge of Gods.
Science is a very human form of knowledge. We are always at the brink of the known. We always feel forward for what is to be hoped. Every judgment in science stands on the edge of error and is personal. Science is a tribute to what we can know although we are fallible. In the end, the words were said by Oliver Cromwell: “I beseech you, in the bowels of Christ, think it possible you may be mistaken.”
I owe it as a scientist to my friend Leo Szilard. I owe it as a human being to the many members of my family who died here to stand here as a survivor and a witness. We have to cure ourselves of the itch for absolute knowledge and power. We have to close the distance between the push-button order and the human act. We have to touch people.