
Until the 20th century the centres of mathematics research in the West were all located in Europe. Although the University of Göttingen in Germany, the University of Cambridge in England, the French Academy of Sciences and the University of Paris, and the University of Moscow in Russia retained their importance, the United States rose in prominence and reputation for mathematical research, especially the departments of mathematics at Princeton University and the University of Chicago.


German mathematician and philosopher David Hilbert transformed the study of geometry. He also began an effort to establish a consistent basis for all of mathematics, but later mathematicians showed that this was impossible.

At the Second International Congress of Mathematicians held in Paris in 1900, German mathematician David Hilbert spoke to the assembly. Hilbert was a professor at the University of Göttingen, the former academic home of Gauss and Riemann. Hilbert’s speech at Paris was a survey of 23 mathematical problems that he felt would guide the work being done in mathematics during the coming century. These problems stimulated a great deal of the mathematical research of the 20th century, and many of the problems were solved. When news breaks that another ‘Hilbert problem’ has been solved, mathematicians worldwide impatiently await further details.

Hilbert contributed to most areas of mathematics, starting with his classic Grundlagen der Geometrie (Foundations of Geometry), published in 1899. Hilbert’s work created the field of functional analysis (the analysis of whole classes of functions at once, rather than one function at a time), a field that occupied many mathematicians during the 20th century. He also contributed to mathematical physics. From 1915 on he fought to have Emmy Noether, a noted German mathematician, hired at Göttingen. When the university refused to hire her because of objections to the presence of a woman in the faculty senate, Hilbert countered that the senate was not the changing room for a swimming pool. Noether later made major contributions to ring theory in algebra and wrote a standard text on abstract algebra.

Alfred North Whitehead, a British mathematician and metaphysician, collaborated with fellow British mathematician Bertrand Russell on the book Principia Mathematica, in which they discussed the logical foundations of mathematics. For centuries mathematicians had advanced the study of mathematics without understanding exactly how math worked. The three-volume Principia identified and formalized the basic assumptions of mathematics and demonstrated how familiar truths, such as one plus one equals two, followed from them.

In some ways pure mathematics became more abstract in the 20th century, as it joined forces with the field of symbolic logic in philosophy. The scholars who bridged the fields of mathematics and philosophy early in the century were Alfred North Whitehead and Bertrand Russell, who worked together at Cambridge University. They published their major work, Principia Mathematica (Principles of Mathematics), in three volumes from 1910 to 1913. In it they demonstrated the principles of mathematical logic and attempted to show that all of mathematics could be deduced from a few premises and definitions by the rules of formal logic. In the late 19th century, German mathematician Gottlob Frege had provided the system of notation for mathematical logic and paved the way for the work of Russell and Whitehead. Mathematical logic influenced the direction of 20th-century mathematics, including the work of Hilbert.

Hilbert proposed that the underlying consistency of all mathematics could be demonstrated within mathematics. But Austrian logician Kurt Gödel proved that this goal is unattainable: any consistent mathematical theory rich enough to include arithmetic contains true statements that cannot be proved within the theory, and such a theory cannot establish its own consistency. Despite its negative conclusion, Gödel’s theorem, published in 1931, opened up new areas in mathematical logic. One area, known as recursion theory, played a major role in the development of computers.

In the early 20th century Albert Einstein published several mathematical papers that were pivotal in the development of physics and that opened new areas of research in mathematics. They included papers on his theory of relativity. Einstein later joked, ‘Since the mathematicians have invaded the theory of relativity, I do not understand it myself anymore.’

Several revolutionary theories, including relativity and quantum theory, challenged existing assumptions in physics in the early 20th century. The work of a number of mathematicians contributed to these theories. Among them was Noether, whose gender had denied her a paid position at the University of Göttingen. Noether’s mathematical formulations on invariants (quantities that remain unchanged as other quantities change) contributed to Einstein’s theory of relativity. Russian-born German mathematician Hermann Minkowski contributed to relativity the notion of the space-time continuum, with time as a fourth dimension. Hermann Weyl, a student of Hilbert’s, investigated the geometry of relativity and applied group theory to quantum mechanics. Weyl’s investigations helped advance the field of topology. Early in the century Hilbert quipped, ‘Physics is getting too difficult for physicists.’


After presenting his general theory of relativity in 1915, German-born American physicist Albert Einstein tried in vain to unify his theory of gravitation with one that would include all the fundamental forces in nature. Einstein discussed his special and general theories of relativity and his work toward a unified field theory in a 1950 Scientific American article. At the time, he was not convinced that he had discovered a valid solution capable of extending his general theory of relativity to other forces. He died in 1955, leaving this problem unsolved.

Hungarian-born American mathematician John von Neumann built a solid mathematical basis for quantum theory with his text Mathematische Grundlagen der Quantenmechanik (1932, Mathematical Foundations of Quantum Mechanics). This investigation led him to explore algebraic operators and groups associated with them, opening a new area now known as the theory of von Neumann algebras. Von Neumann, however, is probably best known for his work in game theory and computers.

This false-colour optical map, covering about 4300 square degrees, or 10 percent of the sky, shows the distribution in space of some 2 million galaxies. Galaxies tend to clump together—in this image, black represents areas of empty space and blue represents the galaxies. The clustering of galaxies suggests that much more matter than just the visible stars must exist to hold the cluster together. Astrophysicists use mathematics and chaos theory to try to explain the clumping of matter in the universe.

During World War II (1939-1945) mathematicians and physicists worked together on developing radar, the atomic bomb, and other technology that helped defeat the Axis powers. Polish-born mathematician Stanislaw Ulam solved the problem of how to initiate fusion in the hydrogen bomb. Von Neumann participated in numerous US defence projects during the war.

Mathematics plays an important role today in cosmology and astrophysics, especially in research into big bang theory and the properties of black holes, antimatter, elementary particles, and other unobservable objects and events. Stephen Hawking, among the best-known cosmologists of the 20th century, in 1979 was appointed Lucasian Professor of Mathematics at the University of Cambridge, a position once held by Newton.

Mathematics formed an alliance with economics in the 20th century as the tools of mathematical analysis, algebra, probability, and statistics illuminated economic theories. A specialty called econometrics links enormous numbers of equations to form mathematical models for use as forecasting tools.

Game theory began in mathematics but had immediate applications in economics and military strategy. This branch of mathematics deals with situations in which some sort of decision must be made to maximize a profit—that is, to win. Its theoretical foundations were supplied by von Neumann in a series of papers written during the 1930s and 1940s. Von Neumann and economist Oskar Morgenstern published results of their investigations in The Theory of Games and Economic Behaviour (1944). John Nash, the Princeton mathematician profiled in the motion picture A Beautiful Mind, shared the 1994 Nobel Prize in economics for his work in game theory.

Considered a forerunner in the field of electronic computers, Alan Turing envisioned a device that could, in theory, perform any calculation. Referred to as the Turing Machine, it was designed to ‘read’ commands and data from a long piece of tape, using a table to determine the order in which the required operations would be carried out. In the related field of artificial intelligence, he originated the ‘Turing test,’ a process designed to determine if a computer can ‘think’ like a human.

Mathematicians, physicists, and engineers contributed to the development of computers and computer science. But the early, theoretical work came from mathematicians. English mathematician Alan Turing, working at Cambridge University, introduced the idea of a machine that could perform mathematical operations and solve equations. The Turing machine, as it became known, was a precursor of the modern computer. Through his work Turing brought together the elements that form the basis of computer science: symbolic logic, numerical analysis, electrical engineering, and a mechanical vision of human thought processes.
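
The scheme Turing described can be sketched in a few lines of modern code. The following Python fragment is only an illustration, with a made-up rule table that adds one to a binary number written on the tape; each rule maps the current state and the symbol under the head to a symbol to write, a direction to move, and a next state.

```python
# A minimal Turing machine: a tape, a read/write head, and a rule table.
def run_turing_machine(tape, rules, state="start", pos=0, max_steps=1000):
    tape = dict(enumerate(tape))               # sparse tape; blank = ' '
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, " ")
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip()

# Hypothetical rules that increment a binary number: walk to the right
# end of the number, then add one with carries propagating leftward.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", " "): (" ", "L", "add"),   # past the last digit; back up
    ("add", "0"):   ("1", "L", "done"),  # 0 + 1 = 1, no carry
    ("add", "1"):   ("0", "L", "add"),   # 1 + 1 = 0, carry one place left
    ("add", " "):   ("1", "L", "done"),  # carry past the leftmost digit
    ("done", "0"):  ("0", "L", "done"),
    ("done", "1"):  ("1", "L", "done"),
    ("done", " "):  (" ", "R", "halt"),
}

print(run_turing_machine("1011", rules))     # prints 1100 (11 + 1 = 12)
```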

American physicist John Atanasoff built the first rudimentary electronic computer in the late 1930s and early 1940s, although for several decades afterward credit for the first electronic computer went to the scientists who assembled the Electronic Numerical Integrator and Computer (ENIAC) for the United States military by 1945. Danish physicist Allan Mackintosh recounts in a Scientific American article how Atanasoff first conceived of the design principles that are still used in present-day computers.

Computer theory is the third area with which von Neumann is associated, in addition to mathematical physics and game theory. He established the basic principles on which computers operate. Turing and von Neumann both recognized the usefulness of the binary arithmetic system for storing computer programs.

The first commercially available electronic computer, UNIVAC I, was also the first computer to handle both numeric and textual information. Designed by John Presper Eckert, Jr., and John Mauchly, whose corporation subsequently passed to Remington Rand, the machine marked the beginning of the computer era. Here, a UNIVAC computer is shown in action. The central computer is in the background, and in the foreground is the supervisory control panel. Remington Rand delivered the first UNIVAC machine to the US Bureau of the Census in 1951.

The first large-scale digital computers were pioneered in the 1940s. In 1945 von Neumann wrote an influential report on the design of the EDVAC (Electronic Discrete Variable Automatic Computer), and he later directed the building of a computer at the Institute for Advanced Study in Princeton. Engineers John Eckert and John Mauchly built ENIAC (Electronic Numerical Integrator and Computer), which began operation at the University of Pennsylvania in 1946. As increasingly complex computers have been built, the field of artificial intelligence has drawn attention. Researchers in this field attempt to develop computer systems that can mimic human thought processes.

Mathematician Norbert Wiener, working at the Massachusetts Institute of Technology (MIT), also became interested in automatic computing and developed the field known as cybernetics. Cybernetics grew out of Wiener’s work on increasing the accuracy of bombsights during World War II. From this came a broader investigation of how information can be translated into improved performance. Cybernetics is now applied to communication and control systems in living organisms.

Computers have exercised an enormous influence on mathematics and its applications. As ever more complex computers are developed, their applications proliferate. Computers have given great impetus to areas of mathematics such as numerical analysis and finite mathematics. Computer science has suggested new areas for mathematical investigation, such as the study of algorithms. Computers also have become powerful tools in areas as diverse as number theory, differential equations, and abstract algebra. In addition, the computer has made possible the solution of several long-standing problems in mathematics, such as the four-colour theorem first proposed in the mid-19th century.

The four-colour theorem stated that four colours are sufficient to colour any map, given that any two countries with a contiguous boundary require different colours. Mathematicians at the University of Illinois finally confirmed the theorem in 1976 by means of a large-scale computer calculation that reduced the number of possible map configurations to slightly less than 2,000. The program they wrote ran to thousands of lines and took more than 1,200 hours to run. Many mathematicians, however, do not accept the result as a proof because it has never been checked by hand; such verification would require far too many human hours. Some mathematicians object to the solution’s lack of elegance. This complaint has been paraphrased, ‘a good mathematical proof is like a poem—this is a telephone directory.’

Hilbert inaugurated the 20th century by proposing 23 problems that he expected to occupy mathematicians for the next 100 years. A number of these problems, such as the Riemann hypothesis about prime numbers, remain unsolved in the early 21st century. Hilbert claimed, ‘If I were to awaken after having slept for a thousand years, my first question would be: Has the Riemann hypothesis been proven?’

The existence of old problems, along with new problems that continually arise, ensures that mathematics research will remain challenging and vital through the 21st century. Influenced by Hilbert, the Clay Mathematics Institute in Cambridge, Massachusetts, announced the Millennium Prize in 2000 for solutions to mathematics problems that have long resisted solution. Among the seven problems is the Riemann hypothesis. An award of $1 million awaits the mathematician who solves any of these problems.

Relativity, theory developed in the early 20th century that originally attempted to account for certain anomalies in the concept of relative motion but that, in its ramifications, has become one of the most important basic concepts in physical science. The theory of relativity, developed primarily by German-born American physicist Albert Einstein, is the basis for later demonstrations by physicists of the essential unity of matter and energy, of space and time, and of the forces of gravity and acceleration.

In 1905 German-born American physicist Albert Einstein published his first paper outlining the theory of relativity. It was ignored by most of the scientific community. In 1916 he published his second major paper on relativity, which altered mankind’s fundamental concepts of space and time.

With his brilliant theoretical work, German-born American physicist Albert Einstein single-handedly revolutionized 20th-century physics and opened up many new branches of scientific research. In this 1955 article from Scientific American, Nobel-laureate physicists Niels Bohr of Denmark and Isidor Isaac Rabi of the United States paid tribute to Einstein and discussed the importance of his contributions to physics.

In 1887 Albert Michelson and Edward Morley attempted to measure Earth’s speed with respect to the ether, a hypothetical substance thought to fill empty space. Their apparatus split a beam of light so that half went straight ahead and half went sideways. If the ether existed, they theorized, it should exert more of a drag on one of the beams than on the other as Earth moves through the ether along its orbit. They found no evidence of a difference in speed, however, which led to the demise of the ether theory.

Physical laws generally accepted by scientists before the development of the theory of relativity, now called classical laws, were based on the principles of mechanics enunciated late in the 17th century by the English mathematician and physicist Isaac Newton. Newtonian mechanics and relativistic mechanics differ in fundamental assumptions and mathematical development, but in most cases do not differ appreciably in net results; the behaviour of a billiard ball when struck by another billiard ball, for example, may be predicted by mathematical calculations based on either type of mechanics and produce approximately identical results. Inasmuch as the classical mathematics is enormously simpler than the relativistic, the former is the preferred basis for such a calculation. In cases of high speeds, however, assuming that one of the billiard balls was moving at a speed approaching that of light, the two theories would predict entirely different types of behaviour, and scientists today are quite certain that the relativistic predictions would be verified and the classical predictions would be proved incorrect.

In general, the difference between the two predictions of the behaviour of any moving object involves a factor discovered by the Dutch physicist Hendrik Antoon Lorentz and the Irish physicist George Francis FitzGerald late in the 19th century. This factor is generally represented by the Greek letter β (beta) and is determined by the velocity of the object in accordance with the following equation:

β = √(1 − v²/c²)

in which v is the velocity of the object and c is the velocity of light.
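
Evaluating this factor shows why classical and relativistic predictions agree so closely at ordinary speeds. A short Python sketch of the arithmetic:

```python
# The Lorentz-FitzGerald factor at an everyday speed and a high speed.
import math

c = 299792458.0      # speed of light, in metres per second

def beta(v):
    return math.sqrt(1.0 - (v / c) ** 2)

print(beta(10.0))    # a fast billiard ball: 0.9999999999999994,
                     # indistinguishable from 1 in practice
print(beta(0.9 * c)) # nine-tenths the speed of light: about 0.436
```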

To understand the nature of light and how it is normally created, it is necessary to study matter at its atomic level. Atoms are the building blocks of matter, and the motion of one of their constituents, the electron, leads to the emission of light in most sources.

When a photon, or packet of light energy, is absorbed by an atom, the atom gains the energy of the photon, and one of the atom’s electrons may jump to a higher energy level. The atom is then said to be excited. When an electron of an excited atom falls to a lower energy level, the atom may emit the electron’s excess energy in the form of a photon. The energy levels, or orbitals, of the atoms shown here have been greatly simplified to illustrate these absorption and emission processes.

Light can be emitted, or radiated, by electrons circling the nucleus of their atom. Electrons can circle atoms only in certain patterns called orbitals, and electrons have a specific amount of energy in each orbital. The amount of energy needed for each orbital is called an energy level of the atom. Electrons that circle close to the nucleus have less energy than electrons in orbitals farther from the nucleus. If the electron is in the lowest energy level, then no radiation occurs despite the motion of the electron. If an electron in a lower energy level gains some energy, it must jump to a higher level, and the atom is said to be excited. The motion of the excited electron causes it to lose energy, and it falls back to a lower level. The energy the electron releases is equal to the difference between the higher and lower energy levels. The electron may emit this quantum of energy in the form of a photon.
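
In symbols, the photon carries away exactly the difference between the two levels: its frequency f satisfies hf = E₂ − E₁, where E₂ and E₁ are the higher and lower energy levels and h is Planck’s constant, discussed later in this article.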

Each atom has a unique set of energy levels, and the energies of the corresponding photons it can emit make up what is called the atom’s spectrum. This spectrum is like a fingerprint by which the atom can be identified. The process of identifying a substance from its spectrum is called spectroscopy. The laws that describe the orbitals and energy levels of atoms are the laws of quantum theory. They were invented in the 1920s specifically to account for the radiation of light and the sizes of atoms.

The waves that accompany light are made up of oscillating, or vibrating, electric and magnetic fields, which are force fields that surround charged particles and influence other charged particles in their vicinity. These electric and magnetic fields change strength and direction at right angles, or perpendicularly, to each other in a plane (vertically and horizontally, for instance). The electromagnetic wave formed by these fields travels in a direction perpendicular to both fields (coming out of the plane). The relationship between the fields and the wave formed can be understood by imagining a wave in a taut rope. Grasping the rope and moving it up and down simulates the action of a moving charge upon the electric field. It creates a wave that travels along the rope in a direction that is perpendicular to the initial up-and-down movement.

Because electromagnetic waves are transverse (that is, the vibration that creates them is perpendicular to the direction in which they travel), they are similar to waves on a rope or waves travelling on the surface of water. Unlike these waves, however, which require a rope or water, light does not need a medium, or substance, through which to travel. Light from the Sun and distant stars reaches Earth by travelling through the vacuum of space.

The waves associated with natural sources of light are irregular, like the water waves in a busy harbor. Scientists think of such waves as being made up of many smooth waves, where the motion is regular and the wave stretches out indefinitely with regularly spaced peaks and valleys. Such regular waves are called monochromatic because they correspond to a single colour of light.

The wavelength of a monochromatic wave is the distance between two consecutive wave peaks. Wavelengths of visible light can be measured in metres or in nanometres (nm); a nanometre is one-billionth of a metre (about 0.4 ten-millionths of an inch). Frequency corresponds to the number of wavelengths that pass by a certain point in space in a given amount of time. This value is usually measured in cycles per second, or hertz (Hz). All electromagnetic waves travel at the same speed, so in one second, more short waves will pass by a point in space than will long waves. This means that shorter waves have a higher frequency than longer waves. The relationship between wavelength, speed, and frequency is expressed by the equation c = λf, where c is the speed of the light wave in m/s (3 × 10⁸ m/s in a vacuum), λ is the wavelength in metres, and f is the wave’s frequency in Hz.
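
As a quick illustration, the equation converts the wavelength limits of visible light (about 700 nm for red and 400 nm for violet, as discussed below) into frequencies:

```python
# The wave relation c = λf, rearranged as f = c / λ.
c = 3.0e8                            # speed of light in a vacuum, in m/s
for wavelength in (700e-9, 400e-9):  # red and violet, in metres
    print(c / wavelength)            # about 4.3e14 Hz and 7.5e14 Hz
```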

The amplitude of an electromagnetic wave is the height of the wave, measured from a point midway between a peak and a trough to the peak of the wave. This height corresponds to the maximum strength of the electric and magnetic fields and to the number of photons in the light.

The electromagnetic spectrum includes radio waves, microwaves, infrared light, visible light, ultraviolet light, x rays, and gamma rays. Visible light, which makes up only a tiny fraction of the electromagnetic spectrum, is the only electromagnetic radiation that humans can perceive with their eyes.

The electromagnetic spectrum refers to the entire range of frequencies or wavelengths of electromagnetic waves. Light traditionally refers to the narrow part of this range that can be seen by humans. The frequencies of these waves are very high, about one-half to three-quarters of a million billion (5 × 10¹⁴ to 7.5 × 10¹⁴) Hz. Their wavelengths range from 400 to 700 nm. X rays have wavelengths ranging from several thousandths of a nanometre to several nanometres, and radio waves have wavelengths ranging from several metres to several thousand metres.

Waves with frequencies a little lower than the range of human vision (and with wavelengths correspondingly longer) are called infrared. Waves with frequencies a little higher and wavelengths shorter than human eyes can see are called ultraviolet. About half the energy of sunlight at Earth’s surface is visible electromagnetic waves, about 3 percent is ultraviolet, and the rest is infrared.

Each different frequency or wavelength of visible light causes our eye to see a slightly different colour. The longest wavelength we can see is deep red at about 700 nm. The shortest wavelength humans can detect is deep blue or violet at about 400 nm. Most light sources do not radiate monochromatic light. What we call white light, such as light from the Sun, is a mixture of all the colours in the visible spectrum, with some represented more strongly than others. Human eyes respond best to green light at 550 nm, which is also approximately the brightest colour in sunlight at Earth’s surface.

Polarized light consists of individual photons whose electric field vectors are all aligned in the same direction. Ordinary light is unpolarized because the photons are emitted in a random manner, while laser light is polarized because the photons are emitted coherently. When light passes through a polarizing filter, the electric field interacts more strongly with molecules having certain orientations. This causes the incident beam to separate into two beams, whose electric vectors are perpendicular to each other. A horizontal filter, such as the one shown, absorbs photons whose electric vectors are vertical. The remaining photons are absorbed by a second filter turned 90° to the first. At other angles the intensity of transmitted light is proportional to the square of the cosine of the angle between the two filters. In the language of quantum mechanics, polarization is called state selection. Because photons have only two states, light passing through the filter separates into only two beams.
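
Written out, the caption’s cosine rule is the relation I = I₀ cos²θ, commonly known as Malus’s law: at 45° the factor cos²θ equals 1/2, so half the light passes; at 90° it equals 0, which is why crossed filters pass none.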


Polarization refers to the direction of the electric field in an electromagnetic wave. A wave whose electric field is oscillating in the vertical direction is said to be polarized in the vertical direction. The photons of such a wave would interact with matter differently than the photons of a wave polarized in the horizontal direction. The electric field in light waves from the Sun vibrates in all directions, so direct sunlight is called unpolarized. Sunlight reflected from a surface is partially polarized parallel to the surface. Polaroid sunglasses block light that is horizontally polarized and therefore reduce glare from sunlight reflecting off horizontal surfaces.

Photons may be described as packets of light energy, and scientists use this concept to refer to the particle-like aspect of light. Photons are unlike conventional particles, such as specks of dust or marbles, however, in that they are not limited to a specific volume in space or time. Photons are always associated with an electromagnetic wave of a definite frequency. In 1900 the German physicist Max Planck discovered that light energy is carried by photons. He found that the energy of a photon is equal to the frequency of its electromagnetic wave multiplied by a constant called h, or Planck's constant. This constant is very small because one photon carries little energy. Using the watt-second, or joule, as the unit of energy, Planck’s constant is 6.626 × 10⁻³⁴ (a decimal point followed by 33 zeros and then the digits 6626) joule-seconds. The energy consumed by a one-watt light bulb in one second, for example, is equivalent to two and a half million trillion photons of green light. Sunlight warms one square metre at the top of Earth’s atmosphere at noon at the equator with the equivalent of about 14 100-watt light bulbs. Light waves from the Sun, therefore, produce a very large number of photons.
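
The article’s photon count can be checked directly, as in this Python sketch, which combines E = hf with f = c/λ for green light at 550 nm:

```python
# How many photons of green light carry one joule (one watt-second)?
h = 6.626e-34        # Planck's constant, in joule-seconds
c = 3.0e8            # speed of light in a vacuum, in m/s
wavelength = 550e-9  # green light: 550 nm, expressed in metres

energy_per_photon = h * c / wavelength   # E = hf, with f = c / λ
print(energy_per_photon)                 # about 3.6e-19 joules
print(1.0 / energy_per_photon)           # about 2.8e18 photons per joule,
                                         # millions of trillions, as stated
```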

Sources of light differ in how they provide energy to the charged particles, such as electrons, whose motion creates the light. If the energy comes from heat, then the source is called incandescent. If the energy comes from another source, such as chemical or electric energy, the source is called luminescent.

In an incandescent light source, hot atoms collide with one another. These collisions transfer energy to some electrons, boosting them into higher energy levels. As the electrons release this energy, they emit photons. Some collisions are weak and some are strong, so the electrons are excited to different energy levels and photons of different energies are emitted. Candlelight is incandescent and results from the excited atoms of soot in the hot flame. Light from an incandescent light bulb comes from excited atoms in a thin wire called a filament that is heated by passing an electric current through it.

The Sun is an incandescent light source, and its heat comes from nuclear reactions deep below its surface. As the nuclei of atoms interact and combine in a process called nuclear fusion, they release huge amounts of energy. This energy passes from atom to atom until it reaches the surface of the Sun, where the temperature is about 6000°C (11,000°F). Different stars emit incandescent light of different frequencies—and therefore colour—depending on their mass and their age.

All thermal, or heat, sources have a broad spectrum, which means they emit photons with a wide range of energies. The colour of incandescent sources is related to their temperature, with hotter sources having more blue in their spectra, or ranges of photon energies, and cooler sources more red. About 75 percent of the radiation from an incandescent light bulb is infrared. Scientists learn about the properties of real incandescent light sources by comparing them to a theoretical incandescent light source called a black body. A black body is an ideal incandescent light source, with an emission spectrum that does not depend on what material the light comes from, but only on its temperature.

A luminescent light source absorbs energy in some form other than heat, and is therefore usually cooler than an incandescent source. The colour of a luminescent source is not related to its temperature. A fluorescent light is a type of luminescent source that makes use of chemical compounds called phosphors. Fluorescent light tubes are filled with mercury vapour and coated on the inside with phosphors. As electricity passes through the tube, it excites the mercury atoms and makes them emit blue, green, violet, and ultraviolet light. The electrons in phosphor atoms absorb the ultraviolet radiation, then release some energy as heat before emitting visible light with a lower frequency.

Phosphor compounds are also used to convert electron energy to light in a television picture tube. Beams of electrons in the tube collide with phosphor atoms in small dots on the screen, exciting the phosphor electrons to higher energy levels. As the electrons drop back to their original energy level, they emit some heat and visible light. The light from all the phosphor dots combines to form the picture.

In certain phosphor compounds, atoms remain excited for a long time before radiating light. A light source is called phosphorescent if the delay between energy absorption and emission is longer than one second. Phosphorescent materials can glow in the dark for several minutes after they have been exposed to strong light.

The aurora borealis and aurora australis (northern and southern lights) in the night sky in high latitudes are luminescent sources. Electrons in the solar wind that sweeps out from the Sun become deflected in Earth’s magnetic field and dip into the upper atmosphere near the north and south magnetic poles. The electrons then collide with atmospheric molecules, exciting the molecules’ electrons and making them emit light in the sky.

Chemiluminescence occurs when a chemical reaction produces molecules with electrons in excited energy levels that can then radiate light. The colour of the light depends on the chemical reaction. When chemiluminescence occurs in plants or animals it is called bioluminescence. Many creatures, from bacteria to fish, make light this way by manufacturing substances called luciferase and luciferin. Luciferase helps luciferin combine with oxygen, and the resulting reaction creates excited molecules that emit light. Fireflies use flashes of light to attract mates, and some fish use bioluminescence to attract prey or to confuse predators.

Not all light comes from atoms. In a synchrotron light source, electrons are accelerated by microwaves and kept in a circular orbit by large magnets. The whole machine, called a synchrotron, resembles a large artificial atom. The circulating electrons can be made to radiate very monochromatic light at a wide range of frequencies.

A laser is a special kind of light source that produces very regular waves that permit the light to be very tightly focused. Laser is actually an acronym for Light Amplification by Stimulated Emission of Radiation. Each radiating charge in a non-laser light source produces a light wave that may be a little different from the waves produced by the other charges. Laser sources have atoms whose electrons radiate all in step, or synchronously. As a result, the electrons produce light that is polarized, monochromatic, and coherent, which means that its waves remain in step, with their peaks and troughs coinciding, over long distances.

This coherence is made possible by the phenomenon of stimulated emission. If an atom is immersed in a light wave with a frequency, polarization, and direction the same as light that the atom could emit, then the radiation already present stimulates the atom to emit more of the same, rather than emit a slightly different wave. So the existing light is amplified by the addition of one more photon from the atom. A luminescent light source can provide the initial amplification, and mirrors are used to continue the amplification.

Lasers have many applications in medicine, scientific research, military technology, and communications. They provide a very focused, powerful, and controllable energy source that can be used to perform delicate tasks. Laser light can be used to drill holes in diamonds and to make microelectronic components. The precision of lasers helps doctors perform surgery without damaging the surrounding tissue. Lasers are useful for space communications because laser light can carry a great deal of information and travel long distances without losing signal strength.

For each way of producing light there is a corresponding way of detecting it. Just as heat produces incandescent light, for example, light produces measurable heat when it is absorbed by a material.

The photoelectric effect is a process in which an atom absorbs a photon that has so much energy that the photon sets one of the atom’s electrons free to move outside the atom. Part of the photon’s energy goes toward releasing the electron from the atom. This energy is called the activation energy of the electron. The rest of the photon’s energy is transferred to the released electron in the form of motion, or kinetic energy. Since the photon energy is proportional to frequency, the released electron, or photoelectron, moves faster when it has absorbed high-frequency light.
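
In standard notation this is the photoelectric equation KE = hf − W, where hf is the photon’s energy, W is the activation energy described above, and KE is the kinetic energy of the released electron. Raising the light’s frequency f therefore raises the electron’s speed, just as stated.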

Metals with low activation energies are used to make photodetectors and photoelectric cells whose electrical properties change in the presence of light. Solar cells use the photoelectric effect to convert sunlight into electricity. Solar cells are used in place of electric batteries in remote applications such as space satellites and roadside emergency telephones. Hand-held calculators and watches often use solar cells so that battery replacement is unnecessary.

The change induced in photographic film exposed to light is an example of photochemical detection of photons. Light induces a chemical change in photosensitive chemicals on film. The film is then processed to convert the chemical change into a permanent image and to remove the photosensitive chemicals from the film so it will not continue to change when it is viewed in full light.

Human vision works on a similar principle. Light of different frequencies causes different chemical changes in the eye. The chemical action generates nerve impulses that our brains interpret as colour, shape, and location of objects.

The different colours that appear to streak the surface of soap bubbles correspond to different wavelengths of visible light interfering with each other at that point on the bubble’s surface.

Light behaviour can be divided into two categories: how light interacts with matter and how light travels, or propagates, through space or through transparent materials. The propagation of light has much in common with the propagation of other kinds of waves, including sound waves and water waves.

Light from many sources, such as the Sun, appears white. When white light passes through a prism, however, it separates into a spectrum of different colours. The prism separates the light by refracting, or bending, light of different colours at different angles. Red light bends the least and violet light bends the most.

When light strikes a material, it interacts with the atoms in the material, and the corresponding effects depend on the frequency of the light and the atomic structure of the material. In transparent materials, the electrons in the material oscillate, or vibrate, while the light is present. This oscillation momentarily takes energy away from the light and then puts it back again. The result is to slow down the light wave without leaving energy behind. Denser materials generally slow the light more than less dense materials, but the effect also depends on the frequency or wavelength of the light. Under certain laboratory conditions, scientists can slow light down. In 2001 scientists brought a beam of light to a halt by temporarily trapping it within an extremely cold cloud of sodium atoms.


Materials that are not completely transparent either absorb light or reflect it. In absorbing materials, such as dark-coloured cloth, the energy of the oscillating electrons does not go back to the light. The energy instead goes toward increasing the motion of the atoms, which causes the material to heat up. The atoms in reflective materials, such as metals, re-radiate light that cancels out the original wave. Only the light re-radiated back out of the material is observed. All materials exhibit some degree of absorption, refraction, and reflection of light. The study of the behaviour of light in materials and how to use this behaviour to control light is called optics.

Refraction is the bending of a light ray as it passes from one substance to another. The light ray bends at an angle that depends on the difference between the speed of light in one substance and the next. Sunlight reflecting off a fish in water, for instance, changes to a higher speed and bends when it enters the air. The light appears to originate from a place in the water above the fish’s actual position.

Refraction is the bending of light when it passes from one kind of material into another. Because light travels at a different speed in different materials, it must change speeds at the boundary between two materials. If a beam of light hits this boundary at an angle, then light on the side of the beam that hits first will be forced to slow down or speed up before light on the other side hits the new material. This makes the beam bend, or refract, at the boundary. Light bouncing off an object underwater, for instance, travels first through the water and then through the air to reach an observer’s eye. From certain angles an object that is partially submerged appears bent where it enters the water because light from the part underwater is being refracted.

The refractive index of a substance measures how the substance affects light travelling through it. It is equal to the speed of light in a vacuum divided by the speed of light in that substance. When light travels between two materials with different refractive indexes, it bends at the boundary between them.

The refractive index of a material is the ratio of the speed of light in a vacuum to the speed of light inside the material. Because light of different frequencies travels at different speeds in a material, the refractive index is different for different frequencies. This means that light of different colours is bent by different angles as it passes from one material into another. This effect produces the familiar colourful spectrum seen when sunlight passes through a glass prism. The angle of bending at a boundary between two transparent materials is related to the refractive indexes of the materials through Snell’s Law, a mathematical formula that is used to design lenses and other optical devices to control light.
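
Snell’s law itself takes the form n₁ sin θ₁ = n₂ sin θ₂, where n₁ and n₂ are the refractive indexes on either side of the boundary and the angles are measured from the perpendicular. The following Python sketch applies it to light entering glass from air, using typical textbook index values:

```python
# Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
import math

n1, n2 = 1.00, 1.50            # refractive indexes: air, then glass
theta1 = math.radians(30.0)    # angle of incidence, from the normal

theta2 = math.asin(n1 * math.sin(theta1) / n2)
print(math.degrees(theta2))    # about 19.5 degrees: the ray bends
                               # toward the normal in the denser material
```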


Reflection also occurs when light hits the boundary between two materials. Some of the light hitting the boundary will be reflected back into the first material. If light strikes the boundary at an angle, the light is reflected at the same angle, similar to the way balls bounce when they hit the floor. Light that is reflected from a flat boundary, such as the boundary between air and a smooth lake, will form a mirror image. Light reflected from a curved surface may be focused into a point, a line, or onto an area, depending on the curvature of the surface.

Scattering occurs when the atoms of a transparent material are not smoothly distributed over distances greater than the length of a light wave, but are bunched up into lumps of molecules or particles. The sky is bright because molecules and particles in the air scatter sunlight. Light with higher frequencies and shorter wavelengths is scattered more than light with lower frequencies and longer wavelengths. The atmosphere scatters violet light the most, but human eyes do not see this colour, or frequency, well. The eye responds well to blue, though, which is the next most scattered colour. Sunsets look red because when the Sun is at the horizon, sunlight has to travel through a longer distance of atmosphere to reach the eye. The thick layer of air, dust and haze scatters away much of the blue. The spectrum of light scattered from small impurities within materials carries important information about the impurities. Scientists measure light scattered by the atmospheres of other planets in the solar system to learn about the chemical composition of the atmospheres.
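
The standard quantitative form of this effect is Rayleigh’s law: for particles much smaller than the wavelength, the scattered intensity varies as 1/λ⁴. Violet light at 400 nm is therefore scattered roughly (700/400)⁴, or about nine, times as strongly as red light at 700 nm, which is why the scattered sky is blue while the surviving sunset light is red.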

When light passes through a slit with a size that is close to the light’s wavelength, the light will diffract, or spread out in waves. When light passes through two slits, the waves from one slit will interfere with the waves from the other. Constructive interference occurs when a wavefront, or crest, from one wave coincides with a wavefront from another, forming a wave with a larger crest. Destructive interference occurs when a wavefront of one wave coincides with a trough of another, cancelling each other to produce a smaller wave or no wave at all.

The first successful theory of light wave motion in three dimensions was proposed by Dutch scientist Christian Huygens in 1678. Huygens suggested that light wave peaks form surfaces like the layers of an onion. In a vacuum, or a uniform material, the surfaces are spherical. These wave surfaces advance, or spread out, through space at the speed of light. Huygens also suggested that each point on a wave surface can act like a new source of smaller spherical waves, which may be called wavelets, that are in step with the wave at that point. The envelope of all the wavelets is a wave surface. An envelope is a curve or surface that touches a whole family of other curves or surfaces like the wavelets. This construction explains how light seems to spread away from a pinhole rather than going in one straight line through the hole. The same effect blurs the edges of shadows. Huygens’s principle, with minor modifications, accurately describes all forms of wave motion.

Interference in waves occurs when two waves overlap. If a peak of one wave is aligned with the peak of the second wave, then the two waves will produce a larger wave with a peak that is the sum of the two overlapping peaks. This is called constructive interference. If a peak of one wave is aligned with a trough of the other, then the waves will tend to cancel each other out and they will produce a smaller wave or no wave at all. This is called destructive interference.

In 1803 English scientist Thomas Young studied interference of light waves by letting light pass through a screen with two slits. In this configuration, the light from each slit spreads out according to Huygens’s principle and eventually overlaps with light from the other slit. If a screen is set up in the region where the two waves overlap, a point on the screen will be light or dark depending on whether the two waves interfere constructively or destructively. If the difference between the distance from one slit to a point on the screen and the other slit to the same point on the screen is an exact number of wavelengths, then light waves arriving at that point will be in step and constructively interfere, making the point bright. If the difference is an exact odd number of half wavelengths, then light waves will arrive out of step, with one wave’s trough arriving at the same time as another wave’s peak. The waves will destructively interfere, making the point dark. The resulting pattern is a series of parallel bright and dark lines on the screen.
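
For slits separated by a distance d, these conditions take the standard form d sin θ = mλ for the bright lines and d sin θ = (m + ½)λ for the dark lines, where θ is the angle from the straight-ahead direction and m is a whole number.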

Instruments called interferometers use various arrangements of reflectors to produce two beams of light, which are allowed to interfere. These instruments can be used to measure tiny differences in distance or in the speed of light in one of the beams by observing the interference pattern produced by the two beams.

Holography is another application of interference. A hologram is made by splitting a light wave in two with a partially reflecting mirror. One part of the light wave travels through the mirror and is sent directly to a photographic plate. The other part of the wave is reflected first toward a subject, a face for example, and then toward the plate. The resulting photograph is a hologram. Instead of being an image of the face, it is an image of the interference pattern between the two beams. A normal photograph records only the light and dark features of the face and ignores the positions of peaks and troughs of the light wave that form the interference pattern. Since the full light wave is restored when a hologram is illuminated, the viewer can see whatever the original wave contained, including the three-dimensional quality of the original face.

Diffraction is the spreading of light waves as they pass through a small opening or around a boundary. Young’s principle of interference can be applied to Huygens’s explanation of diffraction to explain fringe patterns in diffracted light. As a beam of light emerges from a slit in an illuminated screen, the light some distance away from the screen will consist of overlapping wavelets from different points of the light wave in the opening of the slit. When the light strikes a spot on a display screen across from the slit, these points are at different distances from the spot, so their wavelets can interfere and lead to a pattern of light and dark regions. The pattern produced by light from a single slit will not be as pronounced as a pattern from two slits. This is because there are an infinite number of interfering waves, one from each point emerging from the slit, and their interference patterns overlap one another.

Monochromatic light, or light of one colour, has several characteristics that can be measured. As discussed in the section on electromagnetic waves, the length of light waves is measured in metres, and the frequency of light waves is measured in hertz. The wavelength can be measured with interferometers, and the frequency determined from the wavelength and a measurement of the velocity of light in metres per second. Monochromatic light also has a well-defined polarization that can be measured using devices called polarimeters. Sometimes the direction of scattered light is also an important quantity to measure.

When light is considered as a source of illumination for human eyes, its intensity, or brightness, is measured in units that are based on a modernized version of the perceived brightness of a candle. These units include the rate of energy flow in light, which, for monochromatic light travelling in a single direction, is determined by the rate of flow of photons. The rate of energy flow in this case can be stated in watts, or joules per second. Usually light contains many colours and radiates in many directions away from a source such as a lamp.

Scientists use the units candela and lumen to measure the brightness of light as perceived by humans. These units account for the different response of the eye to light of different colours. The lumen measures the total amount of energy in the light radiated in all directions, and the candela measures the amount radiated in a particular direction. The candela was originally called the candle, and it was defined in terms of the light produced by a standard candle. It is now defined as the energy flow in a given direction of a yellow-green light with a frequency of 540 × 10¹² Hz and a radiant intensity, or energy output, of 1/683 watt into the opening of a cone of one steradian. The steradian is a measure of angle in three dimensions.

The lumen can be defined in terms of a source that radiates one candela uniformly in all directions. If a sphere with a radius of one foot were centred on the light source, then one square foot of the inside surface of the sphere would be illuminated with a flux of one lumen. Flux means the rate at which light energy is falling on the surface. The illumination, or luminance, of that one square foot is defined to be one foot-candle.


The illumination at a different distance from a source can be calculated from the inverse square law: one lumen of flux spreads out over an area that increases as the square of the distance from the centre of the source. This means that the light per square foot decreases as the inverse square of the distance from the source. For instance, if 1 square foot of a surface that is 1 foot away from a source has an illumination of 1 foot-candle, then 1 square foot of a surface that is 4 feet away will have an illumination of 1/16 foot-candle. This is because 4 feet away from the source, the 1 lumen of flux landing on 1 square foot has had to spread out over 16 square feet. In the metric system, the unit of luminous flux is also called the lumen, and the unit of illumination, defined as one lumen per square metre, is called the lux.
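
The arithmetic fits in a single line of code. This Python sketch simply reproduces the worked example above:

```python
# Inverse square law: flux per unit area falls off as 1 / distance**2.
def illumination(flux_lumens, distance_feet):
    return flux_lumens / distance_feet ** 2    # in foot-candles

print(illumination(1.0, 1.0))   # 1.0 foot-candle at 1 foot
print(illumination(1.0, 4.0))   # 0.0625, i.e. 1/16 foot-candle at 4 feet
```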

Scientists have defined the speed of light in a vacuum to be exactly 299,792,458 metres per second (about 186,000 miles per second). This definition is possible because, since 1983, scientists have been able to measure the distance light travels in a given time more accurately than the old standard metre could be measured. Therefore, in 1983, scientists defined the metre as the distance light travels through a vacuum in 1/299,792,458 of a second. This precise measurement is the latest step in a long history of measurement, beginning in the early 1600s with an unsuccessful attempt by Italian scientist Galileo to measure the speed of lantern light from one hilltop to another.

The first successful measurements of the speed of light were astronomical. In 1676 Danish astronomer Olaus Roemer noticed a delay in the eclipse of a moon of Jupiter when it was viewed from the far side as compared with the near side of Earth’s orbit. Assuming the delay was the travel time of light across Earth’s orbit, and knowing roughly the orbital size from other observations, he divided distance by time to estimate the speed.
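
With modern values, which Roemer did not have, the arithmetic is straightforward: light crossing the diameter of Earth’s orbit, about 3.0 × 10¹¹ m, is delayed by roughly 1,000 seconds, and 3.0 × 10¹¹ m divided by 1,000 s gives about 3 × 10⁸ m/s. Roemer’s own figures were cruder, so his estimate came out noticeably low.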

English physicist James Bradley obtained a better measurement in 1729. Bradley found it necessary to keep changing the tilt of his telescope to catch the light from stars as Earth went around the Sun. He concluded that Earth’s motion was sweeping the telescope sideways relative to the light that was coming down the telescope. The angle of tilt, called the stellar aberration, is approximately the ratio of the orbital speed of Earth to the speed of light. (This is one of the ways scientists determined that Earth moves around the Sun and not vice versa.)

In the mid-19th century, French physicist Armand Fizeau directly measured the speed of light by sending a narrow beam of light between gear teeth in the edge of a rotating wheel. The beam then traveled a long distance to a mirror and came back to the wheel where, if the spin were fast enough, a tooth would block the light. Knowing the distance to the mirror and the speed of the wheel, Fizeau could calculate the speed of light. During the same period, the French physicist Jean Foucault made other, more accurate experiments of this sort with spinning mirrors.
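
Fizeau’s calculation can be reconstructed approximately; the figures below are commonly quoted values for his 1849 setup and are given here only for illustration:

```python
# Fizeau's toothed-wheel arithmetic with approximate historical values.
distance = 8633.0     # one-way path from wheel to mirror, in metres
teeth = 720           # number of teeth on the wheel
rev_per_sec = 12.6    # spin rate at which the returning beam vanished

# The beam vanishes when the wheel turns half a tooth spacing during
# the round trip, so the trip takes 1 / (2 * teeth * rev_per_sec).
round_trip_time = 1.0 / (2 * teeth * rev_per_sec)
print(2 * distance / round_trip_time)   # about 3.1e8 m/s, close to c
```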


Scientists needed accurate measurements of the speed of light because they were looking for the medium that light traveled in. They called the medium ether, which they believed waved to produce the light. If ether existed, then the speed of light should appear larger or smaller depending on whether the person measuring it was moving toward or away from the ether waves. However, all measurements of the speed of light in different moving reference frames gave the same value.

In 1887 American physicists Albert A. Michelson and Edward Morley performed a very sensitive experiment designed to detect the effects of ether. They constructed an interferometer with two light beams—one that pointed along the direction of Earth’s motion, and one that pointed in a direction perpendicular to Earth’s motion. The beams were reflected by mirrors at the ends of their paths and returned to a common point where they could interfere. Along the first beam, the scientists expected Earth’s motion to increase or decrease the beam’s velocity so that the number of wave cycles throughout the path would be changed slightly relative to the second beam, resulting in a characteristic interference pattern. Knowing the velocity of Earth, the scientists could predict the change in the number of cycles and the resulting interference pattern. The Michelson-Morley apparatus was fully capable of measuring this change, but no such change was found.

The paradox of the constancy of the speed of light created a major problem for physical theory that German-born American physicist Albert Einstein finally resolved in 1905. Einstein suggested that physical theories should not depend on the state of motion of the observer. Instead, Einstein said the speed of light had to remain constant, and all the rest of physics had to be changed to be consistent with this fact. This special theory of relativity predicted many unexpected physical consequences, all of which have since been observed in nature.

The earliest speculations about light were hindered by the lack of knowledge about how the eye works. The Greek philosophers, from as early as Pythagoras in the 6th century bc, believed light issued forth from visible things, but most also thought vision, as distinct from light, proceeded outward from the eye. Plato gave a version of this theory in his dialogue Timaeus, written in the 4th century bc, which greatly influenced later thought.

Some early ideas of the Greeks, however, were correct. The philosopher and statesman Empedocles believed that light travels with finite speed, and the philosopher and scientist Aristotle accurately explained the rainbow as a kind of reflection from raindrops. The Greek mathematician Euclid understood the law of reflection and the properties of mirrors. Early thinkers also observed and recorded the phenomenon of refraction, but they did not know its mathematical law. The mathematician and astronomer Ptolemy was the first person on record to collect experimental data on optics, but he too believed vision issued from the eye. His work was further developed by the Arab scientist Ibn al-Haytham, who worked in Iraq and Egypt and was known to Europeans as Alhazen. Through logic and experimentation, Alhazen finally discounted Plato’s theory that vision issued forth from the eye. In Europe, Alhazen was the most well known among a group of Islamic scholars who preserved and built upon the classical Greek tradition. His work influenced all later investigations on light.

British physicist James Clerk Maxwell, considered one of the 19th century’s most important scientists, was the first to demonstrate that light consists of electromagnetic waves. Building upon the ideas of British scientist Michael Faraday, Maxwell developed his electromagnetic theory of light. This and other works by Maxwell helped pave the way for some of the major advances in physics in the 20th century. His treatise ‘A Dynamical Theory of the Electromagnetic Field’ (1864) contains the fundamental equations that describe the electromagnetic field.

The early modern scientists Galileo of Italy, Johannes Kepler of Germany, and René Descartes of France all made contributions to the understanding of light. Descartes discussed optics and reported the law of refraction in La Dioptrique (Optics), one of the essays published with his famous Discours de la méthode (Discourse on Method) in 1637. The Dutch astronomer and mathematician Willebrord Snell had independently discovered the law of refraction in 1621, and the law is now named after him.

During the late 1600s, an important question emerged: Is light a swarm of particles or is it a wave in some pervasive medium through which ordinary matter freely moves? English physicist Sir Isaac Newton was a proponent of the particle theory, and Dutch physicist Christiaan Huygens developed the wave theory at about the same time. At the time it seemed that wave theories could not explain optical polarization, because the waves scientists were familiar with, such as sound waves, oscillate parallel, not perpendicular, to the direction of wave travel. On the other hand, Newton had difficulty explaining the phenomenon of interference of light; his explanation forced a wavelike property onto a particle description. Newton’s great prestige, coupled with the difficulty of explaining polarization, caused the scientific community to favour the particle theory, even after English physicist Thomas Young analyzed a new class of interference phenomena using the wave theory in 1803.

English physicist, mathematician, and natural philosopher Sir Isaac Newton, one of the foremost figures in the history of Western science, produced insights into the natural world that were based on rigorously conducted experiments. Latin was the language of science at the time, but Newton also expressed himself in precise and direct English, as in this letter. Here, Newton reports on his experiments with prisms, which were begun in 1666 and led him to formulate a theory of the nature of light and colour.

The wave theory was finally accepted after French physicist Augustin Fresnel supported Young’s ideas with mathematical calculations in 1815 and predicted surprising new effects. Irish mathematician Sir William Rowan Hamilton clarified the relationship between wave and particle viewpoints by developing a theory that unified optics and mechanics. Hamilton’s theory was important in the later development of quantum mechanics.

Between the time of Newton and Fresnel, scientists developed mathematical techniques to describe wave phenomena in fluids and solids. Fresnel and his successors were able to use these advances to create a theory of transverse waves that would account for the phenomenon of optical polarization. As a result, an entire wave theory of light existed in mathematical form before British physicist James Clerk Maxwell began his work on electromagnetism. In his theory of electromagnetism, Maxwell showed that electric and magnetic fields affect each other in such a way as to permit waves to travel through space. The equations he derived to describe these electromagnetic waves matched the equations scientists already knew to describe light. Maxwell’s equations, however, were more general in that they described electromagnetic phenomena other than light and they predicted waves throughout the electromagnetic spectrum. In addition, his theory gave the correct speed of light in terms of the properties of electricity and magnetism. When German physicist Heinrich Hertz later detected electromagnetic waves at lower frequencies, which the theory predicted, the basic correctness of Maxwell’s theory was confirmed.

Maxwell’s work left unsolved a problem common to all wave theories of light. A wave is a continuous phenomenon, which means that when it travels, its electromagnetic field must move at each of the infinite number of points in every small part of space. When we add heat to any system to raise its temperature, the energy is shared equally among all the parts of the system that can move. When this idea is applied to light, with an infinite number of moving parts, it appears to require an infinite amount of heat to give all the parts equal energy. But thermal radiation, the process in which heated objects emit electromagnetic waves, occurs in nature with a finite amount of heat. Something that could account for this process was missing from Maxwell’s theory. In 1900 Max Planck provided the missing concept. He proposed the existence of a light quantum, a finite packet of energy that became known as the photon.
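
The difficulty, and Planck’s resolution of it, can be illustrated numerically. A minimal Python sketch, assuming an arbitrary temperature of 5000 K, compares the classical equipartition (Rayleigh-Jeans) prediction with Planck’s quantum formula; the classical value grows without limit at high frequency, while Planck’s falls to zero:

    import math

    # Spectral energy density of thermal radiation at temperature T:
    # classical equipartition versus Planck's quantum law.
    h = 6.626e-34   # Planck's constant, J*s
    k = 1.381e-23   # Boltzmann's constant, J/K
    c = 3.0e8       # speed of light, m/s
    T = 5000.0      # temperature, K (an assumed example value)

    def rayleigh_jeans(f):
        return 8 * math.pi * f**2 * k * T / c**3

    def planck(f):
        return (8 * math.pi * h * f**3 / c**3) / math.expm1(h * f / (k * T))

    for f in (1e13, 1e14, 1e15, 1e16):   # frequencies in Hz
        print(f"{f:.0e} Hz  classical {rayleigh_jeans(f):.3e}  Planck {planck(f):.3e}")
    # The classical value keeps growing with frequency (an infinite
    # total energy); Planck's quanta suppress it exponentially.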

Planck’s theory remained mystifying until Einstein showed how it could be used to explain the photoelectric effect, in which the speed of ejected electrons was related not to the intensity of light but to its frequency. This relationship was consistent with Planck’s theory, which suggested that a photon’s energy was related to its frequency. During the next two decades scientists recast all of physics to be consistent with Planck’s theory. The result was a picture of the physical world that was different from anything ever before imagined. Its essential feature is that all matter appears in physical measurements to be made of quantum bits, which are something like particles. Unlike the particles of Newtonian physics, however, a quantum particle cannot be viewed as having a definite path of movement that can be predicted through laws of motion. Quantum physics only permits the prediction of the probability of where particles may be found. The probability is the squared amplitude of a wave field, sometimes called the wave function associated with the particle. For photons the underlying probability field is what we know as the electromagnetic field. The current world view that scientists use, called the Standard Model, divides particles into two categories: fermions (building blocks of atoms, such as electrons, protons, and neutrons), which cannot exist in the same place at the same time, and bosons, such as photons, which can subsist as inferred by Elementary Particles. Bosons are the quantum particles associated with the force fields that act on the fermions. Just as the electromagnetic field is a combination of electric and magnetic force fields, there is an even more general field called the electroweak field. This field combines electromagnetic forces and the weak nuclear force. The photon is one of four bosons associated with this field. The other three bosons have large masses and decay, or break apart, quickly to lighter components outside the nucleus of the atom.

Hendrik Antoon Lorentz (1853-1928), Dutch physicist and Nobel laureate. Lorentz was born in Arnhem and educated at the University of Leiden, where he became professor of mathematical physics in 1878. He developed the electromagnetic theory of light and the electron theory of matter and formulated a consistent theory of electricity, magnetism, and light. With the Irish physicist George Francis FitzGerald, he formulated a theory on the change in shape of a body resulting from its motion; the effect, known as the Lorentz-FitzGerald contraction, was one of several important contributions that Lorentz made to the development of the theory of relativity. For his explanation of the phenomenon known as the Zeeman effect, Lorentz shared the 1902 Nobel Prize in physics with the Dutch physicist Pieter Zeeman.

The beta factor does not differ essentially from unity for any velocity that is ordinarily encountered; the highest velocity encountered in ordinary ballistics, for example, is about 1.6 km/sec (about 1 mi/sec), the highest velocity obtainable by a rocket propelled by ordinary chemicals is a few times that, and the velocity of the earth as it moves around the sun is about 29 km/sec (about 18 mi/sec); at the last-named speed, the value of beta differs from unity by only five billionths. Thus, for ordinary terrestrial phenomena, the relativistic corrections are of little importance. When velocities are very large, however, as is sometimes the case in astronomical phenomena, relativistic corrections become significant. Similarly, relativity is important in calculating very large distances or very large aggregations of matter. As the quantum theory applies to the very small, so the relativity theory applies to the very large.
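
A short Python calculation reproduces these figures, taking the beta factor to be sqrt(1 - v^2/c^2), as the mass and length relations quoted below imply:

    import math

    # Beta factor sqrt(1 - v^2/c^2) for the velocities quoted above.
    c = 2.998e8   # speed of light, m/s
    for label, v in [("artillery shell", 1.6e3),
                     ("fast rocket", 5.0e3),
                     ("Earth's orbital motion", 2.9e4)]:
        beta = math.sqrt(1 - (v / c) ** 2)
        print(f"{label}: beta = {beta:.12f}, 1 - beta = {1 - beta:.2e}")
    # For Earth's 29 km/sec, 1 - beta is about 5e-9: five billionths.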

Until 1887 no flaw had appeared in the rapidly developing body of classical physics. In that year, the Michelson-Morley experiment, named after the American physicist Albert Michelson and the American chemist Edward Williams Morley, was performed. It was an attempt to determine the rate of the motion of the earth through the ether, a hypothetical substance that was thought to transmit electromagnetic radiation, including light, and was assumed to permeate all space. If the sun is at absolute rest in space, then the earth must have a constant velocity of 29 km/sec (18 mi/sec), caused by its revolution about the sun; if the sun and the entire solar system are moving through space, however, the constantly changing direction of the earth's orbital velocity will cause this value of the earth's motion to be added to the velocity of the sun at certain times of the year and subtracted from it at others. The result of the experiment was entirely unexpected and inexplicable; the apparent velocity of the earth through this hypothetical ether was zero at all times of the year.

What the Michelson-Morley experiment actually measured was the velocity of light through space in two different directions. If a ray of light is moving through space at 300,000 km/sec (186,000 mi/sec), and an observer is moving in the same direction at 29 km/sec (18 mi/sec), then the light should move past the observer at the rate of 299,971 km/sec (185,982 mi/sec); if the observer is moving in the opposite direction, the light should move past the observer at 300,029 km/sec (186,018 mi/sec). It was this difference that the Michelson-Morley experiment failed to detect. This failure could not be explained on the hypothesis that the passage of light is not affected by the motion of the earth, because such an effect had been observed in the phenomenon of the aberration of light.

In the 1890s FitzGerald and Lorentz advanced the hypothesis that when any object moves through space, its length in the direction of its motion is altered by the factor beta. The negative result of the Michelson-Morley experiment was explained by the assumption that the light actually traversed a shorter distance in the same time (that is, moved more slowly), but that this effect was masked because the distance was measured of necessity by some mechanical device which also underwent the same shortening, just as when an object 2 m long is measured with a 3-m tape measure which has shrunk to 2 m, the object will appear to be 3 m in length. Thus, in the Michelson-Morley experiment, the distance which light travelled in 1 sec appeared to be 300,000 km (186,000 mi) regardless of how fast the light actually travelled. The Lorentz-FitzGerald contraction was considered by scientists to be an unsatisfactory hypothesis because it could not be applied to any problem in which measurements of absolute motion could be made.

In 1905, Einstein published the first of two important papers on the theory of relativity, in which he dismissed the problem of absolute motion by denying its existence. According to Einstein, no particular object in the universe is suitable as an absolute frame of reference that is at rest with respect to space. Any object (such as the centre of the solar system) is a suitable frame of reference, and the motion of any object can be referred to that frame. Thus, it is equally correct to say that a train moves past the station, or that the station moves past the train. This example is not as unreasonable as it seems at first sight, for the station is also moving, due to the motion of the earth on its axis and its revolution around the sun. All motion is relative, according to Einstein. None of Einstein's basic assumptions was revolutionary; Newton had previously stated ‘absolute rest cannot be determined from the position of bodies in our regions.’ Einstein stated that the relative rate of motion between any observer and any ray of light is always the same, 300,000 km/sec (186,000 mi/sec). Thus two observers, moving relative to one another even at a speed of 160,000 km/sec (100,000 mi/sec), would each measure the same ray of light as moving at 300,000 km/sec (186,000 mi/sec); this apparently anomalous result was confirmed by the Michelson-Morley experiment. According to classical physics, one of the two observers was at rest, and the other made an error in measurement because of the Lorentz-FitzGerald contraction of his apparatus; according to Einstein, both observers had an equal right to consider themselves at rest, and neither had made any error in measurement. Each observer used a system of coordinates as the frame of reference for measurements, and these coordinates could be transformed one into the other by a mathematical manipulation. The equations for this transformation, known as the Lorentz transformation equations, were adopted by Einstein, but he gave them an entirely new interpretation: the speed of light is invariant in any such transformation.
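
The invariance of the speed of light under the Lorentz transformation can be checked directly. A minimal Python sketch, with an assumed relative speed of 160,000 km/sec between the two observers:

    import math

    # Lorentz transformation of an event (x, t) into a frame moving
    # at velocity u along x; a light ray has speed c for both observers.
    c = 3.0e8
    u = 1.6e8          # assumed relative speed of the second observer, m/s

    def lorentz(x, t, u):
        gamma = 1 / math.sqrt(1 - (u / c) ** 2)
        return gamma * (x - u * t), gamma * (t - u * x / c**2)

    t = 1.0            # one second after a light flash at the origin
    x = c * t          # position of the light front in the first frame
    x2, t2 = lorentz(x, t, u)
    print(x / t, x2 / t2)   # both equal c: each observer measures 3e8 m/s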

According to the relativistic transformation, not only would lengths in the line of motion of a moving object be altered but also time and mass. A clock in motion relative to an observer would seem to be slowed down, and any material object would seem to increase in mass, both by the beta factor. The electron, which had just been discovered, provided a means of testing the last assumption. Electrons emitted from radioactive substances have speeds close to the speed of light, so that the value of beta might be as small as 0.5, and the mass of the electron doubled. The mass of a rapidly moving electron could be easily determined by measuring the curvature produced in its path by a magnetic field; the heavier the electron, the greater its inertia and the less the curvature produced by a given strength of field. Experimentation dramatically confirmed Einstein's prediction; the electron increased in mass by exactly the amount he predicted. Thus, the kinetic energy of the accelerated electron had been converted into mass in accordance with the formula E = mc². Einstein's theory was also verified by experiments on the velocity of light in moving water and on magnetic forces in moving substances.
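
A sketch of the mass increase, using the modern electron rest mass:

    import math

    # Relativistic mass m = m0 / beta, with beta = sqrt(1 - v^2/c^2).
    c = 2.998e8
    m0 = 9.11e-31      # electron rest mass, kg

    for fraction in (0.1, 0.5, 0.866, 0.99):
        v = fraction * c
        beta = math.sqrt(1 - (v / c) ** 2)
        print(f"v = {fraction:.3f} c  beta = {beta:.3f}  m = {m0 / beta:.3e} kg")
    # At v of about 0.866 c, beta is 0.5 and the electron's mass doubles.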

The fundamental hypothesis on which Einstein's theory was based was the nonexistence of absolute rest in the universe. Einstein postulated that two observers moving relative to one another at a constant velocity would observe identically the phenomena of nature. One of these observers, however, might record two events on distant stars as having occurred simultaneously, while the other observer would find that one had occurred before the other; this disparity is not a real objection to the theory of relativity, because according to that theory simultaneity does not exist for distant events. In other words, it is not possible to specify uniquely the time when an event happens without reference to the place where it happens. Every particle or object in the universe is described by a so-called world line that describes its position in time and space. If two or more world lines intersect, an event or occurrence takes place; if the world line of a particle does not intersect any other world line, nothing has happened to it, and it is neither important nor meaningful to determine the location of the particle at any given instant. The ‘distance’ or ‘interval’ between any two events can be accurately described by means of a combination of space and time, but not by either of these separately. The space-time of four dimensions (three for space and one for time) in which all events in the universe occur is called the space-time continuum.
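
The relativity of simultaneity follows directly from the Lorentz transformation. In the sketch below, two events simultaneous in one frame (both at t = 0) occur at different times for a moving observer; the observer’s speed and the separation of the events are assumed values:

    import math

    # Two events simultaneous in one frame but at different places
    # are not simultaneous for a moving observer.
    c = 3.0e8
    u = 1.5e8                   # observer's speed, m/s (assumed)
    gamma = 1 / math.sqrt(1 - (u / c) ** 2)

    events = [(0.0, 0.0), (0.0, 1.0e12)]   # (t seconds, x metres)
    for t, x in events:
        t_prime = gamma * (t - u * x / c**2)
        print(f"t' = {t_prime:.1f} s")
    # The distant event occurs more than 1900 seconds earlier in the
    # moving frame, although both occur at t = 0 in the first frame.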

In 1915 Einstein developed the general theory of relativity in which he considered objects accelerated with respect to one another. He developed this theory to explain apparent conflicts between the laws of relativity and the law of gravity. To resolve these conflicts he developed an entirely new approach to the concept of gravity, based on the principle of equivalence.

The principle of equivalence holds that forces produced by gravity are in every way equivalent to forces produced by acceleration, so that it is theoretically impossible to distinguish between gravitational and accelerational forces by experiment. In the theory of special relativity, Einstein had stated that a person in a closed car rolling on an absolutely smooth railroad track could not determine by any conceivable experiment whether he was at rest or in uniform motion. In general relativity he stated that if the car were speeded up or slowed down or driven around a curve, the occupant could not tell whether the forces so produced were due to gravitation or whether they were acceleration forces brought into play by pressure on the accelerator or on the brake or by turning the car sharply to the right or left.

Acceleration is defined as the rate of change of velocity. Consider an astronaut standing in a stationary rocket. Because of gravity his or her feet are pressed against the floor of the rocket with a force equal to the person's weight, w. If the same rocket is in outer space, far from any other object and not influenced by gravity, the astronaut is again being pressed against the floor if the rocket is accelerating, and if the acceleration is 9.8 m/sec² (32 ft/sec²) (the acceleration of gravity at the surface of the earth), the force with which the astronaut is pressed against the floor is again equal to w. Without looking out of the window, the astronaut would have no way of telling whether the rocket was at rest on the earth or accelerating in outer space. The force due to acceleration is in no way distinguishable from the force due to gravity. According to Einstein's theory, Newton's law of gravitation is an unnecessary hypothesis; Einstein attributes all forces, both gravitational and those associated with acceleration, to the effects of acceleration. Thus, when the rocket is standing still on the surface of the earth, it is attracted toward the centre of the earth. Einstein states that this phenomenon of attraction is attributable to an acceleration of the rocket. In three-dimensional space, the rocket is stationary and therefore is not accelerated; but in four-dimensional space-time, the rocket is in motion along its world line. According to Einstein, the world line is curved, because of the curvature of the continuum in the neighbourhood of the earth.

Thus, Newton's hypothesis that every object attracts every other object in direct proportion to its mass is replaced by the relativistic hypothesis that the continuum is curved in the neighbourhood of massive objects. Einstein's law of gravity states simply that the world line of every object is a geodesic in the continuum. A geodesic is the shortest distance between two points, but in curved space it is not generally a straight line. In the same way, geodesics on the surface of the earth are great circles, which are not straight lines on any ordinary map.

The results of two studies announced in early November 1997 provide unprecedented support for ‘frame-dragging,’ a concept predicted by physicist Albert Einstein's general theory of relativity. Frame-dragging describes how massive objects actually distort space and time around themselves as they rotate. One of the studies examined frame-dragging around black holes.

As in the cases mentioned above, classical and relativistic predictions are generally virtually identical, but relativistic mathematics is more complex. The famous apocryphal statement that only ten people in the world understood Einstein's theory referred to the complex tensor algebra and Riemannian geometry of general relativity; by comparison, special relativity can be understood by any college student who has studied elementary calculus.

General relativity theory has been confirmed in a number of ways since it was introduced. For example, it predicts that the world line of a ray of light will be curved in the immediate vicinity of a massive object such as the sun. To verify this prediction, scientists first chose to observe a star appearing very close to the edge of the sun. Such observations cannot normally be made, because the brightness of the sun obscures a nearby star. During a total eclipse, however, stars can be observed and their positions accurately measured even when they appear quite close to the edge of the sun. Expeditions were sent out to observe the eclipses of 1919 and 1922 and made such observations. The apparent positions of the stars were then compared with their apparent positions some months later, when they appeared at night far from the sun. Einstein predicted an apparent shift in position of 1.745 seconds of arc for a star at the very edge of the sun, with progressively smaller shifts for more distant stars. The expeditions that were sent to study the eclipses verified these predictions. In recent years, comparable tests were made of radio-wave deflections from distant quasars, using radio-telescope interferometers. The tests yielded results that agreed, to within 1 percent, with the values predicted by general relativity.
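
The predicted deflection follows from the formula alpha = 4GM/(c²R) for a ray grazing a body of mass M and radius R; a sketch with standard solar values reproduces the figure quoted above:

    # General-relativistic deflection of starlight grazing the sun.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M = 1.989e30         # solar mass, kg
    c = 2.998e8          # speed of light, m/s
    R = 6.96e8           # solar radius, m

    alpha_rad = 4 * G * M / (c**2 * R)
    alpha_arcsec = alpha_rad * 206265      # radians to seconds of arc
    print(f"deflection at the sun's edge: {alpha_arcsec:.3f} arcsec")  # ~1.75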

Another confirmation of general relativity involves the perihelion of the planet Mercury. For many years it had been known that the perihelion (the point at which Mercury passes closest to the sun) advances about the sun, and that part of this motion, about 43 seconds of arc per century (equivalent to one full revolution in roughly 3 million years), is completely inexplicable by classical theories. The theory of relativity, however, does predict this part of the motion, and recent radar measurements of Mercury's orbit have confirmed this agreement to within about 0.5 percent.
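
The relativistic advance per orbit is 6·pi·GM/(a(1 - e²)c²), where a is the orbit’s semimajor axis and e its eccentricity. A sketch with Mercury’s orbital elements (assumed standard values) recovers the anomalous 43 seconds of arc per century:

    import math

    # Relativistic perihelion advance of Mercury.
    G = 6.674e-11
    M = 1.989e30          # solar mass, kg
    c = 2.998e8
    a = 5.79e10           # Mercury's semimajor axis, m
    e = 0.2056            # Mercury's orbital eccentricity
    period_years = 0.2408 # Mercury's orbital period

    dphi = 6 * math.pi * G * M / (a * (1 - e**2) * c**2)   # radians per orbit
    per_century = dphi * (100 / period_years) * 206265     # arcsec per century
    print(f"{per_century:.1f} arcsec per century")         # ~43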

Yet another phenomenon predicted by general relativity is the time-delay effect, in which signals sent past the sun to a planet or spacecraft on the far side of the sun experience a small delay, when relayed back, compared to the time of return as indicated by classical theory. Although the time intervals involved are very small, various tests made by means of planetary probes have provided values quite close to those predicted by general relativity. Numerous other tests of the theory could also be described, and thus far they have served to confirm it.
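
The magnitude of the delay can be estimated from the standard approximation, roughly (4GM/c³)·ln(4·r1·r2/R²) for a round-trip signal grazing the sun between bodies at distances r1 and r2 from it. The sketch below, with assumed Earth and Mars distances near superior conjunction, gives roughly 250 microseconds:

    import math

    # Shapiro time delay for a radar signal grazing the sun, round
    # trip Earth -> Mars -> Earth; distances are assumed values.
    G = 6.674e-11
    M = 1.989e30
    c = 2.998e8
    R = 6.96e8            # closest approach of the ray: solar radius, m
    r_e = 1.496e11        # sun-Earth distance, m
    r_p = 2.28e11         # sun-Mars distance, m

    delay = (4 * G * M / c**3) * math.log(4 * r_e * r_p / R**2)
    print(f"extra round-trip delay: {delay * 1e6:.0f} microseconds")  # ~250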

The general theory of relativity predicts that a massive rotating body will drag space and time around with it as it moves. This effect, called frame dragging, is more noticeable if the object is very massive and very dense. In 1997 a group of Italian astronomers announced that they had detected frame dragging around very dense, rapidly spinning astronomical objects called neutron stars. The astronomers found evidence of frame dragging by examining radiation emitted when the gravitational pull of a dense neutron star sucks matter onto its surface. This radiation showed slight differences from the radiation that was predicted by classical physics.

In 1998 another group of astronomers from the United States and Europe announced that the orbits of some artificial satellites around the earth showed the effects of frame dragging. The earth is much lighter and less dense than a neutron star, so the effects of the earth’s frame dragging are much more subtle than those of the neutron star’s frame dragging. The astronomers found that the orbits of two Italian satellites seem to shift about 2 m (about 7 ft) in the direction of the earth’s rotation every year. The US spacecraft Gravity Probe B, launched in 2004 to measure the effect directly, was expected to provide even more evidence of frame dragging around the earth.

After presenting his general theory of relativity in 1915, German-born American physicist Albert Einstein tried in vain to unify his theory of gravitation with one that would include all the fundamental forces in nature. Einstein discussed his special and general theories of relativity and his work toward a unified field theory in a 1950 Scientific American article. At the time, he was not convinced that he had discovered a valid solution capable of extending his general theory of relativity to other forces. He died in 1955, leaving this problem unsolved.

Since 1915 the theory of relativity has undergone much development and expansion by Einstein and by the British astronomers James Hopwood Jeans, Arthur Stanley Eddington, and Edward Arthur Milne, the Dutch astronomer Willem de Sitter, and the German American mathematician Hermann Weyl. Much of their work has been devoted to an effort to extend the theory of relativity to include electromagnetic phenomena in a unified field theory. Although some progress has been made in this area, these efforts have thus far met with less success, and no complete development of this application of the theory has yet been generally accepted. The astronomers mentioned above also devoted much effort to developing the cosmological consequences of the theory of relativity. Within the framework of the axioms laid down by Einstein, many lines of development are possible. Space, for example, is curved, and its exact degree of curvature in the neighbourhood of heavy bodies is known, but its curvature in empty space is not certain. Moreover, scientists disagree on whether it is a closed curve (such as a sphere) or an open curve (such as a cylinder or a bowl with sides of infinite height). The theory of relativity leads to the possibility that the universe is expanding; this is the most likely theoretical explanation of the experimentally observed fact that the spectral lines of all distant nebulae are shifted to the red, although the theory also supplies other possible explanations. The theory makes it reasonable to assume that the past history of the universe is finite, but it also leads to alternative possibilities.

Much of the later work on relativity was devoted to creating a workable relativistic quantum mechanics. A relativistic electron theory was developed in 1928 by the British mathematician and physicist Paul Dirac, and subsequently a satisfactory quantized field theory, called quantum electrodynamics, was developed, unifying the concepts of relativity and quantum theory in describing the interaction between electrons, positrons, and electromagnetic radiation. In recent years, the work of the British physicist Stephen Hawking has been devoted to an attempted full integration of quantum mechanics with relativity theory.

Einstein hated the dull regimentation and unimaginative spirit of school in Munich. When repeated business failure led the family to leave Germany for Milan, Italy, Einstein, who was then 15 years old, used the opportunity to withdraw from the school. He spent a year with his parents in Milan, and when it became clear that he would have to make his own way in the world, he finished secondary school in Aarau, Switzerland, and entered the Swiss Federal Institute of Technology in Zürich. Einstein did not enjoy the methods of instruction there. He often cut classes and used the time to study physics on his own or to play his beloved violin. He passed his examinations and graduated in 1900 by studying the notes of a classmate. His professors did not think highly of him and would not recommend him for a university position.

For two years Einstein worked as a tutor and substitute teacher. In 1902 he secured a position as an examiner in the Swiss patent office in Bern. In 1903 he married Mileva Maric, who had been his classmate at the polytechnic. They had two sons but eventually divorced. Einstein later remarried.

German-born American physicist Albert Einstein’s elegant equation E = mc² predicted that energy could be converted to matter. Using a linear accelerator and high-energy laser light, physicists have done just that.

In 1905 Einstein received his doctorate from the University of Zürich for a theoretical dissertation on the dimensions of molecules, and he also published three theoretical papers of central importance to the development of 20th-century physics. In the first of these papers, on Brownian motion, he made significant predictions about the motion of particles that are randomly distributed in a fluid. These predictions were later confirmed by experiment.

The second paper, on the photoelectric effect, contained a revolutionary hypothesis concerning the nature of light. Einstein not only proposed that under certain circumstances light can be considered as consisting of particles, but he also hypothesized that the energy carried by any light particle, called a photon, is proportional to the frequency of the radiation. The formula for this is E = hν, where E is the energy of the radiation, h is a universal constant known as Planck’s constant, and ν is the frequency of the radiation. This proposal—that the energy contained within a light beam is transferred in individual units, or quanta—contradicted a hundred-year-old tradition of considering light energy a manifestation of continuous processes. Virtually no one accepted Einstein’s proposal. In fact, when the American physicist Robert Andrews Millikan experimentally confirmed the theory almost a decade later, he was surprised and somewhat disquieted by the outcome.
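
The photon energy formula is easy to evaluate. A sketch for green light, assuming a frequency of 6e14 Hz:

    # Photon energy E = h * nu for green light of assumed frequency.
    h = 6.626e-34          # Planck's constant, J*s
    nu = 6.0e14            # frequency of green light, Hz

    E = h * nu
    print(f"E = {E:.3e} J = {E / 1.602e-19:.2f} eV")   # ~2.5 eV per photon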

Einstein, whose prime concern was to understand the nature of electromagnetic radiation, subsequently urged the development of a theory that would be a fusion of the wave and particle models for light. Again, very few physicists understood or were sympathetic to these ideas.

Einstein’s third major paper in 1905, ‘On the Electrodynamics of Moving Bodies,’ contained what became known as the special theory of relativity. Since the time of the English mathematician and physicist Sir Isaac Newton, natural philosophers (as physicists and chemists were known) had been trying to understand the nature of matter and radiation, and how they interacted in some unified world picture. The position that mechanical laws are fundamental has become known as the mechanical world view, and the position that electrical laws are fundamental has become known as the electromagnetic world view. Neither approach, however, is capable of providing a consistent explanation for the way radiation (light, for example) and matter interact when viewed from different inertial frames of reference, that is, an interaction viewed simultaneously by an observer at rest and an observer moving at uniform speed.

In the spring of 1905, after considering these problems for ten years, Einstein realized that the crux of the problem lay not in a theory of matter but in a theory of measurement. At the heart of his special theory of relativity was the realization that all measurements of time and space depend on judgments as to whether two distant events occur simultaneously. This led him to develop a theory based on two postulates: the principle of relativity, that physical laws are the same in all inertial reference systems, and the principle of the invariance of the speed of light, that the speed of light in a vacuum is a universal constant. He was thus able to provide a consistent and correct description of physical events in different inertial frames of reference without making special assumptions about the nature of matter or radiation, or how they interact. Virtually no one understood Einstein’s argument.

The difficulty that others had with Einstein’s work was not because it was too mathematically complex or technically obscure; the problem resulted, rather, from Einstein’s beliefs about the nature of good theories and the relationship between experiment and theory. Although he maintained that the only source of knowledge is experience, he also believed that scientific theories are the free creations of a finely tuned physical intuition and that the premises on which theories are based cannot be connected logically to experiment. A good theory, therefore, is one in which a minimum number of postulates is required to account for the physical evidence. This sparseness of postulates, a feature of all Einstein’s work, was what made his work so difficult for colleagues to comprehend, let alone support.

Einstein did have important supporters, however. His chief early patron was the German physicist Max Planck. Einstein remained at the patent office for four years after his star began to rise within the physics community. He then moved rapidly upward in the German-speaking academic world; his first academic appointment was in 1909 at the University of Zürich. In 1911 he moved to the German-speaking university at Prague, and in 1912 he returned to the Swiss National Polytechnic in Zürich. Finally, in 1914, he was appointed director of the Kaiser Wilhelm Institute for Physics in Berlin.

Even before he left the patent office in 1907, Einstein began work on extending and generalizing the theory of relativity to all coordinate systems. He began by enunciating the principle of equivalence, a postulate that gravitational fields are equivalent to accelerations of the frame of reference. For example, people in a moving elevator cannot, in principle, decide whether the force that acts on them is caused by gravitation or by a constant acceleration of the elevator. The full general theory of relativity was not published until 1916. In this theory the interactions of bodies, which heretofore had been ascribed to gravitational forces, are explained as the influence of bodies on the geometry of space-time (four-dimensional space, a mathematical abstraction, having the three dimensions from Euclidean space and time as the fourth dimension).

On the basis of the general theory of relativity, Einstein accounted for the previously unexplained variations in the orbital motion of the planets and predicted the bending of starlight in the vicinity of a massive body such as the sun. The confirmation of this latter phenomenon during an eclipse of the sun in 1919 became a media event, and Einstein’s fame spread worldwide.

For the rest of his life Einstein devoted considerable time to generalizing his theory even more. His last effort, the unified field theory, which was not entirely successful, was an attempt to understand all physical interactions—including electromagnetic interactions and weak and strong interactions—in terms of the modification of the geometry of space-time between interacting entities.

After 1919, Einstein became internationally renowned. He accrued honours and awards, including the Nobel Prize in physics in 1921, from various world scientific societies. His visit to any part of the world became a national event; photographers and reporters followed him everywhere. While regretting his loss of privacy, Einstein capitalized on his fame to further his own political and social views.

German-born physicist Albert Einstein became an avowed pacifist during World War I (1914-1918) and continued to speak out for antiwar efforts throughout his life, although he renounced pacifism in the 1930s in the face of the threat to humanity posed by Nazi Germany. In this message, written from Berlin, Germany, in 1931, Einstein stresses the importance of the upcoming World Disarmament Conference, held in 1932. The conference did not produce any substantive agreements, however, and Einstein left Germany in 1933 when Nazi leader Adolf Hitler came to power. 

The two social movements that received his full support were pacifism and Zionism. During World War I he was one of a handful of German academics willing to publicly decry Germany’s involvement in the war. After the war his continued public support of pacifist and Zionist goals made him the target of vicious attacks by anti-Semitic and right-wing elements in Germany. Even his scientific theories were publicly ridiculed, especially the theory of relativity.

When Hitler came to power, Einstein immediately decided to leave Germany for the United States. He took a position at the Institute for Advanced Study at Princeton, New Jersey. While continuing his efforts on behalf of world Zionism, Einstein renounced his former pacifist stand in the face of the awesome threat to humankind posed by the Nazi regime in Germany.

In 1939 Einstein collaborated with several other physicists in writing a letter to President Franklin D. Roosevelt, pointing out the possibility of making an atomic bomb and the likelihood that the German government was embarking on such a course. The letter, which bore only Einstein’s signature, helped lend urgency to efforts in the US to build the atomic bomb, but Einstein himself played no role in the work and knew nothing about it at the time.

After the war, Einstein was active in the cause of international disarmament and world government. He continued his active support of Zionism but declined the offer made by leaders of the state of Israel to become president of that country. In the US during the late 1940s and early ‘50s he spoke out on the need for the nation’s intellectuals to make any sacrifice necessary to preserve political freedom. Einstein died in Princeton on April 18, 1955.

Einstein’s efforts on behalf of social causes have sometimes been viewed as unrealistic. In fact, his proposals were always carefully thought out. Like his scientific theories, they were motivated by sound intuition based on a shrewd and careful assessment of evidence and observation. Although Einstein gave much of himself to political and social causes, science always came first, because, he often said, only the discovery of the nature of the universe would have lasting meaning. His writings include Relativity: The Special and General Theory (1916); About Zionism (1931); Builders of the Universe (1932); Why War? (1933), with Sigmund Freud; The World as I See It (1934); The Evolution of Physics (1938), with the Polish physicist Leopold Infeld; and Out of My Later Years (1950). Einstein’s collected papers are being published in a multi-volume work, beginning in 1987.

Time is distorted in regions of large masses, such as stars and black holes. In Einstein’s general theory of relativity, which was introduced in 1916, the very existence of time depends on the presence of space. Einstein’s general theory explains how gravity warps and slows time and why time moves very slightly slower in regions of high gravity, such as near stars, compared to regions of lesser gravity, such as on planets. This time-slowing effect becomes pronounced in regions of extremely high gravity, such as near black holes.

Geologists—scientists who study Earth—use the geologic time scale to measure spans of time in the 4.5-billion-year history of Earth. This time scale measures blocks of time and is important for understanding the biological and geologic history—and evolution—of Earth. The longest blocks of time, eons, are divided into shorter blocks called eras. Eras are divided into periods, which are made up of epochs.

Many organisms exhibit biological rhythms. These are periodic biological fluctuations—changes in sleep patterns or hibernation patterns, for example—that occur in response to periodic environmental changes such as the cycles of night and day, darkness and light, and winter and summer. Organisms use biological clocks—such as circadian, or daily, rhythms—to remain in harmony with the cycles of day and night and the seasons.

Philosophers have long argued about the nature of time. Some philosophers, notably German philosopher Immanuel Kant, have proposed that the experience of time is innate, so that even newborn babies experience its passage. Others have proposed that the human mind must learn to construct time. For example, French philosopher Henri Bergson thought of time as something entirely derived from experience. In Bergson's doctoral dissertation, Time and Free Will (1889; translated 1910), he proposed that time is a matter of subjective experience. According to Bergson, an infant would not experience time directly but rather would have to learn how to experience it.

Time is not a physical constant. Motion and gravity affect time, dilating (slowing) it by stretching out its duration. In 1905 Albert Einstein described the effect of motion on time in his special theory of relativity. In 1916 he described the effect of gravity on time in his general theory of relativity.

Time dilation effects due to motion were experimentally observed in the early 1970s. Researchers placed atomic clocks on commercial airliners and observed the expected changes in time as measured by those clocks relative to similar clocks on the ground. In particular, when the planes travelled east, in the direction of Earth’s rotation, the clocks on the airliners were 59 nanoseconds (59 billionths of a second) slow relative to the atomic clocks on the ground. When the aeroplanes travelled west, the clocks were 273 nanoseconds fast. This asymmetry arises from the rotation of Earth, which produces an additional time dilation. If the effect of Earth's rotation is removed, the time dilation produced by the motion of the airliners confirms Einstein's theory of how time changes with motion, as the dilation is in agreement with predictions made by the theory.
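
The order of magnitude is easy to check. For speeds far below c, the fractional slowing predicted by special relativity is approximately v²/(2c²). A sketch with an assumed airliner speed of 250 m/sec over a 48-hour flight, ignoring the gravitational and Earth-rotation contributions present in the real experiment:

    # Special-relativistic time dilation for an airliner; the speed
    # and flight duration below are assumed illustrative values.
    c = 2.998e8
    v = 250.0                # airliner speed, m/s
    t = 48 * 3600.0          # flight time, s

    # For v << c, the fractional slowing is approximately v^2 / (2 c^2).
    delta = t * v**2 / (2 * c**2)
    print(f"clock offset: {delta * 1e9:.0f} nanoseconds")   # tens of ns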

Time dilation effects due to gravity have been experimentally verified in many ways. For example, time on the Sun's surface runs about two parts in a million slower than on Earth because of the Sun's much higher gravity. In 1968 American physicist Irwin Shapiro confirmed this effect when he showed that radar signals and their reflections from planets are delayed when the Sun is near the path of the signals.
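
The two-parts-in-a-million figure follows from the approximate fractional slowing GM/(Rc²) at the surface of a body of mass M and radius R. A sketch with standard solar values:

    # Gravitational time dilation at the sun's surface relative to a
    # distant observer.
    G = 6.674e-11
    M = 1.989e30
    c = 2.998e8
    R = 6.96e8

    slowing = G * M / (R * c**2)
    print(f"fractional slowing: {slowing:.2e}")   # ~2e-6, two parts per million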

Einstein's original theory, formulated in 1905 and known as the special theory of relativity, was limited to frames of reference moving at constant velocity relative to each other. In 1915, he generalized his hypothesis to formulate the general theory of relativity that applied to systems that accelerate with reference to each other. This extension showed gravitation to be a consequence of the geometry of space-time and predicted the bending of light in its passage close to a massive body like a star, an effect first observed in 1919. General relativity, although less firmly established than the special theory, has deep significance for an understanding of the structure of the universe and its evolution.

Nobody really understands quantum physics, says scientist John Gribbin. Even to advanced physicists, the question of why subatomic particles can act as both waves and particles is still a puzzle. But the classic 19th-century ‘experiment with two holes’ is still the best way to illustrate how they behave that way. Gribbin’s simple explanation of the experiment illuminates why quantum mechanics, which provides the basis for modern physics and the scientific understanding of the structure of matter, still challenges common sense.

The quandary posed by the observed spectra emitted by solid bodies was first explained by the German physicist Max Planck. According to classical physics, all molecules in a solid can vibrate, with the amplitude of the vibrations directly related to the temperature. All vibration frequencies should be possible, and the thermal energy of the solid should be continuously convertible into electromagnetic radiation as long as energy is supplied. Planck made a radical assumption by postulating that the molecular oscillator could emit electromagnetic waves only in discrete bundles, now called quanta, or photons. Each photon has a characteristic wavelength in the spectrum and an energy E given by E = hf, where f is the frequency of the wave. The wavelength λ is related to the frequency by λf = c, where c is the speed of light. With the frequency specified in hertz (Hz), or cycles per second, h, now known as Planck's constant, is extremely small (6.626 × 10⁻²⁷ erg-sec). With his theory, Planck reintroduced a partial duality into the theory of light, which for nearly a century had been considered to be wavelike only.

A false-colour computer image shows the diffraction pattern generated by electrons that have been scattered by passing through an alloy of titanium and nickel. The pattern reveals two characteristic properties of waves, diffraction and interference, showing that electrons can behave like waves as well as like particles.

If electromagnetic radiation of appropriate wavelength falls upon suitable metals, negative electric charges, later identified as electrons, are ejected from the metal surface. The important aspects of this phenomenon are the following: (1) the energy of each photoelectron depends only on the frequency of the illumination and not on its intensity; (2) the rate of electron emission depends only on the illuminating intensity and not on the frequency (provided that the minimum frequency to cause emission is exceeded); and (3) the photoelectrons emerge as soon as the illumination hits the surface. These observations, which could not be explained by Maxwell's electromagnetic theory of light, led Einstein to assume in 1905 that light can be absorbed only in quanta or photons, and that the photon completely vanishes in the absorption process, with all of its energy E (=hf) going to one electron in the metal. With this simple assumption Einstein extended Planck's quantum theory to the absorption of electromagnetic radiation, giving additional importance to the wave-particle duality of light. It was for this work that Einstein was awarded the 1921 Nobel Prize in physics.
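
Einstein’s relation can be made concrete with a short sketch. The 2.3 eV work function below (roughly that of sodium) is an assumed value; note that below the threshold frequency no electrons emerge, however intense the illumination:

    # Photoelectric effect: each absorbed photon gives its whole
    # energy hf to one electron, which must pay the work function W
    # to escape the metal.
    h = 6.626e-34            # Planck's constant, J*s
    eV = 1.602e-19           # joules per electronvolt
    W = 2.3 * eV             # assumed work function, J

    for f in (4.0e14, 6.0e14, 1.0e15):   # illumination frequencies, Hz
        E = h * f
        if E > W:
            print(f"{f:.0e} Hz: electrons ejected with {(E - W) / eV:.2f} eV")
        else:
            print(f"{f:.0e} Hz: below threshold, no emission at any intensity")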

X rays, the very penetrating rays first discovered by Roentgen, were shown to be electromagnetic radiation of very short wavelength in 1912 by the German physicist Max Theodor Felix von Laue and his coworkers. The precise mechanism of X-ray production was shown to be a quantum effect, and in 1914 the British physicist Henry Gwyn Jeffreys Moseley used his X-ray spectrograms to prove that the atomic number of an element, and hence the number of positive charges in an atom, is the same as its position in the periodic table. The photon theory of electromagnetic radiation was further strengthened and developed by the prediction and observation of the so-called Compton effect by the American physicist Arthur Holly Compton in 1923.

That electric charges were carried by extremely small particles had already been suspected in the 19th century and, as indicated by electrochemical experiments, the charge of these elementary particles was a definite, invariant quantity. Experiments on the conduction of electricity through low-pressure gases led to the discovery of two kinds of rays: cathode rays, coming from the negative electrode in a gas discharge tube, and positive or canal rays from the positive electrode. Sir Joseph John Thomson's 1897 experiment measured the ratio of the charge q to the mass m of the cathode-ray particles. Lenard in 1899 confirmed that the ratio of q to m for photoelectric particles was identical to that of cathode rays. The American inventor Thomas Alva Edison had noted in 1883 that very hot wires emit electricity, a phenomenon called the Edison effect and now known as thermionic emission, and in 1899 Thomson showed that this form of electricity also consisted of particles with the same q to m ratio as the others. About 1911 Millikan finally determined that electric charge always arises in multiples of a basic unit e, and measured the value of e, now known to be 1.602 × 10⁻¹⁹ coulombs. From the measured value of the q to m ratio, with q set equal to e, the mass of the carrier, called the electron, could now be determined as 9.110 × 10⁻³¹ kg.
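
Combining the two measurements gives the electron’s mass directly; a sketch with modern values:

    # The electron's mass from Millikan's measured charge e and
    # Thomson's charge-to-mass ratio.
    e = 1.602e-19            # elementary charge, coulombs
    q_over_m = 1.759e11      # cathode-ray charge-to-mass ratio, C/kg

    m = e / q_over_m
    print(f"electron mass: {m:.3e} kg")   # ~9.11e-31 kg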

Finally, Thomson and others showed that the positive rays also consisted of particles, each carrying a charge e, but of the positive variety. These particles, however, now recognized as positive ions resulting from the removal of an electron from a neutral atom, are much more massive than the electron. The smallest, the hydrogen ion, is a single proton with a mass of 1.673 × 10⁻²⁷ kg, about 1836 times more massive than the electron. The ‘quantized’ nature of electric charge was now firmly established and, at the same time, two of the fundamental subatomic particles had been identified.

Experimental data have been the impetus behind the creation and dismissal of physical models of the atom. Rutherford's model, in which electrons move around a tightly packed, positively charged nucleus, successfully explained the results of scattering experiments but was unable to explain discrete atomic emission—that is, why atoms emit only certain wavelengths of light. Bohr began with Rutherford’s model, but then postulated further that electrons can only move in certain quantized orbits; this model was able to explain certain qualities of discrete emission for hydrogen, but failed completely for other elements. Schrödinger’s model, in which electrons are described not by the paths they take but by the regions where they are most likely to be found, can explain certain qualities of emission spectra for all elements; however, further refinements of the model, made throughout the 20th century, have been needed to explain all observable spectral phenomena.

In 1911 the New Zealand-born British physicist Ernest Rutherford, making use of the newly discovered radiations from radioactive nuclei, found Thomson's earlier model of an atom with uniformly distributed positively and negatively charged particles to be untenable. The very fast, massive, positively charged alpha particles he employed were found to deflect sharply in their passage through matter. This effect required an atomic model with a heavy positive scattering centre. Rutherford then suggested that the positive charge of an atom was concentrated in a massive stationary nucleus, with the negative electrons moving in orbits about it, held by the electric attraction between opposite charges. This solar-system-like atomic model, however, could not persist according to Maxwell's theory, in which the revolving electrons should emit electromagnetic radiation and force a total collapse of the system in a very short time.

Another sharp break with classical physics was required at this point. It was provided by the Danish physicist Niels Henrik David Bohr, who postulated the existence within atoms of certain specified orbits in which electrons could revolve without emitting electromagnetic radiation. These allowed orbits, or so-called stationary states, are determined by the condition that the angular momentum J of the orbiting electron must be a positive integral multiple of Planck's constant divided by 2π, that is, J = nh/2π, where the quantum number n may have any positive integer value. This extended ‘quantization’ to dynamics, fixed the possible orbits, and allowed Bohr to calculate their radii and the corresponding energy levels. In 1914 the model was confirmed experimentally by the German-born American physicist James Franck and the German physicist Gustav Hertz.

Bohr developed his model much further. He explained how atoms radiate light and other electromagnetic waves, and also proposed that an electron ‘lifted’ by a sufficient disturbance of the atom from the orbit of smallest radius and least energy (the ground state) into another orbit, would soon ‘fall’ back to the ground state. This falling back is accompanied by the emission of a single photon of energy E = hf, where E is the difference in energy between the higher and lower orbits. Each orbit shift emits a characteristic photon of sharply defined frequency and wavelength; one photon would thus be emitted in a direct shift from the n = 3 to the n = 1 orbit, and it would be quite different from the two photons emitted in a sequential shift from the n = 3 to the n = 2 orbit, and then from there to the n = 1 orbit. This model allowed Bohr to account with great accuracy for the simplest atomic spectrum, that of hydrogen, which had defied classical physics.
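
A sketch of these shifts, using the Bohr-model energy levels of hydrogen, E_n = -13.6 eV/n², and the relations E = hf and λf = c quoted earlier:

    # Bohr-model energy levels of hydrogen and the photons emitted
    # in the direct and sequential shifts described above.
    h = 6.626e-34          # Planck's constant, J*s
    c = 2.998e8            # speed of light, m/s
    eV = 1.602e-19         # joules per electronvolt

    def energy(n):
        return -13.6 * eV / n**2

    def wavelength(n_hi, n_lo):
        E = energy(n_hi) - energy(n_lo)     # photon energy, E = hf
        return h * c / E                    # metres, since lambda * f = c

    print(f"n=3 -> n=1: {wavelength(3, 1) * 1e9:.1f} nm")   # one ultraviolet photon
    print(f"n=3 -> n=2: {wavelength(3, 2) * 1e9:.1f} nm")   # a visible photon...
    print(f"n=2 -> n=1: {wavelength(2, 1) * 1e9:.1f} nm")   # ...then an ultraviolet one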

Although Bohr's model was extended and refined, it could not explain observations for atoms with more than one electron. It could not even account for the intensity of the spectral colours of the simple hydrogen atom. Because it had no more than a limited ability to predict experimental results, it remained unsatisfactory for theoretical physicists.

In the 20th century, physicists discovered that matter behaved as both a wave and a particle. Austrian physicist and Nobel Prize winner Erwin Schrödinger discussed this apparent paradox in a lecture in Geneva, Switzerland, in 1952. A condensed and translated version of his lecture appeared in Scientific American the following year.

Within a few years, roughly between 1924 and 1930, an entirely new theoretical approach to dynamics was developed to account for subatomic behaviour. Named quantum mechanics or wave mechanics, it started with the suggestion in 1923 by the French physicist Louis Victor, Prince de Broglie, that not only electromagnetic radiation but matter could also have wave as well as particle aspects. The wavelength of the so-called matter waves associated with a particle is given by the equation λ = h/mv, where m is the particle mass and v its velocity. Matter waves were conceived of as pilot waves guiding the particle motion, a property that should result in diffraction under suitable conditions. This was confirmed in 1927 by the experiments on electron-crystal interactions by the American physicists Clinton Joseph Davisson and Lester Halbert Germer and the British physicist Sir George Paget Thomson. Subsequently, Werner Heisenberg, Max Born, and Ernst Pascual Jordan of Germany and the Austrian physicist Erwin Schrödinger developed Broglie's idea into a mathematical form capable of dealing with a number of physical phenomena and with problems that could not be handled by classical physics. In addition to confirming Bohr's postulate regarding the quantization of energy levels in atoms, quantum mechanics now provides an understanding of the most complex atoms, and has also been a guiding spirit in nuclear physics. Although quantum mechanics is usually needed only on the microscopic level (with Newtonian mechanics still satisfactory for macroscopic systems), certain macroscopic effects, such as the properties of crystalline solids, also exist that can only be satisfactorily explained by principles of quantum mechanics.
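
A sketch evaluating this wavelength for electrons of the kind used in the Davisson-Germer experiment; the speed below, corresponding to roughly 54 eV, is an assumed value:

    # De Broglie wavelength lambda = h / (m v) for a slow electron.
    h = 6.626e-34        # Planck's constant, J*s
    m = 9.11e-31         # electron mass, kg
    v = 4.4e6            # assumed electron speed, m/s (about 54 eV)

    wavelength = h / (m * v)
    print(f"lambda = {wavelength * 1e9:.2f} nm")   # ~0.17 nm, comparable
                                                   # to crystal-lattice spacing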

Going beyond Broglie's notion of the wave-particle duality of matter, additional important concepts have since been incorporated into the quantum-mechanical picture. These include the discovery that electrons must have some permanent magnetism and, with it, an intrinsic angular momentum, or spin, as a fundamental property. Spin was subsequently found in almost all other elementary particles. In 1925 the Austrian physicist Wolfgang Pauli expounded the exclusion principle, which states that in an atom no two electrons can have precisely the same set of quantum numbers. Four quantum numbers are needed to specify completely the state of an electron in an atom. The exclusion principle is vital for an understanding of the structure of the elements and of the periodic table. Heisenberg in 1927 put forth the uncertainty principle, which asserted the existence of a natural limit to the precision with which certain pairs of physical quantities can be known simultaneously.

Finally, a synthesis of quantum mechanics and relativity was made in 1928 by the British mathematical physicist Paul Adrien Maurice Dirac, leading to the prediction of the existence of the positron and bringing the development of quantum mechanics to a culmination.

Largely as a result of Bohr's ideas, a different and statistical approach developed in modern physics. The fully deterministic cause-effect relations produced by Newtonian mechanics were supplanted by predictions of future events in terms of statistical probability only. Thus, the wave property of matter also implies that, in accordance with the uncertainty principle, the motion of the particles can never be predicted with absolute certainty even if the forces are known completely. Although this statistical aspect plays no detectable role in macroscopic motions, it is dominant on the molecular, atomic, and subatomic scale.

The understanding of atomic structure was also facilitated by Becquerel's discovery in 1896 of radioactivity in uranium ore. Within a few years radioactive radiation was found to consist of three types of emissions: alpha rays, later found by Rutherford to be the nuclei of helium atoms; beta rays, shown by Becquerel to be very fast electrons; and gamma rays, identified later as very short wavelength electromagnetic radiation. In 1898 the French physicists Marie and Pierre Curie separated two highly radioactive elements, radium and polonium, from uranium ore, thus showing that radiations could be identified with particular elements. By 1903 Rutherford and the British physical chemist Frederick Soddy had shown that the emission of alpha or beta rays resulted in the transmutation of the emitting element into a different one. Radioactive processes were shortly thereafter found to be completely statistical; no method exists that could indicate which atom in a radioactive material will decay at any one time. These developments, in addition to leading to Rutherford's and Bohr's model of the atom, also suggested that alpha, beta, and gamma rays could only come from the nuclei of very heavy atoms. In 1919 Rutherford bombarded nitrogen with alpha particles and converted it to hydrogen and oxygen, thus producing the first artificial transmutation of elements.
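
The statistical character of decay is easy to illustrate numerically. A minimal sketch (the per-step decay probability is an arbitrary illustrative value, not that of any particular isotope):

```python
import random

# Each atom decays independently with probability p per time step; which
# atom decays is unpredictable, but the total follows a smooth exponential.
random.seed(1)
atoms, p = 100_000, 0.01
for step in range(1, 6):
    atoms = sum(1 for _ in range(atoms) if random.random() > p)
    print(f"step {step}: {atoms} atoms remain")   # ~1% fewer each step
```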

Meanwhile, a knowledge of the nature and abundance of isotopes was growing, largely through the development of the mass spectrograph. A model emerged in which the nucleus contained all the positive charge and almost all the mass of the atom. The nuclear-charge carriers were identified as protons, but except for hydrogen, the nuclear mass could be accounted for only if some additional uncharged particles were present. In 1932 the British physicist Sir James Chadwick discovered the neutron, an electrically neutral particle of mass 1.675 × 10⁻²⁷ kg, slightly more than that of the proton. Now nuclei could be understood as consisting of protons and neutrons, collectively called nucleons, and the atomic number of the element was simply the number of protons in the nucleus. On the other hand, the isotope number, also called the atomic mass number, was the sum of the neutrons and protons present. Thus, all atoms of oxygen (atomic no. 8) have eight protons, but the three isotopes of oxygen, O¹⁶, O¹⁷, and O¹⁸, also contain within their respective nuclei eight, nine, or ten neutrons.
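
This nucleon bookkeeping is a one-line subtraction. A minimal sketch:

```python
# Mass number = protons + neutrons, so neutrons = mass number - atomic number.
ATOMIC_NUMBER = {"oxygen": 8}
for mass_number in (16, 17, 18):
    neutrons = mass_number - ATOMIC_NUMBER["oxygen"]
    print(f"oxygen-{mass_number}: 8 protons, {neutrons} neutrons")
```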

Positive electric charges repel each other, and because atomic nuclei (except for hydrogen) have more than one proton, they would fly apart except for a strong attractive force, called the nuclear force, or strong interaction, that binds the nucleons to each other. The energy associated with this strong force is very great, millions of times greater than the energies characteristic of electrons in their orbits or of chemical binding. An escaping alpha particle (consisting of two protons and two neutrons), therefore, must overcome this strong interaction to leave a radioactive nucleus such as uranium. This apparent paradox was explained in 1928 by the Russian-born physicist George Gamow and, independently, by the American physicist Edward U. Condon and the British physicist Ronald Wilfred Gurney, who applied quantum mechanics to the problem of alpha emission and showed that the statistical nature of nuclear processes allowed alpha particles to ‘leak’ out of radioactive nuclei, even though their average energy was insufficient to overcome the nuclear force. Beta decay was explained as the result of a neutron disruption within the nucleus, the neutron changing into an electron (the beta particle), which is promptly ejected, and a residual proton. The extra proton leaves the ‘daughter’ nucleus with one more proton than its ‘parent’ and thus raises its atomic number and its position in the periodic table. Alpha or beta emission usually leaves the nucleus with excess energy, which it unloads by emitting a gamma-ray photon.
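
The ‘leak’ is quantum tunnelling, whose probability falls off exponentially with the width and height of the energy barrier. A crude one-dimensional square-barrier estimate (the barrier height and width below are assumed round numbers, not a real uranium calculation):

```python
import math

# WKB-style estimate: transmission through a barrier that tops the particle's
# energy by (V - E) over width d is roughly exp(-2 * kappa * d), where
# kappa = sqrt(2 * m * (V - E)) / hbar.
hbar = 1.055e-34              # J*s
m_alpha = 6.64e-27            # kg, alpha-particle mass
v_minus_e = 10e6 * 1.602e-19  # J; assume the barrier tops the alpha energy by 10 MeV
d = 1.0e-14                   # m; assumed barrier width, roughly nuclear scale
kappa = math.sqrt(2 * m_alpha * v_minus_e) / hbar
print(f"transmission per attempt ~ {math.exp(-2 * kappa * d):.1e}")
# Tiny per-attempt odds; an alpha striking the barrier ~1e21 times per second
# can still escape within an observable lifetime.
```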

In all these nuclear processes a large amount of energy is released; its magnitude is given by Einstein's equation E = mc². After the process is over, the total mass of the products is less than that of the parent, with the mass difference appearing as energy.
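
The scale of the effect follows directly from the equation. A minimal sketch (one gram of mass difference, chosen arbitrarily):

```python
# E = m * c^2: energy equivalent of a one-gram mass difference.
c = 2.998e8        # m/s
delta_m = 1.0e-3   # kg, an arbitrary illustrative mass difference
print(f"E = {delta_m * c**2:.2e} J")   # ~9e13 J, roughly a large power
                                       # station's output for a day
```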

British physicist Clifford Johnson, one of the world’s leading theoreticians in the field of elementary particle physics, points to the mathematical patterns that underlie the physical world as a way of inspiring curiosity about science and mathematics. In answering a variety of questions about physics, ranging from chaos theory to the question of whether humans can travel at the speed of light, Johnson lucidly explains how the physical world works.

The rapid expansion of physics in the last few decades was made possible by the fundamental developments during the first third of the century, coupled with recent technological advances, particularly in computer technology, electronics, nuclear-energy applications, and high-energy particle accelerators.

Cosmic rays are not ‘rays’ at all, but high-energy subatomic particles from space that continuously bombard the earth’s atmosphere. In this 1998 article from Scientific American, Nobel-Prize-winning physicist James W. Cronin and physicists Thomas K. Gaisser and Simon P. Swordy detailed recent efforts by scientists to discover the sources of ultrahigh-energy cosmic rays. The authors presented current theories about the origin of the majority of cosmic rays, and of the more mysterious ultrahigh-energy cosmic rays, which are believed to originate outside our galaxy.

Van de Graaff generators can build up very high charges of static electricity. As this girl touches a generator, her body becomes charged along with the generator. The charged strands of her hair repel each other, causing her hair to stand on end.

Rutherford and other early investigators of nuclear properties were limited to the use of high-energy emissions from naturally radioactive substances to probe the atom. The first artificial high-energy emissions were produced in 1932 by the British physicist Sir John Douglas Cockcroft and the Irish physicist Ernest Thomas Sinton Walton, who used high-voltage generators to accelerate protons to about 700,000 eV and to bombard lithium with them, transmuting it into helium. One electron volt is the energy gained by an electron when the accelerating voltage is 1 V; it is equivalent to about 1.6 × 10⁻¹⁹ joule (J). Modern accelerators produce energies measured in million electron volts (usually written mega-electron volts, or MeV), billion electron volts (giga-electron volts, or GeV), or trillion electron volts (tera-electron volts, or TeV). Higher-voltage sources were first made possible by the invention, also in 1932, of the Van de Graaff generator by the American physicist Robert J. Van de Graaff.
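
The unit conversions in this paragraph are simple multiplications. A minimal sketch:

```python
# 1 eV = 1.602e-19 J; MeV, GeV, and TeV are 1e6, 1e9, and 1e12 eV.
EV_TO_J = 1.602e-19
for label, ev in [("Cockcroft-Walton protons", 7.0e5),
                  ("1 MeV", 1.0e6),
                  ("1 GeV", 1.0e9),
                  ("1 TeV", 1.0e12)]:
    print(f"{label}: {ev * EV_TO_J:.2e} J")
```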

This was followed almost immediately by the invention of the cyclotron by the American physicists Ernest Orlando Lawrence and Milton Stanley Livingston. The cyclotron uses a magnetic field to bend the trajectories of charged particles into circles, and during each half-revolution the particles are given a small electric ‘kick’ until they accumulate the high energy level desired. Protons could be accelerated to about 10 MeV by a cyclotron, but higher energies had to await the development of the synchrotron after the end of World War II (1939-1945), based on the ideas of the American physicist Edwin Mattison McMillan and the Soviet physicist Vladimir I. Veksler. After World War II, accelerator design made rapid progress, and accelerators of many types were built, producing high-energy beams of electrons, protons, deuterons, heavier ions, and X rays. For example, the accelerator at the Stanford Linear Accelerator Centre (SLAC) in Stanford, California, accelerates electrons down a straight ‘runway,’ 3.2 km (2 mi) long, at the end of which they attain an energy of more than 20 GeV.
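
The cyclotron's geometry follows from the magnetic force on a moving charge: the orbit radius is r = mv/qB and the revolution frequency is f = qB/2πm. A rough sketch (the field strength is an assumed illustrative value; the 10-MeV final energy is the figure quoted above):

```python
import math

# Orbit radius r = m*v / (q*B); revolution frequency f = q*B / (2*pi*m),
# independent of speed for non-relativistic particles.
q = 1.602e-19   # C, proton charge
m = 1.673e-27   # kg, proton mass
B = 1.5         # T, assumed magnet strength
E = 10e6 * 1.602e-19          # J, the 10-MeV energy quoted above
v = math.sqrt(2 * E / m)      # non-relativistic speed
print(f"frequency: {q * B / (2 * math.pi * m) / 1e6:.1f} MHz")
print(f"final orbit radius: {m * v / (q * B):.2f} m")
```

The speed-independence of the frequency is what lets a fixed-frequency electric ‘kick’ stay in step with the particles as they spiral outward.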

The big circle marks the location of the Large Hadron Collider (LHC) at CERN, the European particle physics laboratory near Geneva. The tunnel in which the particles are accelerated is located 100 m (330 ft) underground and is 27 km (16.7 mi) in circumference. The smaller circle is the site of the smaller proton-antiproton collider. The border of France and Switzerland bisects the CERN site and the two accelerator rings.

While lower-energy accelerators are used in various applications in industry and laboratories, the most powerful ones are used in studying the structure of elementary particles, the fundamental building blocks of nature. In such studies elementary particles are broken up by hitting them with beams of projectiles that are usually protons or electrons. The distribution of the fragments yields information on the structure of the elementary particles.

To obtain more detailed information in this manner, more energetic projectiles are necessary. Since the acceleration of a projectile is achieved by ‘pushing’ it from behind, obtaining more energetic projectiles requires pushing for a longer time, which is why high-energy accelerators are generally larger. The highest beam energy reached at the end of World War II was less than 100 MeV. A bigger accelerator, reaching 3 GeV, was built in the early 1950s at the Brookhaven National Laboratory at Upton, New York. A breakthrough in accelerator design occurred with the introduction of the strong focusing principle in 1952 by the American physicists Ernest D. Courant, Livingston, and Hartland S. Snyder. The world's largest accelerators have been or are being built to produce beams of protons with energies beyond 1 TeV; two are located at the Fermi National Accelerator Laboratory, near Batavia, Illinois, and at the European Organization for Nuclear Research, known as CERN, in Geneva, Switzerland.

A Geiger counter is a device used by scientists and surveyors for detecting the presence and intensity of radiation. The tube is filled with low-pressure gas and acts as an ionization chamber. An electronic circuit maintains a strong electric field between a fine wire in the centre of the tube and its walls. When ionizing radioactive particles enter the tube and collide with gas atoms, they ionize the gas, producing free electrons. These electrons flow to the centre wire and create an electrical pulse, which is amplified and counted electronically. When radiation is detected, the Geiger counter produces a clicking, static-like sound.

These tracks were formed by elementary particles in a bubble chamber at the CERN facility located outside of Geneva, Switzerland. By examining these tracks, physicists can determine certain properties of particles that travelled through the bubble chamber. For example, a particle's charge can be determined by noting the type of path the particle followed. The bubble chamber is placed within a magnetic field, which causes a positively charged particle's track to curve in one direction, and a negatively charged particle's track to curve the opposite way; neutral particles, unaffected by the magnetic field, move in a straight line.

Detection and analysis of elementary particles were first accomplished through the ability of these particles to affect photographic emulsions and to energize fluorescent materials. The actual paths of ionized particles were first observed by the British physicist Charles Thomson Rees Wilson in a cloud chamber, where water droplets condensed on the ions produced by the particles during their passage. Electric or magnetic fields can be used to bend the particle paths, yielding information about their momentum and electric charges. A significant advance on the cloud chamber was the construction of the bubble chamber by the American physicist Donald Arthur Glaser in 1952. It uses a liquid, usually hydrogen, instead of air, and the ions produced by a fast particle become centres of boiling, leaving an observable bubble track. Because the density of the liquid is much higher than that of air, more interactions take place in a bubble chamber than in a cloud chamber. Furthermore, the bubbles clear out faster than water droplets, allowing more frequent cycling of the bubble chamber. A third development, the spark chamber, evolved in the 1950s. In this device, many parallel plates are kept at a high voltage in a suitable gas atmosphere. An ionizing particle passing between the plates breaks down the gas, forming sparks that delineate its path.

A different type of detector, the discharge counter, was developed early in the 20th century, largely by the German physicist Hans Wilhelm Geiger, and later improved by the German American physicist Walther Müller. Now commonly known as the Geiger-Müller counter, it is small and convenient but has been largely replaced by faster solid-state counting devices, such as the scintillation counter, developed about 1947 by the German American physicist Hartmut Paul Kallmann and others. The scintillation counter exploits the ability of ionizing particles to produce a flash of light as they pass through certain organic crystals and liquids.

Cosmic rays are extremely energetic subatomic particles that travel through outer space at nearly the speed of light. Scientists learn about deep space by studying galactic cosmic rays, which originate many light-years away (a light-year represents the distance light travels in one year). This photograph, taken in the late 1940s with a special photographic emulsion called the Kodak NT4, records a collision of a cosmic-ray particle with a particle in the film. A cosmic-ray particle produced the track that starts at the top left corner of the photograph; this particle collided with a nucleus in the centre of the photograph to create a spray of subatomic particles.

About 1911 the Austrian-American physicist Victor Franz Hess discovered cosmic radiation, rays arriving from outside the earth's atmosphere in a pattern determined by the earth's magnetic field. The rays were found to be positively charged and to consist mostly of protons with energies ranging from about 1 GeV to 10¹¹ GeV (compared with about 30 GeV for the fastest particles produced by early artificial accelerators). Cosmic-ray particles trapped into orbits around the earth account for the Van Allen radiation belts, discovered by an artificial satellite in 1958.

When a very energetic primary proton smashes into the atmosphere and collides with the nitrogen and oxygen nuclei present, it produces large numbers of secondary particles that spread toward the earth as a cosmic-ray shower. The origin of the cosmic-ray protons is not yet fully understood; some undoubtedly come from the sun and other stars. For all but the slowest rays, however, no confirmed mechanism accounts for their high energies; the likelihood is that weak galactic magnetic fields act over very long periods to accelerate interstellar protons.

Theoretical physicist C. Llewellyn Smith discusses the discoveries that scientists have made to date about the electron and other elementary particles—subatomic particles that scientists believe cannot be split into smaller units of matter. Scientists have discovered what Smith refers to as sibling and cousin particles to the electron, but much about the nature of these particles is still a mystery. One way scientists learn about these particles is to accelerate them to high energies, smash them together, and then study what happens when they collide. By observing the behaviour of these particles, scientists hope to learn more about the fundamental structures of the universe.

To the electron, proton, neutron, and photon have been added a number of other fundamental particles. In 1932 the American physicist Carl David Anderson discovered the antielectron, or positron, predicted in 1928 by Dirac. Anderson found that the stopping of an energetic cosmic gamma ray near a heavy nucleus could yield an electron-positron pair, created out of pure energy. When a positron subsequently meets an electron, the two annihilate each other in a burst of photons.

By 1977, physicists had confirmed the existence of five of the six elementary particles known as quarks. Scientific American describes the first detection, in 1995, of the sixth type of quark, which physicists had been seeking for 20 years.

The masses of elementary particles are usually given in units of MeV (million electron volts). One MeV mass equivalent is equal to 1.8 × 10⁻²⁷ g. The mean lives of the unstable particles are given in seconds.

In 1935 the Japanese physicist Yukawa Hideki developed a theory explaining how a nucleus is held together, despite the mutual repulsion of its protons, by postulating the existence of a particle intermediate in mass between the electron and the proton. In 1936 Anderson and his coworkers discovered a new particle of 207 electron masses in secondary cosmic radiation; now called the mu-meson or muon, it was first thought to be Yukawa's nuclear ‘glue.’ Subsequent experiments by the British physicist Cecil Frank Powell and others led to the discovery of a somewhat heavier particle of 270 electron masses, the pi-meson or pion (also obtained from secondary cosmic radiation), which was eventually identified as the missing link in Yukawa's theory.

Many additional particles have since been found in secondary cosmic radiation and through the use of large accelerators. They include numerous massive particles classed as hadrons (particles that take part in the ‘strong’ interaction, which binds atomic nuclei together), among them hyperons and various heavy mesons with masses ranging from about one to three proton masses, as well as the intermediate vector bosons, such as the W and Z⁰ particles, the carriers of the ‘weak’ nuclear force. They may be electrically neutral, positive, or negative, but never carry more than one elementary electric charge e. Lasting from about 10⁻⁸ to 10⁻¹⁴ second, they decay into a variety of lighter particles. Each particle has its antiparticle and carries some angular momentum. They all obey certain conservation laws involving quantum numbers, such as baryon number, strangeness, and isotopic spin.

In 1931 Pauli, in order to explain the apparent failure of some conservation laws in certain radioactive processes, postulated the existence of electrically neutral particles of zero rest mass that nevertheless could carry energy and momentum. This idea was further developed by the Italian-born American physicist Enrico Fermi, who named the missing particle the neutrino. Uncharged and tiny, the neutrino is elusive, able to penetrate the entire earth with only a small likelihood of capture. Nevertheless, it was eventually detected, in 1956, in a difficult experiment performed by the American physicists Frederick Reines and Clyde Lorrain Cowan, Jr. Understanding of the internal structure of protons and neutrons has also been derived from the experiments of the American physicist Robert Hofstadter, using fast electrons from linear accelerators.

Japanese physicist Yukawa Hideki won the 1949 Nobel Prize in physics. Based on his research into quantum mechanics and the fields of force affecting elementary particles, he theoretically deduced the existence of mesons, a family of subatomic particles composed of quarks and antiquarks and having intermediate mass.

In the late 1940s a number of experiments with cosmic rays revealed new types of particles, the existence of which had not been anticipated. They were called strange particles, and their properties were studied intensively in the 1950s. Then, in the 1960s, many new particles were found in experiments with the large accelerators. The electron, proton, neutron, photon, and all the particles discovered since 1932 are collectively called elementary particles. But the term is actually a misnomer, for most of the particles, such as the proton, have been found to have very complicated internal structure.

Elementary particle physics is concerned with (1) the internal structure of these building blocks and (2) how they interact with one another to form nuclei. The physical principles that explain how atoms and molecules are built from nuclei and electrons are already known. At present, vigorous research is being conducted on both fronts in order to learn the physical principles upon which all matter is built.

One popular theory about the internal structure of elementary particles is that they are made of so-called quarks, subparticles carrying fractional electric charge; a proton, for example, is made up of three quarks. The theory was first proposed in 1964 by the American physicists Murray Gell-Mann and George Zweig. It explains a number of phenomena, and physicists have collected a great deal of evidence for quarks bound in combination with one another. No individual quark has been observed, however, and current theory suggests that quarks may never be released as separate entities except under such extreme conditions as those found at the very creation of the universe. The theory originally postulated three kinds of quarks, but later experiments, beginning with the discovery of the J/psi particle in 1974 by the American physicists Samuel C. C. Ting and Burton Richter, called for the introduction of three additional kinds.

Physicists have sought for decades to demonstrate that the forces governing the behaviour of elementary particles are different aspects of a single fundamental force. A major step toward unification was taken in the 1960s and 1970s, when the electromagnetic force was united with the lesser-known weak force, the force responsible for slow nuclear processes such as beta decay; the two are now referred to collectively as the electroweak interaction. The American physicist Steven Weinberg and the Pakistani physicist Abdus Salam independently proposed similar unified theories of these two interactions in 1967 and 1968, and in 1979 the two (together with the American physicist Sheldon Lee Glashow) shared the Nobel Prize in physics for the contribution. Weinberg described the electroweak theory, including the W and Z particles that carry the new force, in a 1974 Scientific American article. The interaction between elementary particles, and, if quarks exist, between the quarks, is a more difficult area of research. The most successful theories thus far are called gauge theories, in which the interaction between two kinds of particles is characterized by symmetry. The symmetry between neutrons and protons, for example, is such that if the identities of the particles are interchanged, nothing changes as far as the ‘strong’ force is concerned. The first of the gauge theories applied to the electric and magnetic interactions between charged particles; here the symmetry consists in the fact that certain changes in the combination of electric and magnetic potentials have no effect on the results. The Weinberg-Salam model was itself a powerful gauge theory, since verified: it linked the intermediate vector bosons with the photon, thus uniting the electromagnetic and weak interactions, although at first only for leptons. Later work by others (Glashow, John Iliopoulos, and Luciano Maiani) showed how the model could be applied to hadrons, the strongly interacting particles, as well.

Gauge theory, in principle, can be applied to any force field, holding out the possibility that all the interactions, or forces, can be brought together into a single unified field theory. Such efforts inevitably involve the concept of symmetry. Generalized symmetries extend to particle interchanges that vary from point to point in space and time. The difficulty for physicists is that such symmetries, while mathematically elegant, do not extend scientific understanding of the underlying nature of matter. For this reason, many physicists are exploring the possibilities of so-called supersymmetry theories, which would directly relate fermions and bosons to one another by postulating further particle ‘twins’ to those now known, differing only in spin. Doubts have been expressed about such efforts, but another approach known as ‘superstring’ theory is attracting a good deal of interest. In such theories, fundamental particles are considered not as dimensionless objects but as ‘strings’ that extend one-dimensionally to lengths of no more than 10⁻³⁵ metres. Such theories solve a number of problems for the physicists who are working on unified field theories, but they are still only highly theoretical constructs.

In 1993 scientists at the Tokamak Fusion Test Reactor, at Princeton University’s plasma physics laboratory in New Jersey, produced a controlled fusion reaction, during which the temperature in the reactor surpassed three times that of the core of the sun. In a tokamak reactor, massive magnets confine hydrogen plasma under extremely high temperatures and pressures, forcing the hydrogen nuclei to fuse. When atomic nuclei are forced together in nuclear fusion, the reaction releases an extraordinary amount of energy.

In 1931 the American physicist Harold Clayton Urey discovered the hydrogen isotope deuterium and made heavy water from it. The deuterium nucleus, or deuteron (one proton plus one neutron), makes an excellent bombarding particle for inducing nuclear reactions. The French physicists Irène and Frédéric Joliot-Curie produced the first artificially radioactive nucleus in 1933 and 1934, leading to the production of radioisotopes for use in archaeology, biology, medicine, chemistry, and other sciences.

Fermi and many collaborators attempted a series of experiments to produce elements beyond uranium by bombarding uranium with neutrons. They succeeded, and now at least a dozen such transuranium elements have been made. As their work continued, an even more important discovery was made. Irène Joliot-Curie, the German physicists Otto Hahn and Fritz Strassmann, the Austrian physicist Lise Meitner, and the British physicist Otto Robert Frisch found that some uranium nuclei broke into two parts, a phenomenon called nuclear fission. At the same time, a huge amount of energy was released by mass conversion, as well as some neutrons. These results suggested the possibility of a self-sustained chain reaction, and this was achieved by Fermi and his group in 1942, when the first nuclear reactor went into operation. Technological developments followed rapidly; the first atomic bomb was produced in 1945 as a result of a massive program under the direction of the American physicist J. Robert Oppenheimer, and the first nuclear power reactor for the production of electricity went into operation in England in 1956, yielding 78 million watts.

Further developments were based on the investigation of the energy source of the stars, which the German American physicist Hans Albrecht Bethe showed to be a series of nuclear reactions occurring at temperatures of millions of degrees. In these reactions, four hydrogen nuclei are converted into a helium nucleus, with two positrons and massive amounts of energy forming the by-products. This nuclear-fusion process was adopted in modified form, largely based on ideas developed by the Hungarian-American physicist Edward Teller, as the basis of the fusion or hydrogen bomb. First detonated in 1952, it is a weapon much more powerful than the fission bomb. A small fission bomb provides the high temperature necessary to trigger fusion of hydrogen.
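
The energy released by this conversion can be checked directly from atomic masses. A minimal sketch (using standard atomic mass values and the conversion 1 u ≈ 931.5 MeV):

```python
# Four hydrogen atoms fuse (through intermediate steps) into one helium atom;
# the lost mass appears as energy via E = m*c^2, here expressed in MeV
# through the conversion 1 u = 931.5 MeV.
m_h, m_he = 1.00783, 4.00260   # u, standard atomic masses
delta_m = 4 * m_h - m_he
print(f"mass defect: {delta_m:.5f} u = {delta_m * 931.5:.1f} MeV")  # ~26.7 MeV
```

About 0.7 percent of the hydrogen's mass is converted, millions of times more energy per kilogram than any chemical fuel yields.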


Much current research is devoted to producing a controlled, rather than an explosive, fusion device, which would be less radioactive than a fission reactor and would provide an almost limitless source of energy. In December 1993 significant progress was made toward this goal when researchers at Princeton University used the Tokamak Fusion Test Reactor to produce a controlled fusion reaction that output 5.6 million watts of power. However, the tokamak consumed more power than it produced during its operation.

In solids, the atoms are closely packed, leading to strong interactive forces and numerous interrelated effects that are not observed in gases, where the molecules largely act independently. Interaction effects lead to the mechanical, thermal, electrical, magnetic, and optical properties of solids, which is an area that remains difficult to handle theoretically, although much progress has been made.

A principal characteristic of most solids is their crystalline structure, with the atoms arranged in regular and geometrically repeating arrays. The specific arrangement of the atoms may arise from a variety of forces. Some solids, such as sodium chloride, or common salt, are held together by ionic bonds originating in the electric attraction between the ions of which the materials are composed. In others, such as diamond, atoms share electrons, giving rise to covalent bonding. Solids of inert substances, such as neon, exhibit neither of these bonds; they are held together by the so-called van der Waals forces, named after the Dutch physicist Johannes Diderik van der Waals, which arise between neutral molecules or atoms as a result of electric polarization. Metals, on the other hand, are bonded by a so-called electron gas: electrons freed from the outer atomic shells and shared by all the atoms, and this electron gas defines most properties of the metal.

The sharp, discrete energy levels permitted to the electrons in individual atoms become broadened into energy bands when the atoms are closely packed in a solid. The width and separation of these bands define many of the solid's properties: wide separation of the occupied and empty bands by a so-called forbidden band, where no electrons may exist, restricts the motion of electrons and makes the material a good electrical and thermal insulator, while overlapping energy bands, with their associated ease of electron motion, make a material a good conductor of electricity and heat. If the forbidden band is narrow, a few fast electrons may be able to jump across it, yielding a semiconductor. In this case the energy-band spacing may be greatly affected by minute amounts of impurities, such as arsenic in silicon. The lowering of a high-energy band by such an impurity produces a so-called donor of electrons, or n-type semiconductor. The raising of a low-energy band by an impurity like gallium produces an acceptor, in which the vacancies, or ‘holes’, in the electron structure act like movable positive charges; these are characteristic of p-type semiconductors. A number of modern electronic devices, notably the transistor, developed by the American physicists John Bardeen, Walter Houser Brattain, and William Bradford Shockley, are based on these semiconductor properties.
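
The extreme sensitivity of a semiconductor to the width of its forbidden band can be illustrated with the Boltzmann factor that governs thermal excitation across the gap. A rough sketch (illustrative band-gap values; exp(-Eg/2kT) is the standard intrinsic-carrier scaling):

```python
import math

# The fraction of electrons thermally excited across a gap Eg scales
# roughly as exp(-Eg / (2*k*T)): small changes in the gap mean enormous
# changes in conductivity.
K_EV = 8.617e-5   # Boltzmann constant, eV/K
T = 300           # K, room temperature
for name, gap_ev in [("conductor (no gap)", 0.0),
                     ("silicon (semiconductor)", 1.1),
                     ("diamond (insulator)", 5.5)]:
    print(f"{name}: factor = {math.exp(-gap_ev / (2 * K_EV * T)):.1e}")
```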

Magnetic properties in a solid arise from the electrons' acting like tiny magnetic dipoles. Electron spin plays a major role in magnetism, leading to spin waves that have been observed in some solids. Almost all solid properties depend on temperature. Thus, ferromagnetic materials, including iron and nickel, lose their normal strong residual magnetism above a characteristic temperature called the Curie temperature. Electrical resistance usually decreases with decreasing temperature, and for certain materials, called superconductors, it vanishes entirely near absolute zero. These and many other phenomena observed in solids depend on energy quantization and can best be described in terms of effective ‘particles’ such as phonons, polarons, and magnons.

A small cylindrical magnet floats above a high-temperature superconductor. The vapour is from boiling liquid nitrogen, which keeps the superconductor in a zero-resistance state. As the magnet is lowered toward the superconductor, it induces an electric current, which creates an opposing magnetic field in accordance with Lenz's law. Because the superconductor has no electrical resistance, this induced current continues to flow, keeping the magnet suspended indefinitely.

At very low temperatures, near absolute zero, many materials exhibit strikingly different characteristics. At the beginning of the 20th century the Dutch physicist Heike Kamerlingh Onnes developed techniques for producing such low temperatures and discovered the superconductivity of mercury: it loses all electrical resistance at about 4 K. Many other elements, alloys, and compounds do the same at their own characteristic near-zero temperatures, in effect also becoming magnetic insulators that exclude magnetic fields from their interiors. The theory of superconductivity, developed largely by the American physicists John Bardeen, Leon N. Cooper, and John Robert Schrieffer, is extremely complicated, involving the pairing of electrons in the crystal lattice.

Dutch physicist Heike Kamerlingh Onnes was sometimes called the ‘gentleman of absolute zero’ for his pioneering work in cryogenics, the study of materials at extremely low temperatures. Onnes began his low-temperature work because of his interest in the behaviour of gases. He went on to become the first person to liquefy helium and the first to discover that some metals, when sufficiently cooled, become superconductors—that is, materials that have no resistance to the flow of an electrical current. Scientific American presents an overview of Onnes's work.

Another fascinating discovery was that helium does not freeze but changes at about 2 K from an ordinary liquid, He I, to the superfluid He II, which has no viscosity and has a thermal conductivity about 1000 times greater than silver. Films of He II can creep up the walls of their containing vessels and He II can readily permeate some materials like platinum. No fully satisfactory theory is yet available for this behaviour.

A plasma is any substance (usually a gas) whose atoms have had one or more electrons detached and have therefore become ionized. The detached electrons remain in the gas volume, however, so that in an overall sense the gas stays electrically neutral. The ionization can be effected by the introduction of large concentrations of energy, such as bombardment with fast external electrons, irradiation with laser light, or heating the gas to very high temperatures. The individual charged plasma particles respond to electric and magnetic fields and can therefore be manipulated and contained.

Plasmas are found in gas-filled light sources, such as a neon lamp, in interstellar space where residual hydrogen is ionized by radiation, and in stars whose great interior temperatures produce a high degree of ionization, a process closely connected with the nuclear fusion that supplies the energy of stars. For the hydrogen nuclei to fuse into heavier nuclei, they must be fast enough to overcome their mutual electric repulsion. This implies high temperature (millions of degrees) when the hydrogen ionizes into a plasma. In order to produce a controlled fusion, or thermonuclear reaction, it is necessary to generate and contain plasmas magnetically; this is an important but difficult problem that falls in the field of magnetohydrodynamics.

One of the many applications of laser beams involves welding pieces of metal together. Laser welders fuse metals at temperatures of over 5500° C (10,000° F). Metal machinists also use lasers to cut small holes accurately or carve fine details in metals.

An important recent development is the laser, an acronym for light amplification by stimulated emission of radiation. In lasers, which may have gases, liquids, or solids as the working substance, a large number of atoms are raised to a high energy level and caused to release this energy simultaneously, producing coherent light in which all the waves are in phase. (Similar techniques are used to produce microwave emissions in masers.) The coherence of the light allows very intense, sharply defined beams that remain narrow over tremendous distances and are far more intense than light from any other source. Continuous lasers can deliver hundreds of watts of power, and pulsed lasers can produce millions of watts for very short periods. Developed during the 1950s and 1960s, largely by the American engineer and inventor Gordon Gould and the American physicists Charles Hard Townes, T. H. Maiman, Arthur Leonard Schawlow, and Ali Javan, the laser has become an extremely powerful tool in research and technology, with applications in communications, medicine, navigation, metallurgy, fusion, and material cutting.

The construction of large and specially designed optical telescopes has led to the discovery of new stellar objects, including a number of quasars, which are billions of light-years away, and has led to a better understanding of the structure of the universe. Radio astronomy has yielded other important discoveries, such as pulsars and the cosmic background radiation, which probably dates from the origin of the universe. The evolutionary history of the stars is now well understood in terms of nuclear reactions. As a result of recent observations and theoretical calculations, the belief is now widely held that all matter was originally in one dense location and that about 14 billion years ago it exploded in one titanic event often called the big bang. The aftereffects of the explosion have led to a universe that appears to be still expanding. A puzzling aspect of this universe, recently revealed, is that the galaxies are not uniformly distributed. Instead, vast voids are bordered by galactic clusters shaped like filaments. The pattern of these voids and filaments lends itself to nonlinear mathematical analysis of the sort used in chaos theory.

Cosmology, study of the universe as a whole, including its distant past and its future. Cosmologists study the universe observationally—by looking at the universe—and theoretically—by using physical laws and theories to predict how the universe should behave. Cosmology is a branch of astronomy, but the observational and theoretical techniques used by cosmologists involve a wide range of other sciences, such as physics and chemistry. Cosmology is distinguished from cosmogony, which used to mean the study of the origin of the universe but now usually refers only to the study of the origin of the solar system.

The Hubble Deep Field project used the Hubble Space Telescope to peer deep into space—farther than had ever been possible before. In this image, the telescope focused on one tiny area of the sky for ten days. Every object visible is a far-distant galaxy containing hundreds of billions of stars, revealing the awesome magnitude of the universe.

In the 1950s cosmologists (scientists who study the evolution of the universe) were considering two theories for the origin of the universe. The first, the currently accepted big bang theory, held that the universe was created from one enormous explosion. The second, known as the steady state theory, suggested that the universe had always existed. Russian-American theoretical physicist George Gamow advanced the big bang theory and its underpinnings in a 1956 Scientific American article. Gamow’s estimate of a 5-billion-year-old universe is no longer considered accurate; the universe is now thought to be significantly older.

By 1980 most scientists believed that the big bang theory—which holds that the universe was created from one vast explosion and subsequent expansion—was the most likely explanation for the origin of the universe. However, the original big bang theory had several shortcomings. For instance, it was unable to account for the large-scale uniformity in the distribution of matter in the observed universe. In 1981 American cosmologist Alan Guth introduced a new theory known as the inflationary model, which presents a more detailed explanation of what may have occurred in the first fraction of a second of the universe’s existence. Guth’s ideas, developed from an area of study known as unified field theory, were modified and elaborated on by American theoretical physicist Paul Steinhardt and others in the early 1980s. The inflationary model has many interesting implications, including the possibility that our universe is only one of billions of universes and that our universe may have been created out of nothing—what one cosmologist jokingly called ‘the ultimate free lunch.’ Guth and Steinhardt explain inflationary theory in this 1984 Scientific American article.

Humans have been examining and wondering about the sky for many millennia. As scientific discoveries have been made, ideas about the origin of the universe have changed and are still changing.

As far back as 1100 bc, Mesopotamian astronomers drew constellations, or formations of stars perceived to form shapes. Some of today’s constellation names date back to that time. Mesopotamian and Babylonian cultures mapped the motion of the planets across the sky by observing how they moved against the background of stars.

Until the 16th century, most people (including early astronomers) considered Earth to be at the centre of the universe. Greek philosopher Aristotle proposed a cosmology in about 350 bc that held for thousands of years. Aristotle theorized that the Sun, the Moon, and the planets all revolved around Earth on a set of celestial spheres. These celestial spheres were made of the quintessence, a perfect, unchanging, transparent element. According to Aristotle, the outermost sphere held the stars, which appear to be fixed in position. Early astronomers called the stars ‘fixed stars’ to differentiate them from the planets, which they called the ‘wandering stars’ and which rode on the spheres inside the sphere of the fixed stars. The Sun and Moon occupied the two innermost spheres. Four elements (earth, air, fire, and water), less pure than the quintessence, made up everything below the innermost sphere, that of the Moon. In about 250 bc, Greek astronomer Aristarchus of Samos became the first known person to assert that Earth moved around the Sun, but Aristotle’s model of the universe prevailed for almost 1,800 years after that assertion.

Early astronomers called the planets wandering stars because they move against the background of the stars. Astronomers noted that the planets sometimes moved ahead with respect to the stars but sometimes reversed themselves, making retrograde loops. In about ad 140, Greek scientist Ptolemy explained the retrograde motion as the result of a set of small circles, called epicycles, on which the planets moved. Ptolemy hypothesized that the epicycles moved on larger circles called deferents and that the combination of these motions caused the dominant forward motion and the occasional retrograde loops.
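
Ptolemy’s construction is easy to reproduce numerically: a point on a small circle whose centre rides on a larger circle periodically appears, from the central Earth, to reverse direction. A minimal sketch (all radii and speeds are arbitrary values chosen to make the retrograde intervals visible):

```python
import math

# A planet rides a small circle (epicycle) whose centre moves along a
# large circle (deferent) centred on Earth. Tracking the planet's direction
# in Earth's sky reveals intervals of backward (retrograde) motion.
R, r = 10.0, 3.0          # deferent and epicycle radii (arbitrary units)
w_def, w_epi = 1.0, 8.0   # angular speeds (arbitrary units)
angles = []
for step in range(120):
    t = step * 0.02
    x = R * math.cos(w_def * t) + r * math.cos(w_epi * t)
    y = R * math.sin(w_def * t) + r * math.sin(w_epi * t)
    angles.append(math.atan2(y, x))   # direction of the planet seen from Earth
retro = sum(1 for a, b in zip(angles, angles[1:]) if b < a)
print(f"{retro} of {len(angles) - 1} steps show retrograde motion")
```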

Currently, most people consider it obvious that the sun is at the centre of the solar system, but the sun-centred (heliocentric) concept was slow to evolve. In the 2nd century ad, Claudius Ptolemy proposed a model of the universe with the earth at the centre (geocentric). His model depicts the earth as stationary, with the planets, moon, and sun moving around it on combinations of circles, the deferents and epicycles. Ptolemy’s system was accepted by astronomers and religious thinkers alike for several hundred years. It was not until the 16th century that Nicolaus Copernicus developed a model for the universe in which the sun was at the centre instead of the earth. The new model was rejected by the church, but it gradually gained popular acceptance because it provided better explanations for observed phenomena. Ironically, Copernicus’s initial measurements were no more accurate than Ptolemy’s; his model simply made more sense.

The ideas of Ptolemy were accepted in an age when standards of scientific accuracy and proof had not yet been developed. Even when Polish astronomer Nicolaus Copernicus developed his model of a Sun-centred universe, published in 1543, he based his ideas on philosophy instead of new observations. Copernicus’s theory was simpler and therefore more sound scientifically than the idea of an Earth-centred universe. A Sun-centred universe neatly explained why Mars appears to move backward across the sky: Because Earth is closer to the Sun, Earth moves faster than Mars. When Mars is ahead of or relatively far behind Earth, Mars appears to move across Earth’s night sky in the usual west-to-east direction. As Earth overtakes Mars, Mars’s motion seems to stop, then begin an east-to-west motion that stops and reverses when Earth moves far enough away again. Copernicus’s model also explained the daily and yearly motion of the Sun and stars in Earth’s sky. Scientists were slow to accept Copernicus’s model of the universe, but followers grew in number throughout the 16th century. By the mid-17th century, most scientists in western Europe accepted the Copernican universe.

Polish astronomer Nicolaus Copernicus believed that the Earth revolved around the Sun, upsetting the long-accepted Ptolemaic view that Earth was the centre of the universe. Fearing his theory would be judged heretical by the Roman Catholic Church, Copernicus delayed its publication until shortly before his death in 1543. Later scientists were punished for similar beliefs, including the Italian astronomer Galileo Galilei, who was forced to renounce his theories in 1633. By the late 1700s, however, many great thinkers of Europe had adopted the Copernican view, among them English physicist Sir Isaac Newton. The following is a translation of Copernicus’s theory of how the universe operates.

In the 16th century, Danish astronomer Tycho Brahe made the most scientific and accurate observations of the universe to that time. Brahe discovered discrepancies between astronomical predictions and actual events, and he built a set of large instruments that enabled him to record the positions of the planets and stars with unprecedented accuracy. He moved to Prague, and after his death his observations passed to German astronomer Johannes Kepler. Kepler discovered that the planets orbit the Sun in ellipses (elongated circles), with the Sun slightly off-centre at one focus. This discovery became Kepler’s first law, and he developed two more laws relating the speeds and periods of the planets to the sizes of their orbits. The first two laws were published in 1609 and the third in 1619.
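
Kepler’s third law relates a planet’s period to the size of its orbit: with distances in astronomical units, the period in years is T = a^1.5. A minimal sketch:

```python
# Kepler's third law: T^2 = a^3 with T in years and a in astronomical units.
for planet, a_au in [("Venus", 0.723), ("Earth", 1.000), ("Mars", 1.524)]:
    print(f"{planet}: period = {a_au ** 1.5:.3f} years")
```

The computed periods (0.615 years for Venus, 1.881 for Mars) match the observed ones.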

Italian astronomer Galileo made major discoveries about celestial objects in our solar system with newly invented telescopes in the early 17th century. His discoveries helped turn cosmology into a science based on observation rather than philosophy. These telescopes are now in the Museo della Scienza in Florence, Italy.

The Italian scientist Galileo Galilei lived and worked during the same time period as Kepler. Galileo was the first astronomer to use a telescope to observe the sky and to recognize what he saw there. He saw that the Moon had craters, that Venus went through a full set of phases like the Moon, and that Jupiter had satellites, or moons, of its own. His discoveries, published in 1610, marked the scientific end of the cosmological systems of Ptolemy and Aristotle, though it took some time for his findings to be generally accepted.

Later in the 17th century, British astronomer Edmond Halley presented British physicist Isaac Newton with a query about the shape of planetary orbits. Newton responded with his three laws of motion. Newton also developed the idea of universal gravitation, realizing that the same force that makes an apple fall to Earth also keeps the Moon perpetually falling toward Earth; the Moon’s sideways motion, however, carries it continually past the planet, so that it falls around Earth in an orbit. Newton’s calculations were eventually expanded into his greatest book, Philosophiae Naturalis Principia Mathematica, published in 1687. In the Principia, Newton derived a wide range of theoretical results about planetary orbits and advanced the law of universal gravitation. Newton’s laws were the foundation of cosmological thought until the 20th century.
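
Newton’s famous consistency check can be redone in a few lines: the acceleration holding the Moon in orbit should equal the apple’s 9.8 m/s², diluted by the inverse square of the distance ratio (the Moon lies about 60 Earth radii away). A minimal sketch:

```python
import math

# Centripetal acceleration of the Moon, a = 4*pi^2*r / T^2, compared with
# the apple's g = 9.81 m/s^2 diluted by the inverse square of ~60 Earth radii.
r = 3.84e8               # m, Earth-Moon distance
T = 27.32 * 24 * 3600    # s, sidereal month
a_moon = 4 * math.pi**2 * r / T**2
a_pred = 9.81 / 60**2
print(f"measured: {a_moon:.2e} m/s^2, inverse-square prediction: {a_pred:.2e} m/s^2")
```

Both come out near 2.7 × 10⁻³ m/s², the agreement that convinced Newton the same force governs the apple and the Moon.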

Newton’s laws, however, left some questions unanswered. Beginning in the 17th century, scientists wondered why the sky was dark at night if space is indeed infinite (an idea proposed in ancient Greece and still accepted by most cosmologists today) and stars are distributed throughout that infinite space. An infinite amount of starlight should make the sky very bright at night. This cosmological question came to be called Olbers’s paradox after the German astronomer Heinrich Olbers, who wrote about the paradox in the 1820s. The paradox was not solved until the 20th century.
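
The paradox rests on simple shell-by-shell arithmetic: the number of stars in a shell of given thickness grows as r², while each star’s light falls off as 1/r², so every shell contributes equally and an infinity of shells implies an infinitely bright sky. A minimal sketch (arbitrary units):

```python
import math

# Each thin shell of stars at radius r holds ~r^2 stars, each dimmed by
# 1/r^2, so every shell adds the same brightness and the sum never converges.
density, luminosity, dr = 1.0, 1.0, 1.0   # arbitrary units
total = 0.0
for i in range(1, 6):
    r = i * dr
    shell_stars = density * 4 * math.pi * r**2 * dr
    flux_per_star = luminosity / (4 * math.pi * r**2)
    total += shell_stars * flux_per_star
    print(f"out to r = {r:.0f}: sky brightness = {total:.1f}")
```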

In the 19th century, counts of the numbers of stars appearing in different directions in the sky left astronomers with the incorrect idea that Earth and the Sun were approximately at the centre of the universe. This conclusion failed to account for the dust in our Milky Way Galaxy, which prevents astronomers from seeing very far in any direction and so made the star counts misleading.

In 1924 American astronomer Edwin Hubble showed that fuzzy patches in the sky called ‘spiral nebulas’ were in fact galaxies like our Milky Way. The orbiting telescope named after him, the Hubble Space Telescope, took this picture of a distant galaxy called M100 in 1995.

In 1917 American scientist Harlow Shapley measured the distance to several groups of stars known as globular clusters. He measured these distances by using a method developed in 1912 by American astronomer Henrietta Leavitt. Leavitt’s method relates distance to variations in brightness of Cepheid variables, a class of stars that vary periodically in brightness. Shapley’s distance measurements showed that the clusters were centred around a point far from the Sun. The arrangement of the clusters was presumed to reflect the overall shape of the galaxy, so Shapley realized that the Sun was not in the centre of the galaxy. Just as Copernicus’s observations revealed that Earth was not at the centre of the universe, Shapley’s observations revealed that the Sun was not at the centre of the galaxy. Cosmologists now realize that Earth and the Sun do not occupy any special position in the universe.

Starting in about 1913, new large telescopes and advances in photography and spectroscopy (the study of the particular colours making up a beam of light) allowed astronomers to observe and begin measuring a reddening of the light from distant galaxies. These redshifts are similar to those caused by the Doppler effect. The Doppler effect is observed when an object emitting radiation moves with respect to the observer of that radiation. If the object is moving toward the observer, each wave of radiation originates from a place that is a little closer to the observer than the previous wave’s point of origin, so the distance between successive wave peaks, called the wavelength, is shorter than usual. If the object is moving away from the observer, the wavelength is longer than usual. The wavelength change is proportional to the speed at which the object is moving relative to the observer. In visible light, a shift to longer wavelengths is a shift toward the red end of the visible spectrum, so cosmologists refer to the shifts in the colour of light coming from receding galaxies as redshifts. The faster a galaxy is moving away, the redder its light appears. By measuring the redshifts of distant galaxies, astronomers began to understand how the universe was evolving.
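
In code, the redshift bookkeeping is a one-line ratio. A minimal sketch (the observed wavelength is an assumed value, not a real measurement):

```python
# Redshift z = (observed - emitted) / emitted; for modest speeds the
# recession velocity is roughly v = z * c.
c = 3.0e5                          # km/s, speed of light
emitted, observed = 656.3, 662.9   # nm; hydrogen-alpha line, assumed shift
z = (observed - emitted) / emitted
print(f"z = {z:.4f}, receding at ~{z * c:.0f} km/s")
```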

In 1915 German-born physicist Albert Einstein, then working in Berlin, advanced a theory of gravitation known as the general theory of relativity. His theory involves a four-dimensional space-time continuum that bends in the presence of massive objects. This bending causes light and other objects moving near these massive objects to follow a curved path, just as a golfer’s ball curves on a warped putting green. In this way, Einstein explained gravity. His theory showed that Newton’s theory of gravitation was a special case, valid in conditions normal to Earth but not in very strong gravitational fields or in other extreme conditions. Einstein’s theory also made several predictions that were not part of Newton’s theory. When these predictions were verified, Einstein’s theory was accepted. Einstein’s equations were very complicated, though, and it was other scientists who eventually found widely accepted solutions to them. Most of cosmology today is based on the set of solutions found in the 1920s by Russian mathematician Alexander Friedmann. Dutch astronomer Willem de Sitter and Belgian astronomer Georges Lemaître also developed cosmological models based on solutions to Einstein’s equations.

In the early 1920s, astronomers debated whether the spiral structures seen in the sky, called spiral nebulae, were galaxies like our own Milky Way Galaxy or smaller objects within the Milky Way. Measuring the distances to these objects depended on the Leavitt-Shapley method of observing Cepheid variable stars. In 1924 American astronomer Edwin Hubble detected Cepheid variables in several spiral nebulae and showed that they lay far beyond our own galaxy. These findings established that the spiral structures were in fact separate galaxies like the Milky Way.

By 1929 Hubble had measured enough spectra of galaxies to realize that the galaxies’ light, except for that of the few nearest galaxies, was all shifted toward the red end of the visible spectrum, and that the shift increased with the galaxies’ distance. Cosmologists soon interpreted these redshifts as akin to Doppler shifts, meaning that the galaxies were moving away from Earth; the redshift, and therefore the speed of recession, was greater for more distant galaxies. Galaxies in different directions but at equivalent distances from Earth, however, had equivalent redshifts. This constant relationship between distance and speed led cosmologists to conclude that the universe is expanding uniformly. The uniform relationship between velocity of expansion and distance from Earth is known as Hubble’s law. (Strictly, the redshifts are not true Doppler shifts but result from the expansion of space itself, which carries the galaxies along with it.)
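
Hubble’s law is the linear relation v = H₀d. A minimal sketch (using the modern value H₀ ≈ 70 km/s per megaparsec, an assumption; Hubble’s own 1929 value was several times larger):

```python
# Hubble's law: v = H0 * d; 1/H0 also gives a rough age for a uniformly
# expanding universe.
H0 = 70.0                  # km/s per Mpc, assumed modern value
for d_mpc in (10, 100, 1000):
    print(f"d = {d_mpc} Mpc: v = {H0 * d_mpc:.0f} km/s")
KM_PER_MPC = 3.086e19
age_years = KM_PER_MPC / H0 / 3.156e7   # seconds per year
print(f"1/H0 ~ {age_years / 1e9:.1f} billion years")
```

The reciprocal of H₀ comes out near 14 billion years, consistent with the age of the universe quoted elsewhere in this article.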

Modern cosmologists base their theories on astronomical observations, physical concepts such as quantum mechanics, and an element of imagination and philosophy. Cosmologists have moved beyond trying to find Earth’s place in the universe to explaining the origins, nature, and fate of the universe.

British astrophysicist Sir Martin Rees, astronomer and Royal Society research professor at the Institute of Astronomy, Cambridge University, in Cambridge, England, is one of the world’s leading cosmologists (scientists who study the origin and evolution of the universe). Rees is the recipient of numerous awards and honorary degrees, including the Gold Medal of the Royal Astronomical Society, the Bruce Medal of the Astronomical Society of the Pacific, and the Cosmology Prize of the Peter Gruber Foundation. A former president of the British Association for the Advancement of Science, Sir Martin has also been at the forefront of communicating the current understanding of the universe to the wider public. He has written several books for the general reader, including Just Six Numbers (1999) and Our Cosmic Habitat (2002). World English edition editor Latha Menon interviewed Rees about his views on some of the latest developments in cosmology, including why the universe seems to have a unique affinity for life. British spellings have been retained.

The current ‘standard model’ of the origin of the universe, called the big bang theory, proposes that a major event, not unlike a huge explosion, set free all the matter and energy in the universe and started its expansion. Theories of the evolution and fate of the universe go on to describe a universe that has been expanding and cooling since the big bang. Early versions of the theory held that the universe would keep expanding forever or eventually collapse back to its initial state, an extremely dense object that contains all of the matter in the universe. When the big bang theory was developed in the mid-20th century, some cosmologists found the idea of a sudden beginning of the universe philosophically unacceptable. They proposed the steady-state theory, which said that the universe has always looked more-or-less the same as it does now and that it does not change over time. The steady-state theory could not explain the background radiation, though, and essentially all cosmologists have abandoned it.

Russian-born American physicist George Gamow developed the theory that the universe began as a hot explosion of matter and energy, or a ‘big bang.’ Gamow also made important contributions to other fields of physics, including radioactivity and nuclear physics.

The big bang theory describes a hot explosion of energy and matter at the time the universe came into existence. This theory explains why the universe is expanding. Recent versions of the theory also explain why the universe seems so uniform in all directions and at all places.

The work of Edwin Hubble, which showed that the universe is expanding, led cosmologists to begin tracking the history of the universe. The dominant idea is that the universe would have been hotter and denser billions of years ago. In the 1940s Russian American physicist George Gamow and his students, American physicists Ralph Alpher and Robert Herman, developed the idea of a hot explosion of matter and energy at the time of the origin of the universe. (This theory of an explosion at the beginning of the universe was given the originally derisive name ‘big bang’ by British astronomer Fred Hoyle in 1950.) Current calculations place the age of the universe at about 13.7 billion years. Gamow and his students realized that some of the chemical elements in the universe today were forged in the hot early stage of the universe’s existence. They also hypothesized that some radiation that remains from the big bang explosion may still be circulating in the universe, though this idea was forgotten for some time.

Current methods of particle physics allow the universe to be traced back to a tiny fraction of a second—1 × 10⁻⁴³ second—after the big bang explosion initiated the expansion of the universe. To understand the behaviour of the universe before that point cosmologists would need a theory that merges quantum mechanics and general relativity. Scientists do not actually study the big bang itself, but infer its existence from the universe’s expansion.
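
The 10⁻⁴³-second mark is essentially the Planck time, the natural timescale below which quantum effects of gravity should dominate. As a sketch (the formula is not given in this article), it is built from the fundamental constants:

    t_P = \sqrt{\hbar G / c^5} \approx 5.4 \times 10^{-44}\ \mathrm{s}

which is of the same rough order as the figure quoted above.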

In the 1950s American astronomer William Fowler and British astronomers Fred Hoyle, Geoffrey Burbidge, and Margaret Burbidge worked out a series of calculations showing how chemical elements are built up by nuclear reactions inside stars. The lightest elements (those of lowest atomic weight)—ordinary hydrogen, hydrogen’s isotope deuterium, and helium—formed instead in the early universe, shortly after the big bang. Scientists now know that the elements heavier than helium and lighter than iron were formed in nuclear processes in stars, and the heaviest elements (those heavier than iron) were formed in supernova explosions.

English astronomer and mathematician Sir Fred Hoyle predicted the existence of quasars. Hoyle also coined the term big bang as a disparaging reference to the theory that the universe originated billions of years ago from a hot explosion of matter and energy. Hoyle proposed an alternative explanation, the steady-state theory, which suggested that the density of the expanding universe remained constant because new matter was slowly being created.

In the 1940s British scientists Hermann Bondi, Thomas Gold, and Fred Hoyle were philosophically opposed to the extreme conditions in the early universe that the big bang theory required. The big bang theory was framed in terms of the cosmological principle—that the universe is homogeneous (the same in all locations) and isotropic (looks the same in all directions) on a large scale. Bondi, Gold, and Hoyle suggested an additional postulate, which they called the perfect cosmological principle. This principle stated that the universe is not only homogeneous and isotropic but also looks the same at all times. Since the universe is expanding, though, one would expect the density of the universe to decrease, a change that would violate the perfect cosmological principle. Bondi, Gold, and Hoyle therefore suggested that matter is continuously created out of nothing to maintain the density over time. The required rate of creation was much too low to be observationally testable, however. They called this theory the steady-state theory.

Quasars are distant astronomical objects, most of which emit very strong radio waves. This false-colour radio map of a quasar was taken by the Very Large Array radio telescope in New Mexico. All the quasars that astronomers have found have been very distant. Their light has taken billions of years to reach the earth, so studying quasars is like studying what the universe looked like billions of years ago. The fact that no quasars exist closer to the earth is evidence that the universe has changed over time.

To establish that the big bang theory was preferable to the steady-state theory, its supporters needed only to show that the universe changes over time. Just such a change was found in 1963 when Dutch American astronomer Maarten Schmidt identified quasars while working at the Palomar Observatory in California. As seen from Earth, quasars are bluish astronomical objects that resemble stars. Astronomers believe that quasars are the cores of certain types of galaxies. Quasars are all quite far from Earth, which means they must have formed early in the history of the universe. They are distant from Earth in both space and time. The lack of quasars near Earth (and therefore near the present in time) shows that the universe has been evolving. This finding dealt a serious blow to steady-state cosmology.

Most scientists believe that the microwave background radiation is left over from the big bang at the beginning of the universe. The Wilkinson Microwave Anisotropy Probe (WMAP) produced this image, which shows variations in cosmic microwave radiation. The coloured spots correspond to fluctuations in the density of matter and energy in the early universe, about 380,000 years after the big bang. Cosmologists believe that as the universe expanded and cooled, these fluctuations structured the formation of galaxies.

Even when all Earthly and astronomical sources of radio waves are screened out, some static remains on the most sensitive radios. This static is caused by radiation left over from the big bang, the explosion that created the universe.

In 1965 a piece of evidence was found that almost all scientists agree conclusively rules out the steady-state theory of the universe. At that time, American physicists Arno Penzias and Robert W. Wilson, working at the Bell Laboratories in New Jersey (now part of Lucent Technologies), discovered faint isotropic radio waves. American astronomers James Peebles, Peter Roll, David Wilkinson, and Robert Dicke at Princeton University had recently predicted that just such radiation would have been emitted as a result of the hot, dense early universe predicted by the big bang theory. These scientists were themselves preparing a radio telescope to search for this radiation. (Scientists only later recalled that Gamow and colleagues had earlier predicted such radiation.) This cosmic background radiation is now widely accepted as proof of the big bang theory. The existence of cosmic background radiation is the third pillar of modern cosmology. The other two pillars are (1) the uniform expansion of the universe and (2) the match between calculations of the amounts of the lightest chemical elements that would be formed in the first few minutes after a big bang and observations of these elements’ actual relative abundance in space.

According to the widely accepted theory of the big bang, the universe originated about 14 billion years ago and has been expanding ever since. Astronomers recognize four models of possible futures for the universe. According to the closed model, many billions of years from now expansion will slow, stop, and the universe will contract back in upon itself. In the flat model, the universe will not collapse upon itself, but expansion will slow and the universe will approach a stable size. According to the open model, the universe will continue expanding forever. In the accelerating expansion model, the universe will expand faster and faster until even the particles in normal matter are torn away from each other. Astronomers currently favour the accelerating expansion model.

In current cosmological models, the universe was at first both extremely hot and incredibly dense, with temperatures exceeding billions of degrees. In the first second after the big bang, as the universe expanded and cooled, elementary particles such as quarks and electrons formed.

After about one second, the universe had cooled enough that protons had formed out of the quarks. For the next 1,000 seconds—in what is now known as the era of nucleosynthesis—hydrogen, deuterium, helium, and some lithium and beryllium formed. Electrons began to combine with protons to make hydrogen atoms about 300,000 years after the big bang. The process continued until about 1 million years after the big bang, when the universe had cooled to about 3000°C (about 5000°F). Before this era, photons of light could not travel far in the universe without bouncing off electrons. The formation of hydrogen atoms, however, used up many of the free electrons and allowed light to travel quite far. The radiation that was set free at that time has cooled as the universe has expanded. Today the temperature of this background radiation is approximately 3 K (-270°C, or -454°F).
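
As a rough check of those last two figures, here is a minimal sketch in Python. It assumes the standard recombination redshift of about 1,100, a value this article does not quote; the radiation’s temperature falls in proportion to the universe’s expansion.

    # Background radiation cooling since hydrogen atoms formed.
    # Assumes z ~ 1100 for recombination (not quoted in this article);
    # temperature scales as T_now = T_then / (1 + z).
    T_then = 3000 + 273          # about 3000 degrees C, in kelvins
    z = 1100                     # assumed redshift of recombination
    T_now = T_then / (1 + z)
    print(f"predicted background temperature ~ {T_now:.1f} K")  # ~3 K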

The Cosmic Background Explorer (COBE) spacecraft accurately measured the spectrum of the background radiation from 1989 to 1993. COBE measured radiation from the sky, then subtracted known sources of radiation from its measurements to reveal the background radiation. The measured background radiation fits the radiation predicted by the big bang theory so accurately that scientists consider it conclusive evidence that the big bang theory is the correct explanation for the beginning of the universe.

One of the experiments on the COBE spacecraft found small irregularities, or ripples, in the background radiation that are thought to trace the first clumps of matter in the early universe, the seeds from which galaxies and clusters of galaxies developed. These ripples were studied in more detail in limited regions of the sky by a variety of ground-based and balloon-based experiments. A more recent spacecraft, NASA's Wilkinson Microwave Anisotropy Probe (WMAP), was designed to observe these ripples across the entire sky, as COBE had, but with much greater accuracy. In 2003 WMAP’s results confirmed and extended those of the intermediate experiments, providing a full-sky map of the ripples.

In the 1980s American scientists Alan Guth and Paul Steinhardt and Soviet American cosmologist Andrei Linde advanced an important cosmological theory called the inflationary theory. This theory deals with the behaviour of the universe for only a tiny fraction of a second at the beginning of the universe. Theorists believe that the events of that fraction of a second, however, determined how the universe came to be the way it is now and how it will change in the future.

The inflationary theory states that, starting only about 1 × 10⁻³⁵ second after the big bang and lasting for only about 1 × 10⁻³² second, the universe expanded to 1 × 10⁵⁰ times its previous size. The numbers 1 × 10⁻³⁵ and 1 × 10⁻³² are very small—a decimal point followed by 34 zeros and then a 1, and a decimal point followed by 31 zeros and then a 1, respectively. The number 1 × 10⁵⁰ is incredibly large—a 1 followed by 50 zeros. This extremely rapid inflation would explain why the universe appears so homogeneous: In its earliest moments, the universe had been compact enough to become uniform, and the expansion was rapid enough to preserve that uniformity over the portion of the universe observable to us.
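
To put those numbers together (a toy calculation that assumes simple exponential growth, an assumption not stated in the text), a factor of 10⁵⁰ corresponds to about 115 ‘e-folds’ of expansion, with the universe doubling in size roughly every 6 × 10⁻³⁵ second:

    import math

    # Toy calculation of the inflation numbers quoted above, assuming
    # simple exponential growth during the inflationary interval.
    factor = 1e50                      # total expansion factor
    duration = 1e-32 - 1e-35           # duration of inflation in seconds
    e_folds = math.log(factor)         # natural-log growth, ~115
    doubling_time = duration * math.log(2) / e_folds
    print(f"{e_folds:.0f} e-folds; size doubles every {doubling_time:.1e} s")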

A fundamental issue addressed in cosmology is the future of the universe—whether the universe will expand forever or eventually collapse. The first case (eternal expansion) is known as an open universe, and the second case (eventual collapse) is known as a closed universe. A closed universe would require a density high enough for gravity to eventually stop the universe’s expansion and begin its contraction. Such a collapse would require a deviation from Hubble’s law, so observational cosmologists try to measure the distances between very distant galaxies and Earth using methods other than redshifts. The scientists can then compare these distance measurements with the galaxies’ redshifts to see whether Hubble’s law holds. In the late 1990s astronomers measured the distances of supernovas in distant galaxies and compared them with the supernovas’ redshifts. Surprisingly, distant supernovas were slightly fainter than had been expected. This result was tentatively interpreted as an acceleration of the expansion of the universe. Astronomers were so surprised by the suggestion that the universe might be accelerating its expansion that they attempted to find other explanations for the relative dimness of distant supernovas, such as absorption by dust. By a few years into the 21st century, however, these other conceivable explanations had been ruled out, and the accelerating universe concept became widely accepted. The search continues for ever more distant supernovas.

One way to understand the concept of an expanding universe is to draw dots, representing galaxies, on a balloon. As the balloon is inflated, each dot moves away from all the others. To a person viewing the universe from a galaxy, all other galaxies would seem to be receding. The distant galaxies appear to be moving away faster than the near ones, which demonstrates Hubble’s law. Most astronomers now believe that this expansion will continue forever.

Cosmologists use telescopes, astronomical satellites, and other instruments to study the universe. The data that these instruments provide allow scientists to evaluate current theories and to come up with ideas to better explain the universe. Modern cosmologists are continuously calculating the age, density, and rate of expansion of the universe.

Will the universe expand forever or eventually stop expanding and collapse in on itself? Jay M. Pasachoff, professor of astronomy at Williams College in Williamstown, Massachusetts, confronts this question in this discussion of cosmology. Whether the universe will go on expanding forever depends on whether its density exceeds the critical density needed to halt or reverse the expansion, and the answer to that question may, in turn, depend on the existence of something the German-born American physicist Albert Einstein once labelled the cosmological constant.

The universe’s density, expansion rate, and age are all related. The density of the universe’s matter determines how much gravitational force will slow the expansion rate. The rate of expansion depends on the age and density of the universe. If cosmologists measure the rate of expansion by examining galactic redshifts and estimate the density of the universe, they can calculate an estimate of the universe’s age. Cosmologists calculate the expansion rate of the universe by finding the relationship between the distance of an object from Earth and the rate at which it is moving away from Earth. This relationship is represented by Hubble’s constant (H) in the formula v = H × d, where v is the velocity at which the object recedes and d is the distance between the object and Earth. If Hubble’s constant is relatively large, the universe is expanding relatively rapidly, and its distance scale is correspondingly larger than that of a universe of the same age with a smaller value of Hubble’s constant.
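
A minimal numerical sketch of this formula in Python, assuming the Hubble constant of 72 kilometers per second per megaparsec that the Hubble Key Project reports later in this article:

    # Hubble's law: v = H * d.
    H = 72.0                           # km/s per megaparsec (assumed value)

    def recession_velocity(distance_mpc):
        """Speed (km/s) at which a galaxy distance_mpc megaparsecs away recedes."""
        return H * distance_mpc

    for d in (10, 100, 1000):          # distances in megaparsecs
        print(f"{d:5d} Mpc -> {recession_velocity(d):8.0f} km/s")

Doubling the distance doubles the recession speed, which is the uniform expansion described above.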

In 1998 two teams of astronomers studying distant supernovae (exploding stars) independently reported finding evidence that the expansion of the universe may actually be accelerating over time, suggesting the existence of a mysterious new force that counteracts gravity. Astronomers from one of these teams described their findings and the reasoning behind their conclusions in this January 1999 Scientific American article.

For a universe with very low density, the age of the universe would be directly related to its expansion rate. This universe would expand forever; this eternal expansion defines an open universe. If, on the other hand, the density of a universe is sufficiently high, the expansion rate is changing—slowing down as the universe ages. This universe would eventually stop expanding and begin contracting, which defines it as a closed universe. Astronomers and cosmologists have been able to estimate the density of the universe, but until the Wilkinson Microwave Anisotropy Probe (WMAP) results were released the density estimates covered a wide range of values. Some estimates of density fell in the range for an open universe, others in the range for a closed universe, and still others near the boundary between the two. Age calculations for the higher densities are about two-thirds of those for the lower densities.

Estimates of the age, density, and expansion rate of the universe include many possible sources of uncertainty. For example, many galaxies orbit each other as members of clusters of galaxies. The velocity of any one galaxy in the cluster as seen from Earth varies over time as it circles the cluster, moving toward Earth through part of its orbit and away through the remainder. Cosmologists, therefore, must find the average expansion velocity of the entire cluster. Recent studies drawing on data collected by the Hubble Key Project, the Hipparcos satellite, and WMAP have helped reduce the uncertainty of estimates for age, density, and expansion rate.

Several groups of astronomers conducted observational projects to determine Hubble's constant, the most important cosmological parameter, during the late 1990s. Notably, the Hubble Key Project, carried out by American astronomers Wendy Freedman, Robert Kennicutt, and Barry Madore, used the Hubble Space Telescope to observe Cepheid variable stars in distant galaxies, following the Leavitt-Shapley method. The Hubble Space Telescope can distinguish and follow such stars in galaxies much farther away from Earth than ground-based telescopes can. Their final value was a Hubble constant of 72 kilometers per second per megaparsec (45 miles per second per megaparsec). A parsec is about 3.26 light-years (a light-year is the distance that light travels in a year—9.5 × 10¹² km, or 5.9 × 10¹² mi). The units of the Hubble constant mean that for each megaparsec (million parsecs) of distance between two objects, the space between them expands by 72 kilometers every second. Their result was accurate to within about 10 percent. It corresponds to an age of the universe of 12 billion to 14 billion years, depending on the rate of deceleration.
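
The quoted age range can be recovered from 1/H, the so-called Hubble time. This sketch assumes the expansion rate never changed, which is why the article’s figures also depend on the rate of deceleration:

    # Hubble time: 1/H approximates the universe's age if the expansion
    # rate has been constant since the big bang.
    KM_PER_MPC = 3.086e19              # kilometres in one megaparsec
    SECONDS_PER_YEAR = 3.156e7

    H = 72.0 / KM_PER_MPC              # 72 km/s/Mpc converted to 1/s
    hubble_time_years = 1.0 / H / SECONDS_PER_YEAR
    print(f"Hubble time ~ {hubble_time_years / 1e9:.1f} billion years")  # ~13.6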

The European Space Agency’s (ESA) Hipparcos satellite made accurate measurements of the distance between Earth and 100,000 different stars, and moderately accurate measurements of the distance between Earth and 1 million other stars, from 1989 to 1993. The ESA released the data to the scientific community in 1997, and the measurements soon began affecting cosmological theories. For example, the measurements changed the accepted distances to some globular clusters (clusters of stars outside the main disk of the Milky Way Galaxy) and led to revisions of calculations of the ages of these clusters. Before the Hipparcos data, some of these clusters appeared to be older than the age of the universe calculated from Hubble’s constant, but the revised distance measurements give the clusters ages within cosmologists’ estimates of the age of the universe.

In 2003 astronomers released results from the Wilkinson Microwave Anisotropy Probe (WMAP) that thoroughly confirmed existing ideas of cosmology and also produced several revelations about the nature of the universe. The probe studied the distribution of the ripples in the cosmic background radiation. A major conclusion from WMAP data linked with other observations is that the universe follows Euclidean geometry—that is, given any line in the universe, one and only one parallel line may be drawn through any point not on the original line. Such a universe is known as ‘flat,’ although it extends infinitely in all directions. If the universe is flat, it must be at the critical density that marks the boundary between an open and closed universe.
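
The article does not state the critical density itself. As a sketch, it follows from general relativity as ρ_c = 3H²/(8πG); using the WMAP value of the Hubble constant given below, it works out to roughly the mass of a few hydrogen atoms per cubic metre:

    import math

    # Critical density of a flat universe: rho_c = 3 H^2 / (8 pi G).
    G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2
    H = 71.0 * 1000 / 3.086e22         # 71 km/s/Mpc in 1/s (3.086e22 m per Mpc)

    rho_c = 3 * H**2 / (8 * math.pi * G)
    print(f"critical density ~ {rho_c:.1e} kg/m^3")   # ~9.5e-27 kg/m^3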

WMAP results also confirmed that the density of baryons—the particles that make up ordinary matter—accounts for only 4 percent of the critical density. The probe further showed that another 23 percent of the universe consists of dark matter, a mysterious substance that does not shine in any part of the spectrum. The gravity of dark matter, however, is detectable. It binds clusters of galaxies together and causes the outer portions of galaxies to rotate faster than they would otherwise. Astronomers do not know the composition of dark matter, but they can theorize what it might be like. A slowly moving, cold dark matter, for example, could consist of not-yet-discovered particles that have names such as axions and weakly interacting massive particles (WIMPs). A rapidly moving, hot dark matter could be made up of particles called neutrinos, but measurements of neutrino mass indicate that they are too lightweight to account for much of the dark matter.

Since normal matter and dark matter account for only 27 percent of the material necessary for the universe to be at the critical density, the remaining 73 percent of the universe must be composed of a still more mysterious substance that astronomers have named ‘dark energy.’ The composition of dark energy is not known, but its effect on the universe is detectable. Dark energy exerts a negative pressure that acts as antigravity, accelerating the universe's expansion. The effect of dark energy was smaller in the past, allowing gravity to slow the universe's expansion, but on the largest scale the repulsive force of dark energy now overwhelms the attractive force of gravity.

WMAP results also showed that the universe is 13.7 billion years old, with an uncertainty of only 0.2 billion years, and that the cosmic background radiation was set free 389,000 years after the big bang, a value uncertain by only 8,000 years. WMAP estimated a value for the Hubble constant of 71 kilometers per second per megaparsec (44 miles per second per megaparsec), in near agreement with the value measured by the Hubble Key Project. WMAP’s wide-ranging results will be refined as the spacecraft makes additional observations. Observations made by the European Space Agency's Planck spacecraft, scheduled for launch in 2007, will be even more precise.

Big Bang Theory, currently accepted explanation of the beginning of the universe. The big bang theory proposes that the universe was once extremely compact, dense, and hot. Some original event, a cosmic explosion called the big bang, occurred about 13.7 billion years ago, and the universe has since been expanding and cooling.

The theory is based on the mathematical equations, known as the field equations, of the general theory of relativity set forth in 1915 by Albert Einstein. In 1922 Russian physicist Alexander Friedmann provided a set of solutions to the field equations. These solutions have served as the framework for much of the current theoretical work on the big bang theory. American astronomer Edwin Hubble provided some of the greatest supporting evidence for the theory with his 1929 discovery that the light of distant galaxies was universally shifted toward the red end of the spectrum. Once ‘tired light’ theories—that light slowly loses energy naturally, becoming more red over time—were dismissed, this shift proved that the galaxies were moving away from each other. Hubble found that galaxies farther away were moving away proportionally faster, showing that the universe is expanding uniformly. However, the universe’s initial state was still unknown.

In the 1950s cosmologists (scientists who study the evolution of the universe) were considering two theories for the origin of the universe. The first, the currently accepted big bang theory, held that the universe was created from one enormous explosion. The second, known as the steady state theory, suggested that the universe had always existed. Russian-American theoretical physicist George Gamow advanced the big bang theory and its underpinnings in a 1956 Scientific American article. Gamow’s estimate of a 5-billion-year-old universe is no longer considered accurate; the universe is now thought to be significantly older.

In the 1940s Russian-American physicist George Gamow worked out a theory that fit with Friedmann’s solutions in which the universe expanded from a hot, dense state. In 1950 British astronomer Fred Hoyle, in support of his own opposing steady-state theory, referred to Gamow’s theory as a mere ‘big bang,’ but the name stuck. Indeed, a contest in the 1990s by Sky & Telescope magazine to find a better (perhaps more dignified) name did not produce one.

Many astronomers believe that as much as 90 percent of the matter in the universe is dark matter (matter that does not emit light). Some scientists think dark matter may be exotic particles that do not consist of the atoms making up ordinary matter as we know it. Although dark matter currently cannot be observed directly, scientists have evidence of its existence from observations of its gravitational influence on visible bodies, such as the vast collections of stars known as galaxies. In this article from Scientific American Presents, astronomer Vera Rubin explores contemporary views on the nature of dark matter.

The overall framework of the big bang theory came out of solutions to Einstein’s general relativity field equations and remains unchanged, but various details of the theory are still being modified today. Einstein himself initially believed that the universe was static. When his equations seemed to imply that the universe was expanding or contracting, Einstein added a constant term to cancel out the expansion or contraction of the universe. When the expansion of the universe was later discovered, Einstein stated that introducing this ‘cosmological constant’ had been a mistake.

After Einstein’s work of 1917, several scientists, including the abbé Georges Lemaître in Belgium, Willem de Sitter in Holland, and Alexander Friedmann in Russia, succeeded in finding solutions to Einstein’s field equations. The universes described by the different solutions varied. De Sitter’s model had no matter in it. This model is actually not a bad approximation since the average density of the universe is extremely low. Lemaître’s universe expanded from a ‘primeval atom.’ Friedmann’s universe also expanded from a very dense clump of matter, but did not involve the cosmological constant. These models explained how the universe behaved shortly after its creation, but there was still no satisfactory explanation for the beginning of the universe.

In the 1940s George Gamow was joined by his students Ralph Alpher and Robert Herman in working out details of Friedmann’s solutions to Einstein’s theory. They expanded on Gamow’s idea that the universe expanded from a primordial state of matter called ylem, consisting of protons, neutrons, and electrons in a sea of radiation. They theorized the universe was very hot at the time of the big bang (the point at which the universe explosively expanded from its primordial state), since elements heavier than hydrogen can be formed only at a high temperature. Alpher and Herman predicted that radiation from the big bang should still exist. Cosmic background radiation roughly corresponding to the temperature predicted by Gamow’s team was detected in the 1960s, further supporting the big bang theory, though by then the work of Alpher, Herman, and Gamow had been largely forgotten.

By 1980 most scientists believed that the big bang theory—which holds that the universe was created from one vast explosion and subsequent expansion—was the most likely explanation for the origin of the universe. However, the original big bang theory had several shortcomings. For instance, it was unable to account for the large-scale uniformity in the distribution of matter in the observed universe. In 1981 American cosmologist Alan Guth introduced a new theory known as the inflationary model, which presents a more detailed explanation of what may have occurred in the first fraction of a second of the universe’s existence. Guth’s ideas, developed from an area of study known as unified field theory, were modified and elaborated on by American theoretical physicist Paul Steinhardt and others in the early 1980s. The inflationary model has many interesting implications, including the possibility that our universe is only one of billions of universes and that our universe may have been created out of nothing—what one cosmologist jokingly called ‘the ultimate free lunch.’ Guth and Steinhardt explain inflationary theory in this 1984 Scientific American article.

The big bang theory seeks to explain what happened at or soon after the beginning of the universe. Scientists can now model the universe back to 10⁻⁴³ seconds after the big bang. For the time before that moment, the classical theory of gravity is no longer adequate. Scientists are searching for a theory that merges gravity (as explained by Einstein's general theory of relativity) and quantum mechanics but have not found one yet. Many scientists hope that string theory, also known as M-theory, will tie together gravity and quantum mechanics and help scientists explore further back in time.

Because scientists cannot look back in time beyond that early epoch, the actual big bang is hidden from them. There is no way at present to detect the origin of the universe. Further, the big bang theory does not explain what existed before the big bang. It may be that time itself began at the big bang, so that it makes no sense to discuss what happened ‘before’ the big bang.

According to the big bang theory, the universe expanded rapidly in its first microseconds. A single unified force existed at the beginning of the universe, and as the universe expanded and cooled, this force separated into those we know today: gravity, electromagnetism, the strong nuclear force, and the weak nuclear force. The electroweak theory now provides a unified explanation of electromagnetism and the weak nuclear force. Physicists are now searching for a grand unification theory that would also incorporate the strong nuclear force. String theory seeks to incorporate the force of gravity with the other three forces, providing a theory of everything (TOE).

One widely accepted version of big bang theory includes the idea of inflation. In this model, the universe expanded much more rapidly at first, to about 10⁵⁰ times its original size in the first 10⁻³² second, then slowed its expansion. The theory was advanced in the 1980s by American cosmologist Alan Guth and elaborated upon by American theoretical physicist Paul Steinhardt, Russian American scientist Andrei Linde, and British astronomer Andreas Albrecht. The inflationary universe theory solves a number of problems of cosmology. For example, it explains why the universe now appears close to the type of flat space described by the laws of Euclid’s geometry: We see only a tiny region of the original universe, similar to the way we do not notice the curvature of the earth because we see only a small part of it. The inflationary universe also explains why the universe appears so homogeneous. If the universe we observe was inflated from some small, original region, it is not surprising that it appears uniform.

Once the expansion of the initial inflationary era ended, the universe continued to expand more slowly. The inflationary model predicts that the universe is on the boundary between being open and closed. If the universe is open, it will keep expanding forever. If the universe is closed, the expansion of the universe will eventually stop and the universe will begin contracting until it collapses. Whether the universe is open or closed depends on the density, or concentration of mass, in the universe. If the universe is dense enough, it is closed.

The universe cooled as it expanded. After about one second, protons formed. In the following few minutes—often referred to as the ‘first three minutes’—combinations of protons and neutrons formed deuterium, an isotope of hydrogen, and other light elements, principally helium, along with small amounts of lithium, beryllium, and boron. The study of the distribution of deuterium, helium, and the other light elements is now a major field of research. The uniformity of the helium abundance around the universe supports the big bang theory, and the abundance of deuterium can be used to estimate the density of matter in the universe.

From about 380,000 to about 1 million years after the big bang, the universe cooled to about 3000°C (about 5000°F) and protons and electrons combined to make hydrogen atoms. Hydrogen atoms can only absorb and emit specific colours, or wavelengths, of light. The formation of atoms allowed many other wavelengths of light, wavelengths that had previously been scattered by the free electrons, to travel much farther than before. This change set free radiation that we can detect today. After billions of years of cooling, this cosmic background radiation is at about 3 K (-270°C/-454°F). The cosmic background radiation was first detected and identified in 1965 by American astrophysicists Arno Penzias and Robert Wilson.

The Cosmic Background Explorer (COBE) spacecraft, a project of the National Aeronautics and Space Administration (NASA), mapped the cosmic background radiation between 1989 and 1993. It verified that the spectrum of the background radiation precisely matched that of matter that emits radiation because of its temperature, as predicted by the big bang theory. It also showed that the cosmic background radiation is not uniform but varies slightly. These variations are thought to be the seeds from which galaxies and other structures in the universe grew.

Evidence indicates that the matter that scientists detect in the universe is only a small fraction of all the matter that exists. For example, observations of the speeds at which individual galaxies move within clusters of galaxies show that a great deal of unseen matter must exist to exert sufficient gravitational force to keep the clusters from flying apart. Cosmologists now think that much of the universe is dark matter—matter that has gravity but does not give off radiation that we can see or otherwise detect. One kind of dark matter theorized by scientists is cold dark matter, with slowly moving (cold) massive particles. No such particles have yet been detected, though astronomers have made up fanciful names for them, such as Weakly Interacting Massive Particles (WIMPs). Other cold dark matter could be non-radiating stars or planets, which are known as MACHOs (Massive Compact Halo Objects).

An alternative dark-matter model involves hot dark matter, where ‘hot’ means that the particles are moving very fast. Neutrinos, fundamental particles that travel at nearly the speed of light, are the prime example of hot dark matter. However, scientists think that the mass of a neutrino is so low that neutrinos can account for only a small portion of dark matter. If the inflationary version of big bang theory is correct, then the amount of dark matter, together with whatever else might exist, is just enough to bring the universe to the boundary between open and closed.

Scientists develop theoretical models to show how the universe’s structures, such as clusters of galaxies, have formed. Their models invoke hot dark matter, cold dark matter, or a mixture of the two. This unseen matter would have provided the gravitational force needed to bring large structures such as clusters of galaxies together. The theories that include dark matter match the observations, although there is no consensus on the type or types of dark matter that must be included. Supercomputers are important for making such models.

Astronomers continue to make new observations that are also interpreted within the framework of the big bang theory. No major problems with the big bang theory have been found, but scientists constantly adjust the theory to match the observed universe. In particular, a ‘standard model’ of the big bang has been established by results from NASA's Wilkinson Microwave Anisotropy Probe (WMAP), launched in 2001. The probe studied the anisotropies, or ripples, in the temperature of cosmic background radiation at a higher resolution than COBE was capable of. These ripples indicate that regions of the young universe were very slightly hotter or cooler, by about 1 part in 100,000, than adjacent regions. WMAP’s observations suggest that the rate of expansion of the universe, called Hubble’s constant, is about 71 km/s/Mpc (kilometers per second per million parsecs, where a parsec is about 3.26 light-years). In other words, the distance between any two objects in space that are separated by a million parsecs increases by about 71 km every second in addition to any other motion they may have relative to one another. In combination with previously existing observations, this rate of expansion tells cosmologists that the universe is ‘flat,’ though flatness here does not refer to the actual shape of the universe but rather means that the geometric laws that apply to the universe match those of a flat plane.

To be flat, the universe must contain a certain amount of matter and energy, known as the critical density. The distribution of sizes of ripples detected by WMAP shows that ordinary matter—like that making up objects and living things on Earth—accounts for only 4.4 percent of the critical density. Dark matter makes up an additional 23 percent. Astoundingly, the remaining 73 percent of the universe is composed of something else—a substance so mysterious that nobody knows much about it. Called ‘dark energy,’ this substance provides the antigravity-like negative pressure that causes the universe's expansion to accelerate rather than slow down. This ‘accelerating universe’ was detected independently by two competing groups of astronomers in the last years of the 20th century. The ideas of an accelerating universe and the existence of dark energy have caused astronomers to substantially modify previous ideas of the big bang universe.

WMAP's results also show that cosmic background radiation was set free about 389,000 years after the big bang, later than was previously thought, and that the first stars formed about 200 million years after the big bang, earlier than anticipated. Further refinements to the big bang theory are expected from WMAP, which continues to collect data. An even more precise mission to study the beginnings of the universe, the European Space Agency’s Planck spacecraft, is scheduled to be launched in 2007.

Inflationary Theory, in cosmology, theory that proposes that the universe expanded very rapidly for a fraction of the first second after the big bang, the explosion at the beginning of the universe. The idea that the universe is expanding has long been one of the pillars of modern cosmology. The present rate of expansion seems to be fairly uniform, but inflationary theory proposes that the rate of expansion was much faster at the beginning of the universe.

The inflationary theory was developed in the 1970s to solve several mysteries still remaining in the universe as it was described by the big bang theory. In particular, it explains why the universe is expanding at approximately its current rate. It also explains why the universe appears so homogeneous, or uniform. Further, it explains why scientists have never detected magnetic monopoles (single north or south poles that are not connected to an opposite pole).

The inflationary theory holds that very early in its life, the universe expanded much more rapidly than it now expands. This period of expansion started 1 × 10⁻³⁵ sec after the big bang and lasted only 1 × 10⁻³² sec (1 × 10⁻³⁵ and 1 × 10⁻³² are extremely small numbers—a decimal point followed by thirty-four zeroes and then a 1, or thirty-one zeroes and then a 1, respectively). During that brief interval, the universe expanded to be 1 × 10⁵⁰ times its previous size (1 × 10⁵⁰ is a very large number—a 1 followed by 50 zeroes). The intense inflation then ended—still well within the first second of time. Then the universe assumed a rate of expansion much closer to that of the present.

In the inflationary theory, the observable universe began in a minuscule region of space. The universe was so compact that all parts of it could be in contact with each other. This allowed the universe to become very homogeneous. The inflationary period rapidly and evenly spread out this region, giving us the almost homogeneous universe we see today. This homogeneity of the universe may not seem apparent on a small scale, with planets, stars, galaxies, and clusters of galaxies adding matter in lumps. But on a larger scale, clusters of galaxies are spread out fairly evenly.

The inflationary theory also explains why the universe’s density is so close to the limit that determines whether the universe will expand forever or will eventually begin to contract. The universe’s outward expansion is propelled by the initial force of the big bang. If the universe is dense enough, the force of gravity may eventually overcome the universe’s expansion and start pulling matter in the universe back together. Cosmologists are still trying to find out how dense the universe is. The estimates fall very close to the dividing value between a universe that expands forever (an open universe) and a universe that eventually contracts (a closed universe). The inflationary period in the theory would have expanded the universe just fast enough for it to reach the size that led to the estimated density of today’s universe.

The inflationary theory incorporates the existence of magnetic monopoles, which other physical and cosmological theories predict. But it also explains that the universe expanded so much in that fraction of a second that the relatively few magnetic monopoles were spread far apart. Cosmologists use the inflationary theory to predict that there may be only one monopole in the observable universe.

The best evidence for the inflationary universe—the homogeneity of the universe, the mass density of the universe, and the absence of magnetic monopoles—was also the justification for the theory. For this reason, some scientists have criticized the theory as not being testable. In the philosophy of science, an idea only becomes a theory when it can be tested for truth. Because of this, astrophysicists are divided on the question of whether the inflationary theory should really be considered a theory. The current version of the inflationary theory predicts that the density of the universe is very close to the value that divides the two possibilities for the future of the universe: expanding forever or eventually ending expansion and beginning to shrink. If scientists collect definitive evidence that the universe is ‘open,’ or will expand forever, the inflationary theory would be proved false.

American cosmologist Alan Guth first formulated the idea of an inflationary universe at the Massachusetts Institute of Technology (MIT) in the 1970s. Guth developed the inflationary theory when he was applying the ideas of particle physics to questions about the expansion of the universe. In Guth's original version of the theory, the inflationary era did not end automatically. Several scientists helped to remove this flaw of the original theory in subsequent models. These scientists included Soviet-American cosmologist Andrei Linde, then at the Lebedev Institute in Moscow; British cosmologist Andreas Albrecht, now at Imperial College, London; and American cosmologist Paul Steinhardt, of the University of Pennsylvania. The equations they generated explained why the initial burst of rapid expansion ended, changing the expansion of the universe into today’s more placid era.

Antimatter, matter composed of elementary particles that are, in a special sense, mirror images of the particles that make up ordinary matter as it is known on earth. Antiparticles have the same mass as their corresponding particles but have opposite electric charges or other properties related to electromagnetism. For example, the antimatter electron, or positron, has opposite electric charge and magnetic moment (a property that determines how it behaves in a magnetic field), but is identical in all other respects to the electron. The antimatter equivalent of the chargeless neutron, on the other hand, differs in having a magnetic moment of opposite sign (magnetic moment is another electromagnetic property). In all of the other parameters involved in the dynamical properties of elementary particles, such as mass, spin, and decay rate, antiparticles are identical with their corresponding particles.

The existence of antiparticles was first proposed by the British physicist Paul Adrien Maurice Dirac as a consequence of his attempt to apply the techniques of relativistic mechanics to quantum theory. In 1928 he developed the concept of a positively charged electron, but its actual existence was not established experimentally until 1932. The existence of other antiparticles was presumed but not confirmed until 1955, when antiprotons and antineutrons were observed in particle accelerators. Since then, the full range of antiparticles has been observed or indicated. Antimatter atoms were created for the first time in September 1995 at the European Organization for Nuclear Research (CERN). Positrons were combined with antimatter protons to produce antimatter hydrogen atoms. These atoms of antimatter existed for only about forty-billionths of a second, but physicists hope future experiments will determine what differences there are between normal hydrogen and its antimatter counterpart.

A profound problem for particle physics and for cosmology in general is the apparent scarcity of antiparticles in the universe. Their nonexistence, except momentarily, on earth is understandable, because particles and antiparticles are mutually annihilated with a great release of energy when they meet. Distant galaxies could possibly be made of antimatter, but no direct method of confirmation exists. Most of what is known about the far universe arrives in the form of photons, which are identical with their antiparticles and thus reveal little about the nature of their sources. The prevailing opinion, however, is that the universe consists overwhelmingly of ‘ordinary’ matter, and explanations for this have been proposed by recent cosmological theory.

In 1997 scientists studying data gathered by the Compton Gamma Ray Observatory (GRO) operated by the National Aeronautics and Space Administration (NASA) found that the earth’s home galaxy—the Milky Way—contains large clouds of antimatter particles. Astronomers suggest that these clouds form when high-energy events—such as the collision of neutron stars, exploding stars, or black holes—create radioactive elements that decay into matter and antimatter or heat matter enough to make it split into particles of matter and antimatter. When antimatter particles meet particles of matter, the two annihilate each other and produce a burst of gamma rays. It was these gamma rays that GRO detected.

Astrophysics, the branch of astronomy that seeks to understand the birth, evolution, and end states of celestial objects and systems in terms of the physical laws that govern them. For each object or system under study, astrophysicists observe radiation emitted over the entire electromagnetic spectrum and variations of these emissions over time (see Spectroscopy; Spectrum). This information is then interpreted with the aid of theoretical models. It is the task of such a model to explain the mechanisms by which radiation is generated within or near the object, and how the radiation then escapes. Radiation measurements can be used to estimate the distribution and energy states of the atoms, as well as the kinds of atoms, making up the object. The temperatures and pressures in the object may then be estimated using the laws of thermodynamics.

A star begins life as a large, relatively cool mass of gas in a nebula, such as the Orion Nebula (left). As gravity causes the gas to contract, the nebula’s temperature rises, eventually becoming hot enough to trigger nuclear reactions in its atoms and form a star. A main sequence star (middle) shines because of the massive, fairly steady output of energy from the fusion of hydrogen nuclei to form helium. The main sequence phase of a medium-sized star is believed to last as long as 10 billion years. The Sun is just over halfway through this phase. Stars eventually use up their energy supply, ending their lives as white dwarfs, which are extremely small, dense globes, or in the case of larger stars, as spectacular explosions called supernovas. A supernova is shown within the Large Magellanic Cloud at the bottom right of the rightmost photo.

Models of celestial objects in equilibrium are based on balances among the forces being exerted on and within the objects, with slow evolution taking place as nuclear and chemical transformations occur. Cataclysmic phenomena are interpreted in terms of models in which these forces are out of balance.

Stars are among the best understood celestial objects. If the light of a star is dispersed into its wavelength spectrum, the relative intensities at various wavelengths yield considerable information about the star. The surface temperature can be estimated, using the laws of thermal radiation.

All the elements in the universe except hydrogen and helium were forged within stars, according to a theory first proposed in the 1940s by British astronomer and mathematician Sir Fred Hoyle. American physicist William Fowler, who collaborated with Hoyle and British-born astronomers Geoffrey and Margaret Burbidge, shared the Nobel Prize in physics in 1983 for his research on the synthesis of the heavy elements within stars.

If the distance of the star is known, its luminosity can be found by summing the observed intensities over all wavelengths. Its radius can then be found using the fact that the luminosity is the product of the energy emitted per unit area (which depends only on the surface temperature) and the total surface area.
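
A short Python sketch of the two relations just described, using illustrative, roughly solar numbers that are not taken from this article. Wien’s displacement law supplies the surface temperature from the wavelength at which the spectrum peaks, and the luminosity relation L = 4πR²σT⁴ then yields the radius:

    import math

    SIGMA = 5.670e-8                    # Stefan-Boltzmann constant, W m^-2 K^-4
    WIEN_B = 2.898e-3                   # Wien's displacement constant, m K

    peak_wavelength = 5.0e-7            # spectrum peaks near 500 nm (assumed)
    T = WIEN_B / peak_wavelength        # surface temperature, ~5800 K

    L = 3.8e26                          # luminosity in watts (roughly solar)
    R = math.sqrt(L / (4 * math.pi * SIGMA * T**4))   # radius from L and T
    print(f"T ~ {T:.0f} K, R ~ {R:.2e} m")            # close to the Sun's 7.0e8 m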

If the spectrum of a star is studied under high resolution, many dark lines are seen at specific wavelengths. These lines are due to the absorption of light from deeper layers by atoms in the cooler layers above. The kinds of atoms present in the star can then be identified by comparing stellar absorption lines with those produced in the laboratory by known gases, and the temperature and pressure of the atmosphere as well as the relative abundances of the chemical elements can be calculated.

In 1978, when renowned German-born American physicist Hans Bethe began studying how stars collapse, he was 72 years old and had already won the 1967 Nobel Prize in physics for describing how stars burn their nuclear fuel. American physicist Gerald Brown presented a problem to Bethe: What happens to a star when it runs out of fuel? By the next morning, Bethe ‘had solved the problem of the collapse of stars,’ Brown recalled in 1996. Bethe and Brown described their collaborative work on the supernova (an exploding star) in the following article, which appeared in Scientific American in 1985.

Most stars are classified as part of a ‘main sequence’ in which both temperature and luminosity increase with mass. Some stars are much brighter and hence much larger than main-sequence stars of the same temperature, and are called red giants. Many stars are much fainter and hence much smaller than main-sequence stars of the same temperature, including white dwarfs (1 percent the size of the Sun) and neutron stars (0.001 percent the size of the Sun). Black holes emit no light, but absorb all light that passes within a few kilometers.

Theoretical models of stellar interiors are based on the theory that an equilibrium exists between the force of gravity, which tends to make the star collapse, and the pressure of its superheated gases, which tends to make it expand. High stellar temperatures also drive a flow of heat from the inside of the star to the outside. If the star is to remain in equilibrium, this heat loss must be compensated by the energy released by nuclear reactions in the core. As various nuclear fuels are exhausted, the star slowly evolves, and the core contracts to increasingly higher densities.
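
In the standard textbook treatment (a relation assumed here, since the article does not spell it out), this balance is the condition of hydrostatic equilibrium: at every radius r inside the star, the outward push of the pressure gradient supports the weight of the overlying gas,

\frac{dP}{dr} = -\frac{G\,m(r)\,\rho(r)}{r^{2}}

where P is the pressure, \rho(r) the density, m(r) the mass enclosed within radius r, and G the gravitational constant. When the nuclear energy release can no longer maintain the pressure, this balance fails and the core contracts.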

For stars of low mass, this process ends when the outer layers are gently ejected to form a planetary nebula; the core then cools down to form a white dwarf. More massive stars become unstable; as they evolve, this core suddenly collapses to form a neutron star or black hole, and the energy thereby released ejects the outer layers at very high speed, forming a supernova.

The elliptical galaxies M86 (centre) and M84 (right) are members of the Virgo cluster of galaxies, located about 50 million light-years away from our smaller cluster, the Local Group. Elliptical galaxies are populated by older stars and contain little interstellar matter. The brightest galaxies in a cluster are usually giant ellipticals.

Galaxies are giant systems of stars at very great distances from each other. Many galaxies also contain interstellar material in the form of diffuse gas and dust particles, permeated by weak magnetic fields in which are trapped energetic charged particles called cosmic rays.

Astronomer Jay M. Pasachoff, author of A Field Guide to the Stars and Planets (Fourth edition, 2001), is one of the world’s leading experts on the universe and the celestial bodies it contains. Here, Pasachoff answers a wide range of intriguing questions, including what could be done if an asteroid threatened to collide with the Earth, what are brown dwarfs and wormholes, is there life on Mars or on extrasolar planets, and can you ever escape Earth’s gravity. 

Elliptical galaxies are spheroidal in shape and have little interstellar matter; spiral galaxies are highly flattened rotating disks composed of interstellar matter and large numbers of massive stars, as well as the less massive stars that are also common in ellipticals. The matter in the disk forms a spiral pattern, usually with two spiral arms.

In the nucleus of some galaxies active sources of relativistic particles (particles with speeds approaching that of light) emit radio waves and X rays as well as visible light. This phenomenon is observed in both elliptical and spiral galaxies; objects called quasars seem to be extreme forms of such activity, with luminosities ranging up to 100 times that of all the stars in the galaxy. At present the explanation of the energy source in active galaxies is unknown.

Theoretical models of galaxies are based on the exchange of matter and energy between stars and interstellar matter. When a galaxy forms, it consists at first entirely of interstellar matter; but stars then form from this gas. From the supernovas occurring among these stars, matter enriched in heavy elements is injected back into space. Thus, interstellar matter is progressively enriched with heavy elements, which then become part of new generations of stars. In ellipticals, the process is largely complete, and little interstellar matter remains. In spirals, however, much interstellar matter remains; in these galaxies the rate of star formation is much higher in the spiral arms than in the core. Apparently, spiral density waves compress interstellar matter to form dark clouds, and these subsequently collapse to form new stars.

Cosmology seeks to understand the structure of the universe. Modern cosmology is based on the American astronomer Edwin Hubble's discovery in 1929 that all galaxies are receding from each other with velocities proportional to their distances. In 1922 the Russian astronomer Alexander Friedmann proposed that the universe is filled uniformly with matter. Using Albert Einstein's general theory of relativity to calculate the gravitational effects, he showed that such a system must originate in a singular state of infinite density (now called the big bang) and expand from that state in just the way Hubble observed. Most astronomers today interpret their data in terms of the big bang model, which in the early 1980s was further refined by the so-called inflationary theory, an attempt to account for the conditions of the universe in the first instants after the big bang. According to the theory, the big bang occurred about 14 billion years ago. The discovery in 1965 of the cosmic microwave background radiation, a faint glow of microwaves almost identical in all directions, fulfilled a prediction of the big bang model that radiation created in the big bang itself should still be present in the universe.
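
Hubble's discovery can be written as v = H0 × d, where v is a galaxy's recession velocity, d its distance, and H0 the Hubble constant. The sketch below assumes the commonly quoted round value H0 = 70 km/s per megaparsec; the reciprocal of H0 then gives an expansion age of roughly 14 billion years, matching the figure above.

H0 = 70.0              # Hubble constant, km/s per megaparsec (assumed round value)
KM_PER_MPC = 3.086e19  # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

def recession_velocity(distance_mpc):
    # Hubble's law: velocity proportional to distance.
    return H0 * distance_mpc  # km/s

print(recession_velocity(100))  # a galaxy 100 Mpc away recedes at ~7000 km/s

# Rough age of the universe: the time for galaxies to reach their
# present separations at their present speeds, t = 1 / H0.
hubble_time_s = KM_PER_MPC / H0
print(hubble_time_s / SECONDS_PER_YEAR / 1e9)  # about 14 billion years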

Theorists now believe that the universe will continue to expand forever. There does not appear to be enough mass in the universe for the attraction of gravity to slow and eventually reverse the universe’s expansion. Observations of extremely distant supernovas indicate instead that the universe’s expansion is accelerating. Astronomers have invented the name ‘dark energy’ for the cause of this expansion, but they do not know what dark energy is.

Gravitation, the force of attraction between all objects that tends to pull them toward one another. It is a universal force, affecting the largest and smallest objects, all forms of matter, and energy. Gravitation governs the motion of astronomical bodies. It keeps the moon in orbit around the earth and keeps the earth and the other planets of the solar system in orbit around the sun. On a larger scale, it governs the motion of stars and slows the outward expansion of the entire universe because of the inward attraction of galaxies to other galaxies. Typically the term gravitation refers to the force in general, and the term gravity refers to the earth's gravitational pull.

Gravitation is one of the four fundamental forces of nature, along with electromagnetism and the weak and strong nuclear forces, which hold together the particles that make up atoms. Gravitation is by far the weakest of these forces and, as a result, is not important in the interactions of atoms and nuclear particles or even of moderate-sized objects, such as people or cars. Gravitation is important only when very large objects, such as planets, are involved. This is true for several reasons. First, the force of gravitation reaches great distances, while nuclear forces operate only over extremely short distances and decrease in strength very rapidly as distance increases. Second, gravitation is always attractive. In contrast, electromagnetic forces between particles can be repulsive or attractive, depending on whether the particles have like or opposite electrical charges. In bulk matter these attractive and repulsive forces tend to cancel each other out, leaving only a weak net force. Gravitation has no repulsive force and, therefore, no such cancellation or weakening.

After presenting his general theory of relativity in 1915, German-born American physicist Albert Einstein tried in vain to unify his theory of gravitation with one that would include all the fundamental forces in nature. Einstein discussed his special and general theories of relativity and his work toward a unified field theory in a 1950 Scientific American article. At the time, he was not convinced that he had discovered a valid solution capable of extending his general theory of relativity to other forces. He died in 1955, leaving this problem unsolved.

The gravitational attraction of objects for one another is the easiest fundamental force to observe and was the first fundamental force to be described with a complete mathematical theory by the English physicist and mathematician Sir Isaac Newton. A more accurate theory called general relativity was formulated early in the 20th century by the German-born American physicist Albert Einstein. Scientists recognize that even this theory is not correct for describing how gravitation works in certain circumstances, and they continue to search for an improved theory.

Gravitation plays a crucial role in most processes on the earth. The ocean tides are caused by the gravitational attraction of the moon and the sun on the earth and its oceans. Gravitation drives weather patterns by making cold air sink and displace less dense warm air, forcing the warm air to rise. The gravitational pull of the earth on all objects holds the objects to the surface of the earth. Without it, the spin of the earth would send them floating off into space.

The gravitational attraction of every bit of matter in the earth for every other bit of matter amounts to an inward pull that holds the earth together against the pressure forces tending to push it outward. Similarly, the inward pull of gravitation holds stars together. When a star's fuel nears depletion, the processes producing the outward pressure weaken and the inward pull of gravitation eventually compresses the star to a very compact size.

Falling objects accelerate in response to the force exerted on them by Earth’s gravity. Different objects accelerate at the same rate, regardless of their mass. This illustration shows the speed at which a ball and a cat would be moving and the distance each would have fallen at intervals of a tenth of a second during a short fall.

If an object held near the surface of the earth is released, it will fall and accelerate, or pick up speed, as it descends. This acceleration is caused by gravity, the force of attraction between the object and the earth. The force of gravity on an object is also called the object's weight. This force depends on the object's mass, or the amount of matter in the object. The weight of an object is equal to the mass of the object multiplied by the acceleration due to gravity.
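
In symbols, W = m × g, where g is the acceleration due to gravity. A minimal sketch (the mass is chosen to match the 16-lb bowling ball discussed below):

g = 9.8  # acceleration due to gravity at Earth's surface, m/s^2

def weight_newtons(mass_kg):
    # Weight is mass times the acceleration due to gravity.
    return mass_kg * g

print(weight_newtons(7.26))  # a 7.26 kg (16 lb) bowling ball weighs about 71 N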

A bowling ball that weighs 16 lb is actually being pulled toward the earth with a force of 16 lb. In the metric system, the bowling ball is pulled toward the earth with a force of 71 newtons (a metric unit of force abbreviated N). The bowling ball also pulls on the earth with a force of 16 lb (71 N), but the earth is so massive that it does not move appreciably. In order to hold the bowling ball up and keep it from falling, a person must exert an upward force of 16 lb (71 N) on the ball. This upward force acts to oppose the 16 lb (71 N) downward weight force, leaving a total force of zero. The total force on an object determines the object's acceleration.


If the pull of gravity is the only force acting on an object, then all objects, regardless of their weight, size, or shape, will accelerate in the same manner. At the same place on the earth, the 16 lb (71 N) bowling ball and a 500 lb (2200 N) boulder will fall with the same rate of acceleration. As each second passes, each object will increase its downward speed by about 9.8 m/sec (32 ft/sec), resulting in an acceleration of 9.8 m/sec/sec (32 ft/sec/sec). In principle, a rock and a feather both would fall with this acceleration if there were no other forces acting. In practice, however, air friction exerts a greater upward force on the falling feather than on the rock and makes the feather fall more slowly than the rock.

The mass of an object does not change as it is moved from place to place, but the acceleration due to gravity, and therefore the object's weight, will change because the strength of the earth's gravitational pull is not the same everywhere. The earth's pull and the acceleration due to gravity decrease as an object moves farther away from the centre of the earth. At an altitude of 4000 miles (6400 km) above the earth's surface, for instance, the bowling ball that weighed 16 lb (71 N) at the surface would weigh only about 4 lb (18 N). Because of the reduced weight force, the rate of acceleration of the bowling ball at that altitude would be only one quarter of the acceleration rate at the surface of the earth. The pull of gravity on an object also changes slightly with latitude. Because the earth is not perfectly spherical, but bulges at the equator, the pull of gravity is about 0.5 percent stronger at the earth's poles than at the equator.
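
The fall-off follows the inverse-square law, as this sketch shows (Earth's radius is taken as the round 6,400-km value used above):

EARTH_RADIUS_KM = 6400.0  # round value, matching the altitude figure above

def weight_at_altitude(surface_weight_n, altitude_km):
    # The pull of gravity falls off as the square of the distance
    # from the centre of the earth.
    r = EARTH_RADIUS_KM + altitude_km
    return surface_weight_n * (EARTH_RADIUS_KM / r) ** 2

print(weight_at_altitude(71.0, 6400.0))  # ~17.8 N: one quarter of the surface weight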

The ancient Greek philosophers developed several theories about the force that caused objects to fall toward the earth. In the 4th century bc, the Greek philosopher Aristotle proposed that all things were made from some combination of the four elements, earth, air, fire, and water. Objects that were similar in nature attracted one another, and as a result, objects with more earth in them were attracted to the earth. Fire, by contrast, was dissimilar and therefore tended to rise from the earth. Aristotle also developed a cosmology, that is, a theory describing the universe, that was geocentric, or earth-centred, with the moon, sun, planets, and stars moving around the earth on spheres. The Greek philosophers, however, did not propose a connection between the force behind planetary motion and the force that made objects fall toward the earth.

At the beginning of the 17th century, the Italian physicist and astronomer Galileo discovered that all objects fall toward the earth with the same acceleration, regardless of their weight, size, or shape, when gravity is the only force acting on them. Galileo also had a theory about the universe, which he based on the ideas of the Polish astronomer Nicolaus Copernicus. In the mid-16th century, Copernicus had proposed a heliocentric, or sun-centred system, in which the planets moved in circles around the sun, and Galileo agreed with this cosmology. However, Galileo believed that the planets moved in circles because this motion was the natural path of a body with no forces acting on it. Like the Greek philosophers, he saw no connection between the force behind planetary motion and gravitation on earth.

In the late 16th and early 17th centuries the heliocentric model of the universe gained support from observations by the Danish astronomer Tycho Brahe and his assistant, the German astronomer Johannes Kepler. These observations, made without telescopes, were accurate enough to determine that the planets did not move in circles, as Copernicus had suggested. Kepler calculated that the orbits had to be ellipses (slightly elongated circles). The invention of the telescope made even more precise observations possible, and Galileo was one of the first to use a telescope to study astronomy. In 1610 Galileo observed that moons orbited the planet Jupiter, a fact that could not reasonably fit into an earth-centred model of the heavens.

The new heliocentric theory changed scientists' views about the earth's place in the universe and opened the way for new ideas about the forces behind planetary motion. However, it was not until the late 17th century that Isaac Newton developed a theory of gravitation that encompassed both the attraction of objects on the earth and planetary motion.

Because the Moon has significantly less mass than Earth, the weight of an object on the Moon’s surface is only one-sixth the object’s weight on Earth’s surface. This graph shows how much an object that weighs w on Earth would weigh at different points between the Earth and Moon. Since the Earth and Moon pull in opposite directions, there is a point, about 346,000 km (215,000 mi) from Earth, where the opposite gravitational forces would cancel, and the object's weight would be zero.
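
The position of that balance point follows from setting the two inverse-square pulls equal. A sketch of the calculation, using standard round values for the masses and the Earth-Moon distance:

import math

EARTH_MASS = 5.97e24    # kg
MOON_MASS = 7.35e22     # kg
EARTH_MOON_KM = 384400  # average centre-to-centre distance

# Setting G * M_earth / d^2 = G * M_moon / (D - d)^2 and solving
# for d gives d = D / (1 + sqrt(M_moon / M_earth)).
d = EARTH_MOON_KM / (1 + math.sqrt(MOON_MASS / EARTH_MASS))
print(d)  # about 346,000 km from Earth, as in the figure above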

To develop his theory of gravitation, Newton first had to develop the science of forces and motion called mechanics. Newton proposed that the natural motion of an object is motion at a constant speed on a straight line, and that it takes a force to slow down, speed up, or change the path of an object. Newton also invented calculus, a new branch of mathematics that became an important tool in the calculations of his theory of gravitation.

Newton proposed his law of gravitation in 1687 and stated that every particle in the universe attracts every other particle in the universe with a force that depends on the product of the two particles' masses divided by the square of the distance between them. The gravitational force between two objects can be expressed by the following equation: F = GMm/d², where F is the gravitational force, G is a constant known as the universal constant of gravitation, M and m are the masses of each object, and d is the distance between them. Newton considered a particle to be an object with a mass that was concentrated in a small point. If the mass of one or both particles increases, then the attraction between the two particles increases. For instance, if the mass of one particle is doubled, the force of attraction between the two particles is doubled. If the distance between the particles increases, then the attraction decreases as the square of the distance between them. Doubling the distance between two particles, for instance, will make the force of attraction one quarter as great as it was.

According to Newton, the force acts along a line between the two particles. In the case of two spheres, it acts along the line between their centres. The attraction between objects with irregular shapes is more complicated. Every bit of matter in the irregular object attracts every bit of matter in the other object. A simpler description is possible near the surface of the earth where the pull of gravity is approximately uniform in strength and direction. In this case there is a point in an object (even an irregular object) called the centre of gravity, at which all the force of gravity can be considered to be acting.

Newton's law affects all objects in the universe, from raindrops in the sky to the planets in the solar system. It is therefore known as the universal law of gravitation. In order to know the strength of gravitational forces in general, however, it became necessary to find the value of G, the universal constant of gravitation. Scientists needed to perform an experiment, but gravitational forces are very weak between objects found in a common laboratory and thus hard to observe. In 1798 the English chemist and physicist Henry Cavendish finally measured G with a very sensitive experiment in which he nearly eliminated the effects of friction and other forces. The value he found was 6.754 × 10⁻¹¹ N·m²/kg²—close to the currently accepted value of 6.670 × 10⁻¹¹ N·m²/kg² (a decimal point followed by 10 zeros and then the number 6670). This value is so small that the force of gravitation between two objects with a mass of 1 metric ton each, 1 metre from each other, is about 67 millionths of a newton, or about 15 millionths of a pound.
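
A minimal sketch of that last calculation, applying F = GMm/d² directly:

G = 6.670e-11  # universal constant of gravitation, N*m^2/kg^2

def gravitational_force(m1_kg, m2_kg, distance_m):
    # Newton's law of universal gravitation.
    return G * m1_kg * m2_kg / distance_m**2

# Two 1-metric-ton (1000 kg) masses one metre apart:
print(gravitational_force(1000, 1000, 1.0))  # ~6.7e-5 N: 67 millionths of a newton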

Gravitation may also be described in a completely different way. A massive object, such as the earth, may be thought of as producing a condition in space around it called a gravitational field. This field causes objects in space to experience a force. The gravitational field around the earth, for instance, produces a downward force on objects near the earth's surface. The field viewpoint is an alternative to the viewpoint that objects can affect each other directly across a distance. This way of thinking about interactions has proved to be very important in the development of modern physics.

Newton's law of gravitation was the first theory to accurately describe the motion of objects on the earth as well as the planetary motion that astronomers had long observed. According to Newton's theory, the gravitational attraction between the planets and the sun holds the planets in elliptical orbits around the sun. The earth's moon and moons of other planets are held in orbit by the attraction between the moons and the planets. Newton's law led to many new discoveries, the most important of which was the discovery of the planet Neptune. Scientists had noted unexplainable variations in the motion of the planet Uranus for many years. Using Newton's law of gravitation, the French astronomer Urbain Leverrier and the British astronomer John Couch Adams each independently predicted the existence of a more distant planet that was perturbing the orbit of Uranus. Neptune was discovered in 1846, in an orbit close to its predicted position.

Scientists used Newton's theory of gravitation successfully for many years. Several problems began to arise, however, involving motion that did not follow the law of gravitation or Newtonian mechanics. One problem was the observed and unexplainable deviations in the orbit of Mercury (which could not be caused by the gravitational pull of another orbiting body).

Another problem with Newton's theory involved reference frames, that is, the conditions under which an observer measures the motion of an object. According to Newtonian mechanics, two observers making measurements of the speed of an object will measure different speeds if the observers are moving relative to each other. A person on the ground observing a ball that is on a passing train will measure the speed of the ball as the same as the speed of the train. A person on the train observing the ball, however, will measure the ball's speed as zero. According to the traditional ideas about space and time, then, there could not be a constant, fundamental speed in the physical world, because all speed is relative. However, in the second half of the 19th century the Scottish physicist James Clerk Maxwell proposed a complete theory of electric and magnetic forces that contained just such a constant, which he called c. This constant speed was 300,000 km/sec (186,000 mi/sec) and was the speed of electromagnetic waves, including light waves. This feature of Maxwell's theory caused a crisis in physics because it indicated that speed was not always relative.

Albert Einstein resolved this crisis in 1905 with his special theory of relativity. An important feature of Einstein's new theory was that no particle, and no information of any kind, can travel faster than the fundamental speed c. In Newton's gravitation theory, however, information about gravitation moved at infinite speed. If a star exploded into two parts, for example, the change in gravitational pull would be felt immediately by a planet in a distant orbit around the exploded star. According to Einstein's theory, such instantaneous effects are not possible.

Though Newton's theory contained several flaws, it is still very practical for use in everyday life. Even today, it is sufficiently accurate for dealing with earth-based gravitational effects such as in geology (the study of the formation of the earth and the processes acting on it), and for most scientific work in astronomy. Only when examining exotic phenomena such as black holes (points in space with a gravitational force so strong that not even light can escape them) or in explaining the big bang (the origin of the universe) is Newton's theory inaccurate or inapplicable.

In 1915 Einstein formulated a new theory of gravitation that reconciled the force of gravitation with the requirements of his theory of special relativity. He proposed that gravitational effects move at the speed of c. He called this theory general relativity to distinguish it from special relativity, which only holds when there is no force of gravitation. General relativity produces predictions very close to those of Newton's theory in most familiar situations, such as the moon orbiting the earth. Einstein's theory differed from Newton's theory, however, in that it described gravitation as a curvature of space and time.

In Einstein's general theory of relativity, he proposed that space and time may be united into a single, four-dimensional geometry consisting of 3 space dimensions and 1 time dimension. In this geometry, called space-time, the motions of particles from point to point as time progresses are represented by curves called world lines. If there is no gravity acting, the most natural lines in this geometry are straight lines, and they represent particles that are moving always in the same direction with the same speed—that is, particles that have no force acting on them. If a particle is acted on by a force, then its world line will not be straight. Einstein also proposed that the effect of gravitation should not be represented as the deviation of a world line from straightness, as it would be for an electrical force. If gravitation is present, it should not be considered a force. Rather, gravitation changes the most natural world lines and thereby curves the geometry of space-time. In a curved geometry, such as the two-dimensional surface of the earth, there are no straight lines. Instead, there are special curves called geodesics, examples of which are great circles around the earth. These special curves are at each point as straight as possible, and they are the most natural lines in a curved geometry. The effect of gravitation would be to influence the geodesics in space-time. Near sources of gravitation the space is strongly curved and the geodesics behave less and less like those in flat, uncurved space-time. In the solar system, for example, the effect of the sun and the earth is to cause the moon to move on a geodesic that winds around the geodesic of the earth about 13 times a year.

Einstein's theory required verification, but the level of precision in measurement needed to distinguish between Einstein's theory and Newton's theory was difficult to achieve in the early 20th century and remains so today. There were two predictions, however, that could be tested. One involved deviations in the orbit of Mercury. Astronomers had observed that the ellipse of Mercury's orbit itself rotated—that is, the point nearest the sun, called the perihelion, slowly advanced around the sun. The rate of advancement predicted by Newton's theory differed slightly from what astronomers had measured, but Einstein's theory predicted the correct rate.

The second test involved measuring the bending of light as it passed around the sun. Both Newton's and Einstein's theories predicted that light would be deflected by gravitation. But the amount of deflection predicted by the two theories differed. The light to be measured in such a test originates in distant stars. However, under ordinary conditions the sun's brightness prevents scientists from observing the light from these stars. This problem disappears during an eclipse, when the moon blocks the sun's light. In 1919 a special British expedition took photographs during an eclipse. Scientists measured the deflection of starlight as it passed by the sun and arrived at numbers that agreed with Einstein's prediction. Subsequent eclipse observations also have confirmed Einstein's theory.

Other physicists have proposed relativistic theories of gravitation to compete with Einstein's. In these competing theories, almost all of which are geometrical like Einstein's, gravitational effects move at the speed c. They differ mostly in the mathematical details. Even using the technology of the late 20th century, scientists still find it very difficult to test these theories with experiments and observations. But Einstein's theory has passed all tests that have been made so far.

Einstein's general relativity theory predicts special gravitational conditions. The Big Bang theory, which describes the origin and early expansion of the universe, is one conclusion based on Einstein's theory that has been verified in several independent ways.

Another conclusion suggested by general relativity, as well as other relativistic theories of gravitation, is that gravitational effects move in waves. Astronomers have observed a loss of energy in a pair of neutron stars (stars composed of densely packed neutrons) that are orbiting each other. The astronomers theorize that energy-carrying gravitational waves are radiating from the pair, depleting the stars of their energy. Very violent astrophysical events, such as the explosion of stars or the collision of neutron stars, can produce gravitational waves strong enough that they may eventually be directly detectable with extremely precise instruments. Astrophysicists are designing such instruments with the hope that they will be able to detect gravitational waves by the beginning of the 21st century.

Another gravitational effect predicted by general relativity is the existence of black holes. The idea of a star with a gravitational force so strong that light cannot escape from its surface can be traced to Newtonian theory. Einstein modified this idea in his general theory of relativity. Because light cannot escape from a black hole, for any object—a particle, spacecraft, or wave—to escape, it would have to move past light. But light moves outward at the speed c. According to relativity, c is the highest attainable speed, so nothing can pass it. The black holes that Einstein envisioned, then, allow no escape whatsoever. An extension of this argument shows that when gravitation is this strong, nothing can even stay in the same place, but must move inward. Even the surface of a star must move inward, and must continue the collapse that created the strong gravitational force. What remains then is not a star, but a region of space from which emerges a tremendous gravitational force.
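
The size of that region can be estimated with the Schwarzschild radius, r = 2GM/c², a general-relativity result not stated in the article itself. A sketch applying it to a star of one solar mass:

G = 6.67e-11         # gravitational constant, N*m^2/kg^2
C = 3.0e8            # speed of light, m/s
SOLAR_MASS = 2.0e30  # kg (round value)

def schwarzschild_radius(mass_kg):
    # Radius of the region from which not even light can escape.
    return 2 * G * mass_kg / C**2

print(schwarzschild_radius(SOLAR_MASS))  # about 3000 m: a few kilometres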

Einstein's theory of gravitation revolutionized 20th-century physics. Another important revolution that took place was quantum theory. Quantum theory states that physical interactions, or the exchange of energy, cannot be made arbitrarily small. There is a minimal interaction that comes in a packet called the quantum of an interaction. For electromagnetism the quantum is called the photon. Like the other interactions, gravitation also must be quantized. Physicists call a quantum of gravitational energy a graviton. In principle, gravitational waves arriving at the earth would consist of gravitons. In practice, gravitational waves would consist of apparently continuous streams of gravitons, and individual gravitons could not be detected.

Einstein's theory did not include quantum effects. For most of the 20th century, theoretical physicists have been unsuccessful in their attempts to formulate a theory that resembles Einstein's theory but also includes gravitons. Despite the lack of a complete quantum theory, it is possible to make some partial predictions about quantized gravitation. In the 1970s, British physicist Stephen Hawking showed that quantum mechanical processes in the strong gravitational pull just outside of black holes would create particles and quanta that move away from the black hole, thereby robbing it of energy.

An important trend in modern theoretical physics is to find a theory of everything (TOE), in which all four of the fundamental forces are seen to be really different aspects of the same single universal force. Physicists already have unified electromagnetism and the weak nuclear force and have made progress in joining these two forces with the strong nuclear force. Relativistic gravitation, however, with its geometric and mathematically complex character, poses the most difficult challenge. Einstein spent most of his later years searching for an all-encompassing theory by trying to make electromagnetism geometrical like gravitation. The modern approach is to try to make gravitation fit the pattern of the other fundamental forces. Much of this work involves looking for mathematical patterns. A TOE is difficult to test using experiments because the effects of a TOE would be important only in the rarest circumstances.

Uncertainty Principle, in quantum mechanics, theory stating that it is impossible to specify simultaneously the position and momentum of a particle, such as an electron, with precision. Also called the indeterminacy principle, the theory further states that a more accurate determination of one quantity will result in a less precise measurement of the other, and that the product of both uncertainties is never less than Planck's constant (named after the German physicist Max Planck) divided by 4π. The uncertainty is of very small magnitude and results from the fundamental nature of the particles being observed. In quantum mechanics, probability calculations therefore replace the exact calculations of classical mechanics.

Formulated in 1927 by the German physicist Werner Heisenberg, the uncertainty principle was of great significance in the development of quantum mechanics. Its philosophic implications of indeterminacy created a strong trend of mysticism among scientists who interpreted the concept as a violation of the fundamental law of cause and effect. Other scientists, including Albert Einstein, believed that the uncertainty involved in observation in no way contradicted the existence of laws governing the behaviour of the particles or the ability of scientists to discover these laws.

Quantum Theory, in physics, the description of the particles that make up matter and of how they interact with each other and with energy. Quantum theory explains in principle how to calculate what will happen in any experiment involving physical or biological systems, and hence how our world works. The name ‘quantum theory’ comes from the fact that the theory describes the matter and energy in the universe in terms of single indivisible units called quanta (singular quantum). Quantum theory is different from classical physics. Classical physics is an approximation of the set of rules and equations in quantum theory. Classical physics accurately describes the behaviour of matter and energy in the everyday universe. For example, classical physics explains the motion of a car accelerating or of a ball flying through the air. Quantum theory, on the other hand, can accurately describe the behaviour of the universe on a much smaller scale, that of atoms and smaller particles. The rules of classical physics do not explain the behaviour of matter and energy on this small scale. Quantum theory is more general than classical physics, and in principle, it could be used to predict the behaviour of any physical, chemical, or biological system. However, explaining the behaviour of the everyday world with quantum theory is too complicated to be practical.

Quantum theory not only specifies new rules for describing the universe but also introduces new ways of thinking about matter and energy. The tiny particles that quantum theory describes do not have defined locations, speeds, and paths like objects described by classical physics. Instead, quantum theory describes positions and other properties of particles in terms of the chances that the property will have a certain value. For example, it allows scientists to calculate how likely it is that a particle will be in a certain position at a certain time.

Quantum description of particles allows scientists to understand how particles combine to form atoms. Quantum description of atoms helps scientists understand the chemical and physical properties of molecules, atoms, and subatomic particles. Quantum theory enabled scientists to understand the conditions of the early universe, how the Sun shines, and how atoms and molecules determine the characteristics of the material that they make up. Without quantum theory, scientists could not have developed nuclear energy or the electric circuits that provide the basis for computers.

Quantum theory describes all of the fundamental forces—except gravitation—that physicists have found in nature. The forces that quantum theory describes are the electrical, the magnetic, the weak, and the strong. Physicists often refer to these forces as interactions, because the forces control the way particles interact with each other. Interactions also affect spontaneous changes in isolated particles.

When two pulses travelling on a string meet each other, the amplitudes of the pulses are added together to produce the shape of the resulting pulse. If the pulses are identical but travel on opposite sides of the string, then the sum of the amplitudes is zero and the string will appear flat for one instant (A). This is called destructive interference. When the two identical pulses travel on the same side of the string, then the sum of the amplitudes is double the amplitude of a single pulse when the pulses are together (B). This is called constructive interference.

One of the striking differences between quantum theory and classical physics is that quantum theory describes energy and matter both as waves and as particles. The type of energy physicists study most often with quantum theory is light. Classical physics considers light to be only a wave, and it treats matter strictly as particles. Quantum theory acknowledges that both light and matter can behave like waves and like particles.

Nobody really understands quantum physics, says scientist John Gribbin. Even to advanced physicists, the question of why subatomic particles can act as both waves and particles is still a puzzle. But the classic 19th-century ‘experiment with two holes’ is still the best way to illustrate how they behave that way. Gribbin’s simple explanation of the experiment illuminates why quantum mechanics, which provides the basis for modern physics and the scientific understanding of the structure of matter, still challenges common sense.

It is important to understand how scientists describe the properties of waves in order to understand how waves fit into quantum theory. A familiar type of wave occurs when a rope is tied to a solid object and someone moves the free end up and down. Waves travel along the rope. The highest points on the rope are called the crests of the waves. The lowest points are called troughs. One full wave consists of a crest and trough. The distance from crest to crest or from trough to trough—or from any point on one wave to the identical point on the next wave—is called the wavelength. The frequency of the waves is the number of waves per second that pass by a given point along the rope.

If waves travelling down the rope hit the stationary end and bounce back, like water waves bouncing against a wall, two waves on the rope may meet each other, hitting the same place on the rope at the same time. These two waves will interfere, or combine. If the two waves exactly line up—that is, if the crests and troughs of the waves line up—the waves interfere constructively. This means that the trough of the combined wave is deeper and the crest is higher than those of the waves before they combined. If the two waves are offset by exactly half of a wavelength, the trough of one wave lines up with the crest of the other. This alignment creates destructive interference—the two waves cancel each other out and a momentary flat spot appears on the rope.
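
A numeric sketch of this superposition, using illustrative unit-amplitude sine waves rather than a physical simulation:

import math

def wave(x, phase=0.0):
    # A simple sinusoidal wave of unit amplitude.
    return math.sin(x + phase)

x = 1.0  # any point along the rope
in_step = wave(x) + wave(x)               # crests aligned with crests
out_of_step = wave(x) + wave(x, math.pi)  # offset by half a wavelength

print(in_step)      # twice the single-wave value: constructive interference
print(out_of_step)  # essentially zero: destructive interference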

Like classical physics, quantum theory sometimes describes light as a wave, because light behaves like a wave in many situations. Light is not a vibration of a solid substance, such as a rope. Instead, a light wave is made up of a vibration in the intensity of the electric and magnetic fields that surround any electrically charged object.

Like the waves moving along a rope, light waves travel and carry energy. The amount of energy depends on the frequency of the light waves: the higher the frequency, the higher the energy. The frequency of a light wave is also related to the colour of the light. For example, blue light has a higher frequency than that of red light. Therefore, a beam of blue light has more energy than an equally intense beam of red light has.

Unlike classical physics, quantum theory also describes light as a particle. Scientists revealed this aspect of light behaviour in several experiments performed during the early 20th century. In one experiment, physicists discovered an interaction between light and particles in a metal. They called this interaction the photoelectric effect. In the photoelectric effect, a beam of light shining on a piece of metal makes the metal emit electrons. The light adds energy to the metal’s electrons, giving them enough energy to break free from the metal. If light were made strictly of waves, each electron in the metal could absorb many continuous waves of light and gain more and more energy. Increasing the intensity of the light, or adding more light waves, would add more energy to the emitted electrons. Shining a more and more intense beam of light on the metal would make the metal emit electrons with more and more energy.

Scientists found, however, that shining a more intense beam of light on the metal just made the metal emit more electrons. Each of these electrons had the same energy as that of electrons emitted with low intensity light. The electrons could not be interacting with waves, because adding more waves did not add more energy to the electrons. Instead, each electron had to be interacting with just a small piece of the light beam. These pieces were like little packets of light energy, or particles of light. The size, or energy, of each packet depended only on the frequency, or colour, of the light—not on the intensity of the light. A more intense beam of light just had more packets of light energy, but each packet contained the same amount of energy. Individual electrons could absorb one packet of light energy and break free from the metal. Increasing the intensity of the light added more packets of energy to the beam and enabled a greater number of electrons to break free—but each of these emitted electrons had the same amount of energy. Scientists could only change the energy of the emitted electrons by changing the frequency, or colour, of the beam. Changing from red light to blue light, for example, increased the energy of each packet of light. In this case, each emitted electron absorbed a bigger packet of light energy and had more energy after it broke free of the metal. Using these results, physicists developed a model of light as a particle, and they called these light particles photons.
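
The energy bookkeeping here is Einstein's photoelectric relation: the emitted electron's energy equals the photon energy hf minus the metal's work function, the minimum energy needed to break free. The sketch below assumes a work function of 2.3 eV, roughly that of some alkali metals, purely for illustration:

H = 6.626e-34           # Planck's constant, joule-seconds
EV = 1.602e-19          # joules per electron volt
WORK_FUNCTION_EV = 2.3  # assumed work function, for illustration only

def emitted_electron_energy_ev(frequency_hz):
    # Each electron absorbs one photon of energy h * f; the surplus
    # over the work function becomes the electron's kinetic energy.
    surplus = H * frequency_hz / EV - WORK_FUNCTION_EV
    return max(surplus, 0.0)  # no emission if the photon is too weak

red = 4.3e14   # Hz
blue = 6.4e14  # Hz
print(emitted_electron_energy_ev(red))   # 0.0: red photons are below threshold
print(emitted_electron_energy_ev(blue))  # ~0.35 eV per emitted electron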

In 1922 American physicist Arthur Compton discovered another interaction, now called the Compton effect, that reveals the particle nature of light. In the Compton effect, light collides with an electron. The collision knocks the electron off course and changes the frequency, and therefore the energy, of the light. The light affects the electron in the same way a particle with momentum would: It bumps the electron and changes the electron’s path. The light is also affected by the collision as though it were a particle, in that its energy and momentum change.

Momentum is a quantity that can be defined for all particles. For light particles, or photons, momentum depends on the frequency, or colour, of the photon, which in turn depends on the photon’s energy. The energy of a photon is equal to a constant number, called Planck’s constant, times the frequency of the photon. Planck’s constant is named for German physicist Max Planck, who first proposed the relationship between energy and frequency. The accepted value of Planck’s constant is 6.626 × 10⁻³⁴ joule-second. This number is very small—written out, it is a decimal point followed by 33 zeroes, followed by the digits 6626. The energy of a single photon is therefore very small.
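
As a quick check of these magnitudes (the wavelength is an assumed value typical of green light):

H = 6.626e-34  # Planck's constant, joule-seconds
C = 3.0e8      # speed of light, m/s

wavelength = 550e-9        # metres, typical of green light (assumed)
frequency = C / wavelength
energy = H * frequency     # E = h * f
momentum = H / wavelength  # p = h / wavelength

print(energy)    # about 3.6e-19 joules per photon: a very small energy
print(momentum)  # about 1.2e-27 kg*m/s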

The dual nature of light seems puzzling because we have no everyday experience with wave-particle duality. Waves are everyday phenomena; we are all familiar with waves on a body of water or on a vibrating rope. Particles, too, are everyday objects—baseballs, cars, buildings, and even people can be thought of as particles. But to our senses, there are no everyday objects that are both waves and particles. Scientists increasingly find that the rules that apply to the world we see are only approximations of the rules that govern the unseen world of light and subatomic particles.

In 1923 French physicist Louis de Broglie suggested that all particles—not just photons—have both wave and particle properties. He calculated that every particle has a wavelength (represented by λ, the Greek letter lambda) equal to Planck’s constant (h) divided by the momentum (p) of the particle: λ = h/p. Electrons, atoms, and all other particles have de Broglie wavelengths. The momentum of an object depends on its speed and mass, so the faster and heavier an object is, the larger its momentum (p) will be. Because Planck’s constant (h) is an extremely tiny number, the de Broglie wavelength (h/p) of any visible object is exceedingly small. In fact, the de Broglie wavelength of anything much larger than an atom is smaller than the size of one of its atoms. For example, the de Broglie wavelength of a baseball moving at 150 km/h (90 mph) is 1.1 × 10⁻³⁴ m (3.6 × 10⁻³⁴ ft). The diameter of a hydrogen atom (the simplest and smallest atom) is about 5 × 10⁻¹¹ m (about 2 × 10⁻¹⁰ ft), more than 100 billion trillion times larger than the de Broglie wavelength of the baseball. The de Broglie wavelengths of everyday objects are so tiny that the wave nature of these objects does not affect their visible behaviour, so their wave-particle duality is undetectable to us.
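
A sketch reproducing the baseball comparison (the baseball's mass is the standard 0.145 kg; the electron speed is an assumed illustrative value):

H = 6.626e-34  # Planck's constant, joule-seconds

def de_broglie_wavelength(mass_kg, speed_m_s):
    # lambda = h / p, with momentum p = m * v.
    return H / (mass_kg * speed_m_s)

# A 0.145 kg baseball at 150 km/h (41.7 m/s):
print(de_broglie_wavelength(0.145, 41.7))    # ~1.1e-34 m, as quoted above

# An electron (9.11e-31 kg) at an assumed speed of 1e6 m/s:
print(de_broglie_wavelength(9.11e-31, 1e6))  # ~7e-10 m: atomic scale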

In the 20th century, physicists discovered that matter behaved as both a wave and a particle. Austrian physicist and Nobel Prize winner Erwin Schrödinger discussed this apparent paradox in a lecture in Geneva, Switzerland, in 1952. A condensed and translated version of his lecture appeared in Scientific American the following year.

De Broglie wavelengths become important when the mass, and therefore momentum, of particles is very small. Particles the size of atoms and electrons have demonstrable wavelike properties. One of the most dramatic and interesting demonstrations of the wave behaviour of electrons comes from the double-slit experiment. This experiment consists of a barrier set between a source of electrons and an electron detector. The barrier contains two slits, each about the width of the de Broglie wavelength of an electron. On this small scale, the wave nature of electrons becomes evident, as described in the following paragraphs.

Scientists can determine whether the electrons are behaving like waves or like particles by comparing the results of double-slit experiments with those of similar experiments performed with visible waves and particles. To establish how visible waves behave in a double-slit apparatus, physicists can replace the electron source with a device that creates waves in a tank of water. The slits in the barrier are about as wide as the wavelength of the water waves. In this experiment, the waves spread out spherically from the source until they hit the barrier. The waves pass through the slits and spread out again, producing two new wave fronts with centres as far apart as the slits are. These two new sets of waves interfere with each other as they travel toward the detector at the far end of the tank.

The waves interfere constructively in some places (adding together) and destructively in others (cancelling each other out). The most intense waves—that is, those formed by the most constructive interference—hit the detector at the spot opposite the midpoint between the two slits. These strong waves form a peak of intensity on the detector. On either side of this peak, the waves destructively interfere and cancel each other out, creating a low point in intensity. Further out from these low points, the waves are weaker, but they constructively interfere again and create two more peaks of intensity, smaller than the large peak in the middle. The intensity then drops again as the waves destructively interfere. The intensity of the waves forms a symmetrical pattern on the detector, with a large peak directly across from the midpoint between the slits and alternating low points and smaller and smaller peaks on either side.

To see how particles behave in the double-slit experiment, physicists replace the water with marbles. The barrier slits are about the width of a marble, as the point of this experiment is to allow particles (in this case, marbles) to pass through the barrier. The marbles are put in motion and pass through the barrier, striking the detector at the far end of the apparatus. The results show that the marbles do not interfere with each other or with themselves like waves do. Instead, the marbles strike the detector most frequently in the two points directly opposite each slit.

When physicists perform the double-slit experiment with electrons, the detection pattern matches that produced by the waves, not the marbles. These results show that electrons do have wave properties. However, if scientists run the experiment using a barrier whose slits are much wider than the de Broglie wavelength of the electrons, the pattern resembles the one produced by the marbles. This shows that tiny particles such as electrons behave as waves in some circumstances and as particles in others.
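
The alternating peaks and low points can be modelled with the standard two-slit interference formula, a textbook result assumed here rather than derived in the article: the relative intensity at angle θ from the centre line is proportional to cos²(πd sin θ / λ), where d is the slit separation and λ the wavelength.

import math

def two_slit_intensity(angle_rad, slit_separation, wavelength):
    # Relative intensity of two-slit interference (ignoring the
    # single-slit envelope): maximum where crests arrive in step.
    phase = math.pi * slit_separation * math.sin(angle_rad) / wavelength
    return math.cos(phase) ** 2

# Illustrative geometry: slits four wavelengths apart.
for angle in [0.0, 0.125, 0.25]:  # radians
    print(angle, two_slit_intensity(angle, 4.0, 1.0))
# Output: full intensity straight ahead, nearly zero at the first
# low point, and nearly full again at the next peak.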

Before the development of quantum theory, physicists assumed that, with perfect equipment in perfect conditions, measuring any physical quantity as accurately as desired was possible. Quantum mechanical equations show that accurate measurement of both the position and the momentum of a particle at the same time is impossible. This rule is called Heisenberg’s uncertainty principle after German physicist Werner Heisenberg, who derived it from other rules of quantum theory. The uncertainty principle means that as physicists measure a particle’s position with more and more accuracy, the momentum of the particle becomes less and less precise, or more and more uncertain, and vice versa.

Heisenberg formally stated his principle by describing the relationship between the uncertainty in the measurement of a particle’s position and the uncertainty in the measurement of its momentum. Heisenberg said that the uncertainty in position (represented by Δx) times the uncertainty in momentum (represented by Δp) must be greater than a constant number equal to Planck’s constant (h) divided by 4π (π is a constant approximately equal to 3.14). Mathematically, the uncertainty principle can be written as Δx Δp > h/4π. This relationship means that as a scientist measures a particle’s position more and more accurately—so the uncertainty in its position becomes very small—the uncertainty in its momentum must become large to compensate and make this expression true. Likewise, if the uncertainty in momentum, Δp, becomes small, Δx must become large to make the expression true.
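
A numeric illustration (the position uncertainty is an assumed value of roughly atomic size):

import math

H = 6.626e-34  # Planck's constant, joule-seconds

def min_momentum_uncertainty(position_uncertainty_m):
    # From delta_x * delta_p > h / (4 * pi), the smallest possible
    # momentum uncertainty is h / (4 * pi * delta_x).
    return H / (4 * math.pi * position_uncertainty_m)

# Confining an electron to within 1e-10 m (about one atom's width):
dp = min_momentum_uncertainty(1e-10)
print(dp)             # ~5.3e-25 kg*m/s
print(dp / 9.11e-31)  # ~5.8e5 m/s of unavoidable speed uncertainty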


One way to understand the uncertainty principle is to consider the dual wave-particle nature of light and matter. Physicists can measure the position and momentum of an atom by bouncing light off of the atom. If they treat the light as a wave, they have to consider a property of waves called diffraction when measuring the atom’s position. Diffraction occurs when waves encounter an object—the waves bend around the object instead of travelling in a straight line. If the length of the waves is much shorter than the size of the object, the bending of the waves just at the edges of the object is not a problem. Most of the waves bounce back and give an accurate measurement of the object’s position. If the length of the waves is close to the size of the object, however, most of the waves diffract, making the measurement of the object’s position fuzzy. Physicists must bounce shorter and shorter waves off an atom to measure its position more accurately. Using shorter wavelengths of light, however, increases the uncertainty in the measurement of the atom’s momentum.

Light carries energy and momentum because of its particle nature (described above in the Compton effect). Photons that strike the atom being measured will change the atom’s energy and momentum. The fact that measuring an object also affects the object is an important principle in quantum theory. Normally the effect is so small that it does not matter, but on the small scale of atoms, it becomes important. The bump to the atom increases the uncertainty in the measurement of the atom’s momentum. Light with more energy and momentum will knock the atom harder and create more uncertainty. The momentum of light is equal to Planck’s constant divided by the light’s wavelength, or p = h/λ. Physicists can increase the wavelength to decrease the light’s momentum and measure the atom’s momentum more accurately. Because of diffraction, however, increasing the light’s wavelength increases the uncertainty in the measurement of the atom’s position. Physicists most often use the uncertainty principle that describes the relationship between position and momentum, but a similar and important uncertainty relationship also exists between the measurement of energy and the measurement of time.

Quantum theory gives exact answers to many questions, but it can only give probabilities for some values. A probability is the likelihood of an answer being a certain value. Probability is often represented by a graph, with the highest point on the graph representing the most likely value and the lowest representing the least likely value. Consider, for example, the graph of the likelihood of finding the electron of a hydrogen atom at a certain distance from the nucleus. The probability of finding the electron very near the nucleus is very low. The probability reaches a definite peak, marking the distance from the nucleus at which the electron is most likely to be.

Scientists use a mathematical expression called a wave function to describe the characteristics of a particle that are related to time and space—such as position and velocity. The wave function helps determine the probability of these aspects being certain values. The wave function of a particle is not the same as the wave suggested by wave-particle duality. A wave function is strictly a mathematical way of expressing the characteristics of a particle. Physicists usually represent these types of wave functions with the Greek letter psi, ψ. The wave function of the electron in the ground state of a hydrogen atom is ψ = (1/√(πa³)) e^(−r/a), where r is the electron’s distance from the nucleus.

The symbol π and the letter e in this equation represent constant numbers derived from mathematics. The letter a is also a constant number, called the Bohr radius for the hydrogen atom. The square of a wave function, or a wave function multiplied by itself, is equal to the probability density of the particle that the wave function describes. The probability density of a particle gives the likelihood of finding the particle at a certain point.
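
A sketch of that idea for the ground-state wave function given above: the chance of finding the electron in a thin shell at radius r is proportional to 4πr² times the square of ψ, and a crude numerical scan (for illustration only) shows that this peaks at the Bohr radius.

import math

A = 5.29e-11  # Bohr radius, metres

def psi(r):
    # Ground-state hydrogen wave function quoted above.
    return math.exp(-r / A) / math.sqrt(math.pi * A**3)

def radial_probability(r):
    # Probability per unit radius of finding the electron in a
    # thin spherical shell at radius r: 4 * pi * r^2 * psi^2.
    return 4 * math.pi * r**2 * psi(r)**2

radii = [i * 1e-13 for i in range(1, 2000)]
best = max(radii, key=radial_probability)
print(best)  # ~5.3e-11 m: the peak sits at the Bohr radius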

The wave function described above does not depend on time. An isolated hydrogen atom does not change over time, so leaving time out of the atom’s wave function is acceptable. For particles in systems that change over time, physicists use wave functions that depend on time. This lets them calculate how the system and the particle’s properties change over time. Physicists represent a time-dependent wave function with ψ(t), where t represents time.

The wave function for a single atom can only reveal the probability that an atom will have a certain set of characteristics at a certain time. Physicists call the set of characteristics describing an atom the state of the atom. The wave function cannot describe the actual state of the atom, just the probability that the atom will be in a certain state.

The wave functions of individual particles can be added together to create a wave function for a system, so quantum theory allows physicists to examine many particles at once. The rules of probability state that probabilities and actual values match better and better as the number of particles (or dice thrown, or coins tossed, whatever the probability refers to) increases. Therefore, if physicists consider a large group of atoms, the wave function for the group of atoms provides information that is more definite and useful than that provided by the wave function of a single atom.

An example of a wave function for a single atom is one that describes an atom that has absorbed some energy. The energy has boosted the atom’s electrons to a higher energy level, and the atom is said to be in an excited state. It can return to its normal ground state (or lowest energy state) by emitting energy in the form of a photon. Scientists call the wave function of the initial excited state ψi (with ‘i’ indicating it is the initial state) and the wave function of the final ground state ψf (with ‘f’ representing the final state). To describe the atom’s state over time, they multiply ψi by some function, a(t), that decreases with time, because the chances of the atom being in this excited state decrease with time. They multiply ψf by some function, b(t), that increases with time, because the chances of the atom being in this state increase with time. The physicist completing the calculation chooses a(t) and b(t) based on the characteristics of the system. The complete wave function for the transition is the following: ψ = a(t)ψi + b(t)ψf.

At any time, the rules of probability state that the probability of finding a single atom in either state is found by multiplying the coefficient of its wave function (a(t) or b(t)) by itself. For one atom, this does not give a very satisfactory answer. Even though the physicist knows the probability of finding the atom in one state or the other, whether or not reality will match probability is still far from certain. This idea is similar to rolling a pair of dice. There is a 1 in 6 chance that the roll will add up to seven, which is the most likely sum. Each roll is random, however, and not connected to the rolls before it. It would not be surprising if ten rolls of the dice failed to yield a sum of seven. However, the actual number of times that seven appears matches probability better as the number of rolls increases. For one million or one billion rolls of the dice, one of every six rolls would almost certainly add up to seven.
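
This convergence is easy to demonstrate with a short simulation (the function name below is our own):

    import random

    def fraction_of_sevens(num_rolls):
        """Roll two dice num_rolls times; return the fraction of sums equal to 7."""
        sevens = sum(1 for _ in range(num_rolls)
                     if random.randint(1, 6) + random.randint(1, 6) == 7)
        return sevens / num_rolls

    # The observed fraction drifts toward the predicted 1/6 as rolls increase.
    for n in [10, 1000, 1000000]:
        print(f"{n:>8} rolls: {fraction_of_sevens(n):.4f} (predicted {1/6:.4f})")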

Similarly, for a large number of atoms, the probabilities become approximate percentages of atoms in each state, and these percentages become more accurate as the number of atoms observed increases. For example, if the square of a(t) at a certain time is 0.2, then in a very large sample of atoms, 20 percent (0.2) of the atoms will be in the initial state and 80 percent (0.8) will be in the final state.

One of the most puzzling results of quantum mechanics is the effect of measurement on a quantum system. Before a scientist measures the characteristics of a particle, its characteristics do not have definite values. Instead, they are described by a wave function, which gives the characteristics only as fuzzy probabilities. In effect, the particle does not exist in an exact location until a scientist measures its position. Measuring the particle fixes its characteristics at specific values, effectively ‘collapsing’ the particle’s wave function. The particle’s position is no longer fuzzy, so the wave function that describes it has one high, sharp peak of probability at the measured position.

In the 1930s physicists proposed an imaginary experiment to demonstrate how measurement causes complications in quantum mechanics. They imagined a system that contained two particles with opposite values of spin, a property of particles that is analogous to angular momentum. The physicists can know that the two particles have opposite spins by setting the total spin of the system to be zero. They can measure the total spin without directly measuring the spin of either particle. Because they have not yet measured the spin of either particle, the spins do not actually have defined values. They exist only as fuzzy probabilities. The spins only take on definite values when the scientists measure them.

In this hypothetical experiment the scientists do not measure the spin of each particle right away. They send the two particles, called an entangled pair, off in opposite directions until they are far apart from each other. The scientists then measure the spin of one of the particles, fixing its value. Instantaneously, the spin of the other particle becomes known and fixed. It is no longer a fuzzy probability but must be the opposite of the other particle, so that their spins will add to zero. It is as though the first particle communicated with the second. This apparent instantaneous passing of information from one particle to the other would violate the rule that nothing, not even information, can travel faster than the speed of light. The two particles do not, however, communicate with each other. Physicists can instantaneously know the spin of the second particle because they set the total spin of the system to be zero at the beginning of the experiment. In 1997 Austrian researchers performed an experiment similar to the hypothetical experiment of the 1930s, confirming the effect of measurement on a quantum system.

Experimental data has been the impetus behind the creation and dismissal of physical models of the atom. Rutherford's model, in which electrons move around a tightly packed, positively charged nucleus, successfully explained the results of scattering experiments but was unable to explain discrete atomic emission, that is, why atoms emit only certain wavelengths of light. Bohr began with Rutherford’s model, but then postulated further that electrons can only move in certain quantized orbits; this model was able to explain certain qualities of discrete emission for hydrogen, but failed completely for other elements. Schrödinger’s model, in which electrons are described not by the paths they take but by the regions where they are most likely to be found, can explain certain qualities of emission spectra for all elements; however, further refinements of the model, made throughout the 20th century, have been needed to explain all observable spectral phenomena.

The first great achievement of quantum theory was to explain how atoms work. Physicists found explaining the structure of the atom with classical physics to be impossible. Atoms consist of negatively charged electrons bound to a positively charged nucleus. The nucleus of an atom contains positively charged particles called protons and may contain neutral particles called neutrons. Protons and neutrons are about the same size but are much larger and heavier than electrons are. Classical physics describes a hydrogen atom as an electron orbiting a proton, much as the Moon orbits Earth. By the rules of classical physics, the electron has a property called inertia that makes it want to continue travelling in a straight line. The attractive electrical force of the positively charged proton overcomes this inertia and bends the electron’s path into a circle, making it stay in a closed orbit. The classical theory of electromagnetism says that charged particles (such as electrons) radiate energy when they bend their paths. If classical physics applied to the atom, the electron would radiate away all of its energy. It would slow down and its orbit would collapse into the proton within a fraction of a second. However, physicists know that atoms can be stable for centuries or longer.

Atomic orbitals are mathematical descriptions of where the electrons in an atom (or molecule) are most likely to be found. These descriptions are obtained by solving an equation known as the Schrödinger equation, which expresses our knowledge of the atomic world. As the angular momentum and energy of an electron increase, it tends to reside in differently shaped orbitals. The orbitals corresponding to the three lowest values of angular momentum are called s, p, and d. The illustration shows the spatial distribution of electrons within these orbitals. The fundamental nature of electrons prevents more than two from ever being in the same orbital. The overall distribution of electrons in an atom is the sum of many such pictures. This description has been confirmed by many experiments in chemistry and physics, including an actual picture of a p-orbital made by a scanning tunnelling microscope.

Quantum theory gives a model of the atom that explains its stability. It still treats atoms as electrons surrounding a nucleus, but the electrons do not orbit the nucleus like moons orbiting planets. Quantum mechanics gives the location of an electron as a probability instead of pinpointing it at a certain position. Even though the position of an electron is uncertain, quantum theory prohibits the electron from being at some places. The easiest way to describe the differences between the allowed and prohibited positions of electrons in an atom is to think of the electron as a wave. The wave-particle duality of quantum theory allows electrons to be described as waves, using the electron’s de Broglie wavelength.

The periodic table of elements groups elements in columns and rows by shared chemical properties. Elements appear in sequence according to their atomic number. Atomic weights in parentheses indicate the atomic weight of the most stable isotope.

If the electron is described as a continuous wave, its motion around the nucleus may be described as that of a standing wave. Standing waves occur when a continuous wave occupies one of a set of certain distances. These distances enable the wave to interfere with itself in such a way that the wave appears to remain stationary. Plucking the string of a musical instrument sets up a standing wave in the string that makes the string resonate and produce sound. The length of the string, or the distance the wave on the string occupies, is equal to a whole number of half wavelengths. At these distances, the wave bounces back at either end and constructively interferes with itself, which strengthens the wave. Similarly, an electron wave occupies a distance around the nucleus of an atom, or a circumference, that enables it to travel a whole number of wavelengths before looping back on itself. The electron wave therefore constructively interferes with itself and remains stable.

An electron wave cannot occupy a circumference that is not equal to a whole number of wavelengths. Over such a distance, the wave would interfere with itself in a complicated way and would become unstable.
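
Combining the standing-wave condition with the force balance described earlier yields the allowed orbit sizes. The sketch below works through that arithmetic using standard constants; the derivation is the textbook Bohr-model calculation, offered here as an illustration rather than anything quoted from this article.

    # Standard physical constants (SI units)
    h_bar = 1.0546e-34   # reduced Planck constant, J*s
    m_e = 9.109e-31      # electron mass, kg
    e = 1.602e-19        # elementary charge, C
    k = 8.988e9          # Coulomb constant, N*m^2/C^2

    def orbit_radius(n):
        """Radius of the nth allowed orbit in hydrogen.

        The standing-wave condition (circumference = n wavelengths, with the
        de Broglie wavelength h/mv) combined with the Coulomb force balance
        m*v**2/r = k*e**2/r**2 gives r_n = n**2 * h_bar**2 / (m * k * e**2).
        """
        return n**2 * h_bar**2 / (m_e * k * e**2)

    for n in (1, 2, 3):
        print(f"n={n}: r = {orbit_radius(n)*1e10:.3f} angstroms")
    # n=1 gives about 0.53 angstroms, the Bohr radius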

When a photon, or packet of light energy, is absorbed by an atom, the atom gains the energy of the photon, and one of the atom’s electrons may jump to a higher energy level. The atom is then said to be excited. When an electron of an excited atom falls to a lower energy level, the atom may emit the electron’s excess energy in the form of a photon. The energy levels, or orbitals, of the atoms shown here have been greatly simplified to illustrate these absorption and emission processes.

An electron has a certain amount of energy when its wave occupies one of the allowed circumferences around the nucleus of an atom. This energy depends on the number of wavelengths in the circumference, and it is called the electron’s energy level. Because only certain circumferences, and therefore energy levels, are allowed, physicists say that the energy levels are quantized. This quantization means that the energies of the levels can only take on certain values.

When an electron makes a transition from one energy level to another, the electron emits a photon with a particular energy. These photons are then observed as emission lines using a spectroscope. The Lyman series involves transitions to the lowest or ground state energy level. Transitions to the second energy level are called the Balmer series. These transitions involve frequencies in the visible part of the spectrum. In this frequency range each transition is characterized by a different colour.
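
The wavelengths of these emission lines follow from the Rydberg formula, 1/wavelength = R (1/nf^2 - 1/ni^2), where R is the Rydberg constant (a standard value assumed below, not quoted in the text):

    R = 1.097e7  # Rydberg constant, 1/m

    def emission_wavelength_nm(n_initial, n_final):
        """Wavelength of the photon emitted when the electron drops from
        level n_initial to level n_final, from the Rydberg formula."""
        inv_wavelength = R * (1.0 / n_final**2 - 1.0 / n_initial**2)
        return 1e9 / inv_wavelength

    # Balmer series: transitions down to n=2 fall in the visible range.
    for n in range(3, 7):
        print(f"{n} -> 2: {emission_wavelength_nm(n, 2):.0f} nm")
    # 3->2 gives ~656 nm (red); 4->2 ~486 nm (blue-green); and so on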

The regions of space in which electrons are most likely to be found are called orbitals. Orbitals look like fuzzy, three-dimensional shapes. More than one orbital, meaning more than one shape, may exist at certain energy levels. Electron orbitals are also quantized, meaning that only certain shapes are allowed in each energy level. The quantization of electron orbitals and energy levels in atoms explains the stability of atoms. An electron in an energy level that allows only one wavelength is at the lowest possible energy level. An atom with all of its electrons in their lowest possible energy levels is said to be in its ground state. Unless it is affected by external forces, an atom will stay in its ground state forever.

Every chemical element has a characteristic spectrum, or particular distribution of electromagnetic radiation. Because of these ‘signature’ wavelength patterns, it is possible to identify the constituents of an unknown substance by analyzing its spectrum; this technique is called spectroscopy. Emission spectra, such as the representative examples shown here, appear as several lines of specific wavelength separated by absolute darkness. The lines are indicative of atomic structure, occurring where atoms make transitions between states of definite energy.

The quantum theory explanation of the atom led to a deeper understanding of the periodic table of the chemical elements. The periodic table of elements is a chart of the known elements. Scientists arranged the elements in this table in order of increasing atomic number (which is equal to the number of protons in the nuclei of each element’s atoms) and according to the chemical behaviour of the elements. They grouped elements that behave in a similar way together in columns. Scientists found that elements that behave similarly occur in a periodic fashion according to their atomic number. For example, a family of elements called the noble gases all share similar chemical properties. The noble gases include neon, xenon, and argon. They do not react easily with other elements and are almost never found in chemical compounds. The atomic numbers of the noble gases increase from one element to the next in a periodic way. They belong to the same column at the far right edge of the periodic table.

Quantum theory showed that an element’s chemical properties have little to do with the nucleus of the element’s atoms, but instead depend on the number and arrangement of the electrons in each atom. An atom has the same number of electrons as protons, making the atom electrically neutral. The arrangement of electrons in an atom depends on two important parts of quantum theory. The first is the quantization of electron energy, which limits the regions of space that electrons can occupy. The second part is a rule called the Pauli exclusion principle, first proposed by Austrian-born Swiss physicist Wolfgang Pauli.

The Pauli exclusion principle states that no electron can have exactly the same characteristics as those of another electron. These characteristics include orbital, direction of rotation (called spin), and direction of orbit. Each energy level in an atom has a set number of ways these characteristics can combine. The number of combinations determines how many electrons can occupy an energy level before the electrons have to start filling up the next level.

An atom is the most stable when it has the least amount of energy, so its lowest energy levels fill with electrons first. Each energy level must be filled before electrons begin filling up the next level. These rules, and the rules of quantum theory, determine how many electrons an atom has in each energy level, and in particular, how many it has in its outermost level. Using the quantum mechanical model of the atom, physicists found that all the elements in the same column of the periodic table also have the same number of electrons in the outer energy level of their atoms. Quantum theory shows that the number of electrons in an atom’s outer level determines the atom’s chemical properties, or how it will react with other atoms.
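
A toy calculation shows how the counting works. In the simplest picture, energy level n holds at most 2 x n^2 electrons; filling levels strictly in order, as the sketch below does, is a simplification (the real ordering of subshells deviates beyond argon), but it reproduces the outer-electron counts for light elements.

    def level_capacity(n):
        """Energy level n offers n**2 orbital combinations, each holding two
        electrons of opposite spin, so its capacity is 2 * n**2."""
        return 2 * n**2

    def fill_levels(atomic_number):
        """Distribute an atom's electrons across levels, lowest first."""
        levels, n, remaining = [], 1, atomic_number
        while remaining > 0:
            take = min(level_capacity(n), remaining)
            levels.append(take)
            remaining -= take
            n += 1
        return levels

    for name, z in [("helium", 2), ("neon", 10), ("sodium", 11)]:
        print(name, fill_levels(z))
    # helium [2] and neon [2, 8] have filled outer levels; sodium [2, 8, 1]
    # has a lone outer electron, which is why it reacts so readily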

The number of electrons in an atom’s outer energy level is important because atoms are most stable when their outermost energy level is filled, which is the case for atoms of the noble gases. Atoms imitate the noble gases by donating electrons to, taking electrons from, or sharing electrons with other atoms. If an atom’s outer energy level is only partially filled, it will bond easily with atoms that can help it fill its outer level. Atoms that are missing the same number of electrons from their outer energy level will react similarly to fill their outer energy level.

Quantum theory also explains why different atoms emit and absorb different wavelengths of light. An atom stores energy in its electrons. An atom with all of its electrons at their lowest possible energy levels has its lowest possible energy and is said to be in its ground state. One of the ways atoms can gain more energy is to absorb light in the form of photons, or particles of light. When a photon hits an atom, one of the atom’s electrons absorbs the photon. The photon’s energy makes the electron jump from its original energy level up to a higher energy level. This jump leaves an empty space in the original inner energy level, making the atom less stable. The atom is now in an excited state, but it cannot store the new energy indefinitely, because atoms always seek their most stable state. When the atom releases the energy, the electron drops back down to its original energy level. As it does, the electron releases a photon.

Quantum theory defines the possible energy levels of an atom, so it defines the particular jumps that an electron can make between energy levels. The difference between the old and new energy levels of the electron is equal to the amount of energy the atom stores. Because the energy levels are quantized, atoms can only absorb and store photons with certain amounts of energy. The photon’s energy is related to its frequency, or colour. As the frequency of photons increases, their energy increases. Atoms can only absorb certain amounts of energy, so only certain frequencies of light can excite atoms. Likewise, atoms only emit certain frequencies of light when they drop to their ground state. The different frequencies available to different atoms help astronomers, for example, determine the chemical makeup of a star by observing which wavelengths are especially weak or strong in the star’s light.
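
The relationship between a photon's wavelength, frequency, and energy is E = hf = hc/wavelength. A brief worked example, with wavelengths chosen by us for illustration:

    h = 6.626e-34   # Planck's constant, J*s
    c = 2.998e8     # speed of light, m/s
    eV = 1.602e-19  # joules per electron volt

    def photon_energy_eV(wavelength_nm):
        """Photon energy from E = h*f = h*c/wavelength."""
        return h * c / (wavelength_nm * 1e-9) / eV

    # Shorter wavelength means higher frequency and higher energy; an atom
    # absorbs only those photons whose energy matches an allowed jump.
    for label, wl in [("red", 656), ("blue-green", 486), ("ultraviolet", 122)]:
        print(f"{label}: {wl} nm -> {photon_energy_eV(wl):.2f} eV")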

In a radical departure from classical ideas, theoretical physicist Max Planck proposed that energy travels in discrete packets called quanta. Prior to Planck’s work with black body radiation, energy was thought to be continuous, but this theory left many phenomena unexplained. While working out the mathematics for the radiation phenomena he had observed, Planck realized that quantized energy could explain the behaviour of light. His revolutionary work laid the foundation for much of modern physics.

British physicist Ernest Rutherford, winner of the 1908 Nobel Prize in chemistry, pioneered the field of nuclear physics with his research and development of the nuclear theory of atomic structure. Rutherford stated that an atom consists largely of empty space, with an electrically positive nucleus in the centre and electrically negative electrons orbiting the nucleus. By bombarding nitrogen gas with alpha particles (nuclear particles emitted through radioactivity), Rutherford engineered the transformation of an atom of nitrogen into both an atom of oxygen and an atom of hydrogen. This experiment was an early stimulus to the development of nuclear energy, a form of energy in which nuclear transformation and disintegration release extraordinary power.

The development of quantum theory began with German physicist Max Planck’s proposal in 1900 that matter can emit or absorb energy only in small, discrete packets, called quanta. This idea introduced the particle nature of light. In 1905 German-born American physicist Albert Einstein used Planck’s work to explain the photoelectric effect, in which light hitting a metal makes the metal emit electrons. British physicist Ernest Rutherford proved that atoms consisted of electrons bound to a nucleus in 1911. In 1913 Danish physicist Niels Bohr proposed that classical mechanics could not explain the structure of the atom and developed a model of the atom with electrons in fixed orbits. Bohr’s model of the atom proved difficult to apply to all but the simplest atoms.

Danish physicist Aage Niels Bohr won the Nobel Prize in physics in 1975. He developed the collective-motion theory of the atomic nucleus that helped explain many nuclear properties.

A pioneer in the area of quantum theory, Erwin Schrödinger is best known for his mathematical theory describing the wave mechanics of electrons. He and British physicist Paul Dirac shared the 1933 Nobel Prize in physics for their contributions to the understanding of quantum mechanics.

In 1923 French physicist Louis de Broglie suggested that matter could be described as a wave, just as light could be described as a particle. The wave model of the electron allowed Austrian physicist Erwin Schrödinger to develop a mathematical method of determining the probability that an electron will be at a particular place at a certain time. Schrödinger published his theory of wave mechanics in 1926. Around the same time, German physicist Werner Heisenberg developed a way of calculating the characteristics of electrons that was quite different from Schrödinger’s method but yielded the same results. Heisenberg’s method was called matrix mechanics.

American physicist Richard Feynman was well known for both his contributions to quantum electrodynamics and his enthusiastic teaching methods. Feynman reformulated quantum electrodynamic theory, which concerns the interactions between electromagnetic waves and matter. He is pictured here after winning the 1965 Nobel Prize in physics, which he shared with American physicist Julian S. Schwinger and Japanese physicist Tomonaga Shin’ichiro.

American physicist Julian Schwinger won the 1965 Nobel Prize in physics. He won the award for his part in the development of a theory of quantum electrodynamics.

Since these first breakthroughs in quantum mechanical research, physicists have focused on testing and refining quantum theory, further connecting the theory to other theories, and finding new applications. In 1928 British physicist Paul Dirac refined the theory that combined quantum theory with electrodynamics. He developed a model of the electron that was consistent with both quantum theory and Einstein’s special theory of relativity, and in doing so he created a theory that came to be known as quantum electrodynamics, or QED. In the late 1940s Japanese physicist Tomonaga Shin’ichiro and American physicists Richard Feynman and Julian Schwinger each independently improved the scientific community’s understanding of QED and made it an experimentally testable theory that successfully predicted or explained the results of many experiments.

At the turn of the 21st century, physicists were still finding new problems to study with quantum theory and new applications for quantum theory. This research will probably continue for many decades. Quantum theory is technically a fully formulated theory: in principle, any question about the physical world can be addressed with a quantum mechanical calculation, but many such calculations are too complicated to carry out in practice. The attempt to find quantum explanations of gravitation and to find a unified description of all the forces in nature are promising and active areas of research. Researchers try to find out why quantum theory explains the way nature works; they may never find an answer, but the effort to do so is underway. Physicists also study the complicated area of overlap between classical physics and quantum mechanics and work on the applications of quantum mechanics.

Studying the intersection of quantum theory and classical physics requires developing a theory that can predict how quantum systems will behave as they get larger or as the number of particles involved approaches the size of problems described by classical physics. The mathematics involved is extremely difficult, but physicists continue to advance in their research. The constantly increasing power of computers should continue to help scientists with these calculations.

New research in quantum theory also promises new applications and improvements to known applications. One of the most potentially powerful applications is quantum computing. In quantum computing, scientists make use of the behaviour of subatomic particles to perform calculations. Making calculations on the atomic level, a quantum computer could theoretically investigate all the possible answers to a query at the same time and make many calculations in parallel. This ability would make quantum computers thousands or even millions of times faster than current computers. Advancements in quantum theory also hold promise for the fields of optics, chemistry, and atomic theory.

Werner Heisenberg (1901-1976), German physicist and Nobel Prize winner, played a large part in the development of quantum mechanics. Quantum mechanics describes matter in terms of both particles and waves. One of Heisenberg’s best known contributions to quantum theory is the uncertainty principle, which states that the exact position and velocity of a particle cannot both be known at the same time; the more precisely one value is known, the greater the range of possibilities that exists for the other.

Heisenberg was born in Würzburg, Germany. His family moved to Munich in 1910, where Heisenberg received his early education. In the summer of 1920 he graduated from a Munich gymnasium (the German equivalent to a United States high school) and entered the University of Munich. During his first two years of studies, he published four physics research papers, making Heisenberg—at age 20—one of the top contributors to theoretical physics research. Heisenberg finished his undergraduate and graduate work in three years, and in 1923 presented his doctoral dissertation on turbulence in streams of fluid.

In his early career Heisenberg was at the forefront of dramatic changes taking place in the field of quantum mechanics. He studied with three leading quantum theorists at three major centres of quantum research of that time: German physicist Arnold Sommerfeld at the University of Munich; German physicist Max Born at the University of Göttingen in 1923; and, from 1924 to 1927, Danish physicist Niels Bohr at the Institute for Theoretical Physics in Copenhagen.

Heisenberg developed the first version of quantum mechanics, called matrix mechanics, in 1925. His version explained the motion of electrons (tiny negatively charged particles) in an atom in purely mathematical terms. His equations showed why electrons behave the way they do, which scientists had been unable to explain before. Heisenberg realized that the laws of classical physics did not govern events on the quantum level. For example, electrons do not follow the laws of classical physics and orbit the nucleus of an atom in a defined path, as planets orbit the Sun.

Heisenberg's matrix mechanics predicted that molecular hydrogen (hydrogen made up of pairs of atoms, sharing their electrons to form molecules) should exist in two distinct forms, called orthohydrogen and parahydrogen. These two forms result from a property of atoms called spin, a kind of angular momentum. Heisenberg predicted that the spins of the two hydrogen nuclei point in opposite directions in parahydrogen, and in the same direction in orthohydrogen. Other scientists soon confirmed his prediction experimentally. Heisenberg won the 1932 Nobel Prize in physics for his development of quantum mechanics and his prediction of the two types of molecular hydrogen.

With the development of matrix mechanics, Heisenberg became one of the founders of quantum mechanics. At about the same time Heisenberg developed matrix mechanics, Austrian physicist Erwin Schrödinger developed a way to describe particles in terms of the probability that any of their characteristics would be a certain value. Schrödinger later showed that both his approach and Heisenberg’s approach yielded the same result.


In 1927 Heisenberg became a professor of theoretical physics at the University of Leipzig. That year he published a paper explaining the uncertainty principle, which stemmed from his matrix mechanics. Using calculations that explain the motion of particles, he showed that it is impossible to know accurately both the velocity and position of a particle at the same time. The more accurately scientists measure one quantity, the more uncertainty exists in the measurement of the other. The consequence of the uncertainty principle is that a description in quantum mechanics is limited to a statement of the relative probability of a value rather than exact numbers.
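
Stated quantitatively, the principle says that the product of the position and momentum uncertainties is at least h-bar/2. A minimal worked example shows what this means for an electron confined to a region the size of an atom (the confinement size is our illustrative choice):

    h_bar = 1.0546e-34  # reduced Planck constant, J*s
    m_e = 9.109e-31     # electron mass, kg

    # Uncertainty principle: delta_x * delta_p >= h_bar / 2. Confining an
    # electron to a region the size of an atom (~1e-10 m) forces a large
    # spread in its possible velocities.
    delta_x = 1e-10                   # position uncertainty, m
    delta_p = h_bar / (2 * delta_x)   # minimum momentum uncertainty, kg*m/s
    delta_v = delta_p / m_e           # corresponding velocity uncertainty
    print(f"minimum velocity uncertainty: {delta_v:.2e} m/s")  # ~5.8e5 m/s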

In 1941 Heisenberg became a professor at the University of Berlin and director of the Kaiser Wilhelm Institute for Physics. During World War II (1939-1945) he chose to remain in Nazi Germany while many of his colleagues fled the country. He was the leader of Germany's atomic research team, despite his opposition to Nazi policies. He worked with Otto Hahn, one of the discoverers of nuclear fission, but the German team failed to develop nuclear weapons.

At the end of the war the United States arrested Heisenberg for his role in the German weapons program and detained him for nine months in England. Following his return to Germany in 1946, he became professor of physics and the director of the Max Planck Institute for Physics and Astrophysics (the former Kaiser Wilhelm Institute) in Göttingen. The institute moved to Munich in 1958, and Heisenberg moved with it, continuing as its director until his death.

Elementary Particles, in physics, particles that cannot be broken down into any other particles. The term elementary particles also is used more loosely to include some subatomic particles that are composed of other particles. Particles that cannot be broken further are sometimes called fundamental particles to avoid confusion. These fundamental particles provide the basic units that make up all matter and energy in the universe.

Modern physics has revealed successively deeper layers of structure in ordinary matter. Matter is composed, on a tiny scale, of particles called atoms. Atoms are in turn made up of minuscule nuclei surrounded by a cloud of particles called electrons. Nuclei are composed of particles called protons and neutrons, which are themselves made up of even smaller particles called quarks. Quarks are believed to be fundamental, meaning that they cannot be broken up into smaller particles.

Scientists and philosophers have sought to identify and study elementary particles since ancient times. Aristotle and other ancient Greek philosophers believed that all things were composed of four elementary materials: fire, water, air, and earth. People in other ancient cultures developed similar notions of basic substances. As early scientists began collecting and analyzing information about the world, they showed that these materials were not fundamental but were made of other substances.

Physicists have sought for decades to demonstrate that the forces governing the behaviour of elementary particles at the atomic level are different aspects of the same fundamental force. Progress toward a unified theory was made in the 1960s and 1970s, when physicists unified the electromagnetic force with the lesser-known weak force (the force responsible for slow nuclear processes, such as beta decay). The two forces are now sometimes referred to collectively as the electroweak interaction. American physicist Steven Weinberg and Pakistani physicist Abdus Salam independently proposed similar unified theories for these two interactions in 1967 and 1968. In 1979 the two scientists shared the Nobel Prize in physics with American physicist Sheldon Glashow for this contribution. Weinberg described the electroweak theory in a 1974 Scientific American article.

In the 1800s British physicist John Dalton was so sure he had identified the most basic objects that he called them atoms (from the Greek word for ‘indivisible’). By the early 1900s scientists were able to break apart these atoms into particles that they called the electron and the nucleus. Electrons surround the dense nucleus of an atom. In the 1930s, researchers showed that the nucleus consists of smaller particles, called the proton and the neutron. Today, scientists have evidence that the proton and neutron are themselves made up of even smaller particles, called quarks.

Theoretical physicist C. Llewellyn Smith discusses the discoveries that scientists have made to date about the electron and other elementary particles—subatomic particles that scientists believe cannot be split into smaller units of matter. Scientists have discovered what Smith refers to as sibling and cousin particles to the electron, but much about the nature of these particles is still a mystery. One way scientists learn about these particles is to accelerate them to high energies, smash them together, and then study what happens when they collide. By observing the behaviour of these particles, scientists hope to learn more about the fundamental structures of the universe.

Scientists now believe that quarks and three other types of particles—leptons, force-carrying bosons, and the Higgs boson—are truly fundamental and cannot be split into anything smaller. In the 1960s American physicists Steven Weinberg and Sheldon Glashow and Pakistani physicist Abdus Salam developed a mathematical description of the nature and behaviour of elementary particles. Their theory, known as the standard model of particle physics, has greatly advanced understanding of the fundamental particles and forces in the universe. Yet some questions about particles remain unanswered by the standard model, and physicists continue to work toward a theory that would explain even more about particles.

German-born American physicist Albert Einstein’s elegant equation E = mc² predicted that energy could be converted to matter. Using a linear accelerator and high-energy laser light, physicists have done just that.

Physicist Clifford V. Johnson takes readers on a brief introductory tour of the world of particle physics. A leading theoretician in elementary particle physics, Johnson traces the history of this field from its beginnings to the present day. He explains why physicists are currently intrigued with the exotic ideas of superstrings and M-theory.

Everything in the universe, from elementary particles and atoms to people, houses, and planets, can be classified into one of two categories: fermions (pronounced FUR-me-onz) or bosons (pronounced BO-zonz). The behaviour of a particle or group of particles, such as an atom or a house, determines whether it is a fermion or boson. The distinction between these two categories is not noticeable on the large scale of people or houses, but it has profound implications in the world of atoms and elementary particles. Fundamental particles are classified according to whether they are fermions or bosons. Fundamental fermions combine to form atoms and other more unusual particles, while fundamental bosons carry forces between particles and give particles mass.

In 1925 Austrian-born Swiss physicist Wolfgang Pauli formulated a rule of physics that helped define fermions. He suggested that no two electrons can have the same properties and locations. He proposed this exclusion principle to explain why all of the electrons in atoms have slightly different amounts of energy. In 1926 Italian-born American physicist Enrico Fermi and British physicist Paul Dirac developed equations that describe the collective behaviour of particles that obey the exclusion principle. Physicists call particles that obey the exclusion principle fermions in honour of Fermi. Protons, neutrons, and the quarks that comprise them are all examples of fermions.

Some particles, such as particles of light called photons, do not obey the exclusion principle. Two or more photons can have the exact same characteristics. In 1925 German-born American physicist Albert Einstein and Indian mathematician Satyendra Bose developed a set of equations describing the behaviour of particles that do not obey the exclusion principle. Particles that obey the equations of Bose and Einstein are called bosons, in honour of Bose.

Classifying particles as either fermions or bosons is similar to classifying whole numbers as either odd or even. No number is both odd and even, yet every whole number is either odd or even. Similarly, particles are either fermions or bosons. Sums of odd and even numbers are either odd or even, depending on how many odd numbers were added. Adding two odd numbers together yields an even number, but adding a third odd number makes the sum odd again. Adding any number of even numbers yields an even sum. In a similar manner, adding an even number of fermions yields a boson, while adding an odd number of fermions results in a fermion. Adding any number of bosons yields a boson.

For example, a hydrogen atom contains two fermions: an electron and a proton. But the atom itself is a boson because it contains an even number of fermions. According to the exclusion principle, the electron inside the hydrogen atom cannot have the same properties as another electron nearby. However, the hydrogen atom itself, as a boson, does not follow the exclusion principle. Thus, one hydrogen atom can be identical to another hydrogen atom.

A particle composed of three fermions, on the other hand, is a fermion. An atom of heavy hydrogen, called deuterium, is a hydrogen atom with a neutron added to the nucleus (the nucleus alone is called a deuteron). A deuterium atom contains three fermions: one proton, one electron, and one neutron. Since it contains an odd number of fermions, the deuterium atom is itself a fermion. Just like its constituent particles, it must obey the exclusion principle. It cannot have the same properties as another deuterium atom.
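
The odd-and-even rule reduces to simple parity arithmetic, as this sketch shows (the function is our own; counting at the quark level gives the same answers, since protons and neutrons each contain an odd number of quarks):

    def composite_type(num_fermions):
        """A composite with an odd number of fermions is itself a fermion;
        an even number gives a boson (constituent bosons change nothing)."""
        return "fermion" if num_fermions % 2 == 1 else "boson"

    print(composite_type(2))  # hydrogen atom (proton + electron): boson
    print(composite_type(3))  # deuterium atom (adds a neutron): fermion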


The differences between fermions and bosons have important implications. If electrons did not obey the exclusion principle, all electrons in an atom could have the same energy and be identical. If all of the electrons in an atom were identical, different elements would not have such different properties. For example, metals conduct electricity better than plastics do because the arrangement of the electrons in their atoms and molecules differs. If electrons were bosons, their arrangements could be identical in these atoms, and devices that rely on the conduction of electricity, such as televisions and computers, would not work. Photons, on the other hand, are bosons, so a group of photons can all have identical properties. This characteristic allows photons to form the coherent beam of identical particles in a laser.

The most fundamental particles that make up matter fall into the fermion category. These fermions cannot be split into anything smaller. The particles that carry the forces acting on matter and antimatter are bosons called force carriers. Force carriers are also fundamental particles, so they cannot be split into anything smaller. These bosons carry the four basic forces in the universe: the electromagnetic, the gravitational, the strong (force that holds the nuclei of atoms together), and the weak (force that causes atoms to radioactively decay). Scientists believe another type of fundamental boson, called the Higgs boson, gives matter and antimatter mass. Scientists have yet to discover definitive proof of the existence of the Higgs boson.

Ordinary matter makes up all the objects and materials familiar to life on Earth, including people, cars, buildings, mountains, air, and clouds. Stars, planets, and other celestial bodies also contain ordinary matter. The fundamental fermions that make up matter fall into two categories: leptons and quarks. Each lepton and quark has an antiparticle partner, with the same mass but opposite charge. Leptons and quarks differ from each other in two main ways: (1) the electric charge they carry and (2) the way they interact with each other and with other particles. Scientists usually state the electric charge of a particle as a multiple of the electric charge of a proton, which is 1.602 × 10^-19 coulombs (C). Leptons have electric charges of either -1 or 0 (neutral), with their antiparticles having charges of +1 or 0. Quarks have electric charges of either +2/3 or -1/3. Antiquarks have electric charges of either -2/3 or +1/3. Leptons interact rather weakly with one another and with other particles, while quarks interact strongly with one another.

According to the currently accepted theory of fundamental particles and their interactions, three generations, or families, of elementary particles exist in nature. The most familiar of these families is the first generation, which includes the electron and the ‘up’ and ‘down’ quarks that form the protons and neutrons in the nucleus of an atom. German-born American physicist and Nobel laureate Jack Steinberger and American physicist Gary J. Feldman participated in experiments in the late 1980s that confirmed the existence of only three families of elementary particles. They described their work in a 1991 Scientific American article. Since the article was published, scientists have verified the existence of the top quark.

Leptons and quarks each come in 6 varieties. Scientists divide these 12 basic types into 3 groups, called generations. Each generation consists of 2 leptons and 2 quarks. All ordinary matter consists of just the first generation of particles. The particles in the second and third generation tend to be heavier than their counterparts in the first generation. These heavier, higher-generation particles decay, or spontaneously change, into their first generation counterparts. Most of these decays occur very quickly, and the particles in the higher generations exist for an extremely short time (a millionth of a second or less). Particle physicists are still trying to understand the role of the second and third generations in nature.

American physicist Martin L. Perl shared the 1995 Nobel Prize in physics for his discovery of an elementary particle known as the tau lepton. He described the detection of the tau lepton in a 1978 article in Scientific American. At the time, several fundamental particles were thought to exist but had not yet been detected. Physicists were unsure if there would be an end to this proliferation of newly identified elementary particles. As of 1998, physicists believe that there are three and only three ‘families’ of matter. The first family of matter includes the electron and the two types of quarks that make up the proton and neutron. The tau lepton belongs to the third family of matter.

Scientists divide leptons into two groups: particles that have electric charges and particles, called neutrinos, that are electrically neutral. Each of the three generations contains a charged lepton and a neutrino. The first generation of leptons consists of the electron (e-) and the electron neutrino (νe); the second generation, the muon (µ) and the muon neutrino (νµ); and the third generation, the tau (τ) and the tau neutrino (ντ).

The electron is probably the most familiar elementary particle. Electrons are about 2,000 times lighter than protons and have an electric charge of –1. They are stable, so they can exist independently (outside an atom) for an infinitely long time. All atoms contain electrons, and the behaviour of electrons in atoms distinguishes one type of atom from another. When atoms radioactively decay, they sometimes emit an electron in a process called beta decay.

Studies of beta decay led to the discovery of the electron neutrino, the first generation lepton with no electric charge. Atoms release neutrinos, along with electrons, when they undergo beta decay. Electron neutrinos might have a tiny mass, but their mass is so small that scientists have not been able to measure it or conclusively confirm that the particles have any mass at all.

Physicists discovered a particle heavier than the electron but lighter than a proton in studies of high-energy particles created in Earth’s atmosphere. This particle, called the muon (pronounced MYOO-on), is the second generation charged lepton. Muons have an electric charge of -1 and a half-life of 1.52 microseconds (a microsecond is one-millionth of a second). Unlike electrons, they do not make up everyday matter. Muons live their brief lives in the atmosphere, where heavier particles called pions decay into muons and other particles. The electrically neutral partner of the muon is the muon neutrino. Muon neutrinos, like electron neutrinos, have either a tiny mass too small to measure or no mass at all. They are released when a muon decays.

The third generation charged lepton is the tau. The tau has an electric charge of -1 and almost twice the mass of a proton. Scientists have detected taus only in laboratory experiments. The average lifetime of taus is extremely short—only 0.3 picoseconds (a picosecond is one-trillionth of a second). The tau has an electrically neutral partner called the tau neutrino. Scientists have detected tau neutrinos directly during experiments. Like the other neutrinos, the tau neutrino has an extremely small mass.

The masses of elementary particles are usually given in units of MeV (million electron volts). One MeV mass equivalent is equal to 1.8 × 10^-27 g. The mean lives of the unstable particles are given in units of seconds.
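
That conversion is Einstein's E = mc^2 rearranged to m = E/c^2, as a quick check confirms:

    MeV = 1.602e-13  # one million electron volts expressed in joules
    c = 2.998e8      # speed of light, m/s

    # Mass equivalent of 1 MeV from m = E / c**2, converted to grams;
    # this reproduces the 1.8 x 10^-27 g figure quoted above.
    mass_grams = MeV / c**2 * 1000
    print(f"1 MeV corresponds to {mass_grams:.2e} g")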

The fundamental particles that make up protons and neutrons are called quarks. Like leptons, quarks come in six varieties, or ‘flavors,’ divided into three generations. Unlike leptons, however, quarks never exist alone—they are always combined with other quarks. In fact, quarks cannot be isolated even with the most advanced laboratory equipment and processes. Scientists have had to determine the charges and approximate masses of quarks mathematically by studying particles that contain quarks.

Quarks are unique among all elementary particles in that they have fractional electric charges, either +2/3 or -1/3. In an observable particle, the fractional charges of quarks in the particle add up to an integer charge for the combination.

The first generation quarks are designated up (u) and down (d); the second generation, charm (c) and strange (s); and the third generation, top (t) and bottom (b). The odd names for quarks do not describe any aspect of the particles; they merely give scientists a way to refer to a particular type of quark.

The up quark and the down quark make up protons and neutrons in atoms, as described below. The up quark has an electric charge of +2/3, and the down quark has a charge of -1/3. The second generation quarks have greater mass than those in the first generation. The charm quark has an electric charge of +2/3, and the strange quark has a charge of -1/3. The heaviest quarks are the third generation top and bottom quarks. Some scientists originally called the top and bottom quarks truth and beauty, but those names have dropped out of use. The top quark has an electric charge of +2/3, and the bottom quark has a charge of -1/3. The up quark, the charm quark, and the top quark behave similarly and are called up-type quarks. The down quark, the strange quark, and the bottom quark are called down-type quarks because they share the same electric charge.

Particles made of quarks are called hadrons (pronounced HA-dronz). Hadrons are not fundamental, since they consist of quarks, but they are commonly included in discussions of elementary particles. Two classes of hadrons can be found in nature: mesons (pronounced ME-zonz) and baryons (pronounced BARE-ee-onz).

Matter is composed of tiny particles called quarks. Quarks come in six varieties: up (u), down (d), charm (c), strange (s), top (t), and bottom (b). Quarks also have antimatter counterparts called antiquarks (designated by a line over the letter symbol). Quarks combine to form larger particles called baryons, and quarks and antiquarks combine to form mesons. Protons and neutrons, particles that form the nuclei of atoms, are examples of baryons. Positive and negative kaons are examples of mesons.



Mesons contain a quark and an antiquark (the antiparticle partner of the quark). Since they contain two fermions, mesons are bosons. The first meson that scientists detected was the pion. Pions exist as intermediary particles in the nuclei of atoms, forming from and being absorbed by protons and neutrons. The pion comes in three varieties: a positive pion (π+), a negative pion (π-), and an electrically neutral pion (π0). The positive pion consists of an up quark and a down antiquark. The up quark has charge +2/3 and the down antiquark has charge +1/3, so the charge on the positive pion is +1. Positive pions have an average lifetime of 26 nanoseconds (a nanosecond is one-billionth of a second). The negative pion contains an up antiquark and a down quark, so the charge on the negative pion is -2/3 plus -1/3, or -1. It has the same mass and average lifetime as the positive pion. The neutral pion contains an up quark and an up antiquark, so the electric charges cancel each other. It has an average lifetime of about 0.09 femtoseconds (a femtosecond is one-quadrillionth of a second).

Many other mesons exist. Five of the six quarks play a part in the formation of mesons; the top quark decays so quickly that it does not form mesons at all, and mesons containing other heavy quarks have very short lifetimes. Other mesons include the kaons (pronounced KAY-ons) and the D particles. Kaons (K) and Ds come in several different varieties, just as pions do. All varieties of kaons and some varieties of Ds contain either a strange quark or a strange antiquark. All Ds contain either a charm quark or a charm antiquark.

Three quarks together form a baryon. A baryon contains an odd number of fermions, so it is a fermion itself. Protons, the positively charged particles in all atomic nuclei, are baryons that consist of two up quarks and a down quark. Adding the charges of two up quarks and a down quark, +2/3 plus +2/3 plus -1/3, produces a net charge of +1, the charge of the proton. Protons have never been observed to decay.

The neutrons found inside atoms are baryons as well. A neutron consists of one up quark and two down quarks. Adding these charges gives +2/3 plus -1/3 plus -1/3 for a net charge of 0, making the neutron electrically neutral. Neutrons have a slightly greater mass than protons and an average lifetime of 930 seconds.
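
This charge arithmetic can be written as a few lines of code. The sketch below (the names and structure are our own) uses exact fractions so that the thirds add up cleanly:

    from fractions import Fraction

    # Electric charges of the six quarks, in units of the proton charge.
    CHARGE = {"u": Fraction(2, 3), "c": Fraction(2, 3), "t": Fraction(2, 3),
              "d": Fraction(-1, 3), "s": Fraction(-1, 3), "b": Fraction(-1, 3)}

    def hadron_charge(quarks, antiquarks=""):
        """Total charge: quark charges plus the negated antiquark charges."""
        return (sum(CHARGE[q] for q in quarks)
                - sum(CHARGE[q] for q in antiquarks))

    print(hadron_charge("uud"))                 # proton: 1
    print(hadron_charge("udd"))                 # neutron: 0
    print(hadron_charge("u", antiquarks="d"))   # positive pion: 1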

Many other baryons exist, and many contain quarks other than the up and down flavors. For example, lambda (Λ) and sigma (Σ) particles contain strange, charm, or bottom quarks. For lambda particles, the average lifetime ranges from 200 femtoseconds to 1.2 picoseconds. The average lifetime of sigma particles ranges from 0.0007 femtoseconds to 150 picoseconds.

British physicist Paul Dirac proposed an early theory of particle interactions in 1928. His theory predicted the existence of antiparticles, which combine to form antimatter. Antiparticles have the same mass as their normal particle counterparts, but they have several opposite quantities, such as electric charge and colour charge. Colour charge determines how particles react with one another under the strong force (the force that holds the nuclei of atoms together), just as electric charge determines how particles react to one another under the electromagnetic force. The antiparticles of fermions are also fermions, and the antiparticles of bosons are bosons.

All fermions have antiparticles. The antiparticle of an electron is called the positron (pronounced POZ-i-tron). The antiparticle of the proton is the antiproton. The antiproton consists of antiquarks—two up antiquarks and one down antiquark. Antiquarks have the opposite electric and colour charges of their counterparts. The antiparticles of neutrinos are called antineutrinos. Both neutrinos and antineutrinos have no electric charge or colour charge, but physicists still consider them distinct from one another. Neutrinos and antineutrinos behave differently when they collide with other particles and in radioactive decay. When a particle decays, for example, an antineutrino accompanies the production of a charged lepton, and a neutrino accompanies the production of a charged antilepton. In addition, reactions that absorb neutrinos do not absorb antineutrinos, giving further evidence of the distinction between neutrinos and antineutrinos.

When a particle and its associated antiparticle collide, they annihilate, or destroy, each other, creating a tiny burst of energy. Particle-antiparticle collisions would provide a very efficient source of energy if large numbers of antiparticles could be harnessed cheaply. Physicists already make use of this energy in machines called particle accelerators. Particle accelerators increase the speed (and therefore energy) of elementary particles and make the particles collide with one another. When particles and antiparticles (such as protons and antiprotons) collide, their kinetic energy and the energy released when they annihilate each other converts to matter, creating new and unusual particles for physicists to study.

Particle-antiparticle collisions could someday fuel spacecraft, which need only a slight push to change their speed or direction in the vacuum of space. The antiparticles and particles would have to be kept away from each other until the spacecraft needed the energy of their collisions. Finely tuned magnetic fields could be used to trap the particles and keep them separate, but these magnetic fields are difficult to set up and maintain. At the end of the 20th century, technology was not advanced enough to allow spacecraft to carry the equipment and particles necessary for using particle-antiparticle collisions as fuel.

All of the known forces in our universe can be classified as one of four types: electromagnetic, strong, weak, or gravitational. These forces affect everything in the universe. The electromagnetic force binds electrons to the atoms that compose our bodies, the objects around us, the Earth, the planets, and the Moon. The strong nuclear force holds together the nuclei inside the atoms that compose matter. Reactions due to the weak nuclear force fuel the Sun, providing light and heat. Gravity holds people and objects to the ground.

Each force has a particular property associated with it, such as electric charge for the electromagnetic force. Elementary particles that do not have electric charge, such as neutrinos, are electrically neutral and are not affected by the electromagnetic force.

Mechanical forces, such as the force used to push a child on a swing, result from the electrical repulsion between electrons and are thus electromagnetic. Even though a parent pushing a child on a swing feels his or her hands touching the child, the atoms in the parent’s hands never come into contact with the atoms of the child. The electrons in the parent’s atoms repel those in the child while remaining a slight distance away from them. In a similar manner, the Sun attracts Earth through gravity, without Earth ever contacting the Sun. Physicists call these forces nonlocal, because the forces appear to affect objects that are not in the same location, but at a distance from one another.

Theories about elementary particles, however, require forces to be local—that is, the objects affecting each other must come into contact. Scientists achieved this locality by introducing the idea of elementary particles that carry the force from one object to another. Experiments have confirmed the existence of many of these particles. In the case of electromagnetism, a particle called a photon travels between the two repelling electrons. One electron releases the photon and recoils, while the other electron absorbs it and is pushed away.

Each of the four forces has one or more unique force carriers, such as the photon, associated with it. These force carrier particles are bosons, since they do not obey the exclusion principle—any number of force carriers can have the exact same characteristics. They are also believed to be fundamental, so they cannot be split into smaller particles. Other than the fact that they are all fundamental bosons, the force carriers have very few common features. They are as unique as the forces they carry.
