December 3, 2010


For centuries, electricity and magnetism seemed distinct forces. In the 1800s, however, experiments showed many connections between these two forces. In 1864 British physicist James Clerk Maxwell drew together the work of many physicists to show that electricity and magnetism are actually different aspects of the same electromagnetic force. This force causes particles with similar electric charges to repel one another and particles with opposite charges to attract one another. Maxwell also showed that light is a travelling form of electromagnetic energy. The founders of quantum mechanics took Maxwell’s work one step further. In 1925 German-British physicist Max Born and German physicists Ernst Pascual Jordan and Werner Heisenberg showed mathematically that packets of light energy, later called photons, are emitted and absorbed when charged particles attract or repel each other through the electromagnetic force.


Any particle with electric charge, such as a quark or an electron, is subject to, or ‘feels,’ the electromagnetic force. Electrically neutral particles, such as neutrinos, do not feel it. The electric charge of a hadron is the sum of the charges on the quarks in the hadron. If the sum is zero, the electromagnetic force does not affect the hadron, although it does affect the quarks inside the hadron. Photons carry the electromagnetic force between particles but have no mass or electric charge themselves. Since photons have no electric charge, they are not affected by the force they carry.
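The charge bookkeeping described above can be sketched in a few lines of Python. This is only an illustration; the quark charges used are the standard values of +2/3 for up-type quarks and -1/3 for down-type quarks, in units of the elementary charge.

```python
from fractions import Fraction

# Quark electric charges in units of the elementary charge e.
# Up-type quarks carry +2/3, down-type quarks carry -1/3.
CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def hadron_charge(quarks):
    """Electric charge of a hadron = sum of its quark charges."""
    return sum(CHARGE[q] for q in quarks)

proton = hadron_charge(["u", "u", "d"])   # 2/3 + 2/3 - 1/3 = +1
neutron = hadron_charge(["u", "d", "d"])  # 2/3 - 1/3 - 1/3 = 0
print(proton, neutron)  # 1 0
```

The neutron's total charge of zero shows why the electromagnetic force does not affect the neutron as a whole, even though it affects the charged quarks inside it.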

Unlike neutrinos and some other electrically neutral particles, the photon does not have a distinct antiparticle. Particles that have antiparticles are like positive and negative numbers—they are each the other’s additive inverse. Photons are like the number zero, which is its own additive inverse. In effect, a photon is its own antiparticle.

In one example of the electromagnetic force, two electrons repel each other because they both have negative electric charges. One electron releases a photon, and the other electron absorbs it. Even though photons have no mass, their energy gives them momentum, a property that enables them to affect other particles. The momentum of the photon pushes the two electrons apart, just as the momentum of a basketball tossed between two ice skaters will push the skaters apart.

Gluons are particles of energy that carry the strong nuclear force. They hold together particles called quarks and antiquarks, which combine to form hadrons. Examples of hadrons include protons and neutral kaons. As gluons bind together quarks, or quarks and antiquarks, they affect a property of quarks and antiquarks called colour charge. The relationship between colour charge and the strong force is similar to that between electric charge and the electromagnetic force.

Quarks and particles made of quarks attract each other through the strong force. The strong force holds the quarks in protons and neutrons together, and it holds protons and neutrons together in the nuclei of atoms. If electromagnetism were the only force between quarks, the two up quarks in a proton would repel each other because they are both positively charged. (The up quarks are also attracted to the negatively charged down quark in the proton, but this attraction is not as great as the repulsion between the up quarks.) However, the strong force is stronger than the electromagnetic force, so it glues the quarks inside the proton together.

A property of particles called colour charge determines how the strong force affects them. The term colour charge has nothing to do with colour in the usual sense; it is just a convenient way for scientists to describe this property of particles. Colour charge is similar to electric charge, which determines a particle’s electromagnetic interactions. Quarks can have a colour charge of red, blue, or green. Antiquarks can have a colour charge of antired (also called cyan), antiblue (also called yellow), or antigreen (also called magenta). Quark types and colours are not linked—up quarks, for example, may be red, green, or blue.

All observed objects carry a colour charge of zero, so quarks (which compose matter) must combine to form hadrons that are colourless, or colour neutral. The colour charges of the quarks in hadrons therefore cancel one another. Mesons contain a quark of one colour and an antiquark of the quark’s anticolour. The colour charges cancel each other out and make the meson white, or colourless. Baryons contain three quarks, each with a different colour. As with light, the colours red, blue, and green combine to produce white, so the baryon is white, or colourless.
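Colour neutrality can be illustrated with a small toy model (an illustration only, not how physicists actually compute with colour charge): represent each colour as a count vector, let each anticolour be the negative of its colour, and call a combination colourless when every component of the sum is equal—just as equal parts of red, green, and blue light make white.

```python
# Toy model: colour charge as a (red, green, blue) count vector.
# An anticolour is the negative of its colour.
COLOUR = {"r": (1, 0, 0), "g": (0, 1, 0), "b": (0, 0, 1)}
ANTI = {f"anti{c}": tuple(-x for x in v) for c, v in COLOUR.items()}
COLOUR.update(ANTI)

def is_colourless(charges):
    """True when all components of the colour sum are equal ("white")."""
    total = [sum(v) for v in zip(*(COLOUR[c] for c in charges))]
    return len(set(total)) == 1

print(is_colourless(["r", "antir"]))   # meson: True
print(is_colourless(["r", "g", "b"]))  # baryon: True
print(is_colourless(["r", "g"]))       # not observed in nature: False
```

A meson (colour plus its anticolour) sums to zero in every component, while a baryon (one quark of each colour) sums to one in every component; both count as colourless in this model, matching the two hadron families described above.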

The strong force holds together particles called quarks inside protons. When a fast-moving particle collides with a proton, the strong force can convert the energy of the collision into matter, resulting in the creation of a new particle.

The bosons that carry the strong force between particles are called gluons. Gluons have no mass or electric charge and, like photons, they are their own antiparticle. Unlike photons, however, gluons do have colour charge. They carry a colour and an anticolour. Possible gluon colour combinations include red-antiblue, green-antired, and blue-antigreen. Because gluons carry colour charge, they can attract each other, while the colourless, electrically neutral photons cannot. Colours and anticolours attract each other, so gluons that carry one colour will attract gluons that carry the associated anticolour.

Gluons carry the strong force by moving between quarks and antiquarks and changing the colours of these particles. Quarks and antiquarks in hadrons constantly exchange gluons, changing colours as they emit and absorb gluons. Baryons and mesons are all colourless, so each time a quark or antiquark changes colour, other quarks or antiquarks in the particle must change colour as well to preserve the balance. The constant exchange of gluons and colour charge inside mesons and baryons creates a colour force field that holds the particles together.

The strong force is the strongest of the four forces in atoms. Quarks are bound so tightly to each other that they cannot be isolated. Separating a quark from an antiquark requires more energy than creating a quark and antiquark does. Attempting to pull apart a meson, then, just creates another meson: The quark in the original meson combines with a newly created antiquark, and the antiquark in the original meson combines with a newly created quark.

In addition to holding quarks together in mesons and baryons, gluons and the strong force also attract mesons and baryons to one another. The nuclei of atoms contain two kinds of baryons: protons and neutrons. Protons and neutrons are colourless, so the strong force does not attract them to each other directly. Instead, the individual quarks in one neutron or proton attract the quarks of its neighbours. The pull of quarks toward each other, even though they occur in separate baryons, provides enough energy to create a quark-antiquark pair. This pair of particles forms a type of meson called a pion. The exchange of pions between neutrons and protons holds the baryons in the nucleus together. The strong force between baryons in the nucleus is called the residual strong force.

While the strong force holds the nucleus of an atom together, the weak force can make the nucleus decay, changing some of its particles into other particles. The weak force is so named because it is far weaker than the electromagnetic or strong forces. For example, an interaction involving the weak force is 10 quintillion (10 billion billion) times less likely to occur than an interaction involving the electromagnetic force. Three particles, called vector bosons, carry the weak force. The weak force equivalent to electric charge and colour charge is a property called weak hypercharge. Weak hypercharge determines whether the weak force will affect a particle. All fermions possess weak hypercharge, as do the vector bosons that carry the weak force.

All elementary particles, except the force carriers of the other forces and the Higgs boson, interact by means of the weak force. But the effects of the weak force are usually masked by the other, stronger forces. The weak force is not very significant when considering most of the interactions between two quarks. For example, the strong force completely overwhelms the weak force when a quark bounces off another quark. Nor does the weak force significantly affect interactions between two charged particles, such as the interaction between an electron and a proton. The electromagnetic force dominates those interactions.

The weak force becomes significant when an interaction does not involve the strong force or the electromagnetic force. For example, neutrinos have neither electric charge nor colour charge, so any interaction involving a neutrino must be due to either the weak force or the gravitational force. The gravitational force is even weaker than the weak force on the scale of elementary particles, so the weak force dominates in neutrino interactions.

One example of a weak interaction is beta decay involving the decay of a neutron. When a neutron decays, it turns into a proton and emits an electron and an electron antineutrino. The neutron and antineutrino are electrically neutral, ruling out the electromagnetic force as a cause. The antineutrino and electron are colourless, so the strong force is not at work. Beta decay is due solely to the weak force.

The weak force is carried by three vector bosons. These bosons are designated the W+, the W-, and the Z0. The W bosons are electrically charged (+1 and –1), so they can feel the electromagnetic force. These two bosons are each other’s antiparticle counterparts, while the Z0 is its own antiparticle. All three vector bosons are colourless. A distinctive feature of the vector bosons is their mass. The weak force is the only force carried by particles that have mass. These massive force carriers cannot travel as far as massless force carriers, so the weak force acts over shorter distances than the other three forces.
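The link between a carrier's mass and the range of its force can be estimated with the uncertainty principle: a massive carrier can exist only briefly, giving a range of roughly hbar*c divided by the carrier's rest energy. The sketch below uses standard approximate values (a W boson rest energy of about 80.4 GeV and a pion rest energy of about 140 MeV) that are not stated in the text above.

```python
# Rough range of a force carried by a massive boson: r ~ hbar*c / (m*c^2).
# All figures are approximate, for illustration only.
HBAR_C_MEV_FM = 197.3    # hbar*c in MeV * femtometres
W_MASS_MEV = 80_400      # W boson rest energy, ~80.4 GeV
PION_MASS_MEV = 140      # pion rest energy (residual strong force carrier)

weak_range_fm = HBAR_C_MEV_FM / W_MASS_MEV
pion_range_fm = HBAR_C_MEV_FM / PION_MASS_MEV
print(f"weak force range ~ {weak_range_fm:.4f} fm")    # ~0.0025 fm
print(f"pion-mediated range ~ {pion_range_fm:.2f} fm") # ~1.41 fm
```

The weak force's reach is thus hundreds of times shorter than even the residual strong force between nucleons, which is itself confined to roughly the size of an atomic nucleus.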

When the weak force affects a particle, the particle emits one of the three weak vector bosons—W+, W-, or Z0—and changes into a different particle. The weak vector boson then decays to produce other particles. In interactions that involve the W+ and W-, a particle changes into a particle with a different electric charge. For example, in beta decay, one of the down quarks in a neutron changes into an up quark and the neutron releases a W- boson. This change in quark type converts the neutron (two down quarks and an up quark) to a proton (one down quark and two up quarks). The W- boson released by the neutron could then decay into an electron and an electron antineutrino. In Z0 interactions, a particle changes into a particle with the same electric charge.
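The charge bookkeeping of beta decay can be checked step by step. The snippet below is an illustrative sketch using the standard quark and lepton charges; it verifies that electric charge is conserved at each stage of the decay described above.

```python
from fractions import Fraction

# Electric charges in units of the elementary charge e.
Q = {"u": Fraction(2, 3), "d": Fraction(-1, 3),
     "e-": -1, "anti-nu_e": 0, "W-": -1}

def charge(particles):
    return sum(Q[p] for p in particles)

neutron = ["u", "d", "d"]
proton = ["u", "u", "d"]

# Step 1: a down quark emits a W- and becomes an up quark.
assert charge(["d"]) == charge(["u", "W-"])
# Step 2: the W- decays to an electron and an electron antineutrino.
assert charge(["W-"]) == charge(["e-", "anti-nu_e"])
# Overall: neutron -> proton + electron + antineutrino conserves charge.
assert charge(neutron) == charge(proton + ["e-", "anti-nu_e"])
print("charge conserved at every step")
```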

A quark or lepton can change into a different quark or lepton from another generation only by the weak interaction. Thus the weak force is the reason that all stable matter contains only first generation leptons and quarks. The second and third generation leptons and quarks are heavier than their first generation counterparts, so they quickly decay into the lighter first generation leptons and quarks by exchanging W and Z bosons. The first generation particles have no lighter counterparts into which they can decay, so they are stable.

The gravitational force is probably the most familiar force, yet it is the only force not described by the standard model of particle physics. In 1915 German-born American physicist Albert Einstein developed a significant new approach to the concept of gravity: the general theory of relativity. While general relativity successfully described many phenomena, the theory was framed differently than were theories of particle physics, making relativity difficult to reconcile with particle physics. Through the end of the 20th century, all efforts to develop a theory of gravitation entirely consistent with particle physics failed.

Physicists call their goal of an overall theory a ‘theory of everything,’ because it would explain all four known forces in the universe and how these forces affect particles. In such a theory, the particles that carry the gravitational force would be called gravitons. Gravitons should share many characteristics with photons because, like electromagnetism, gravitation is a long-range force that gets weaker with distance. Gravitons should be massless and have no electric charge or colour charge. The graviton is the only force carrier not yet observed in an experiment.

Gravitation is the weakest of the four forces on the atomic scale, but it can become extremely powerful on a cosmic scale. For instance, the gravitational force between Earth and the Sun holds Earth in orbit. Gravity can have large effects, because, unlike the electromagnetic force, it is always attractive. Every particle in your body has some tiny gravitational attraction to the ground. The innumerable tiny attractions add up, which is why you do not float off into space. The negative charge on electrons, however, cancels out the positive charge on the protons in your body, leaving you electrically neutral.
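Just how weak gravitation is on the atomic scale can be shown by comparing the gravitational and electric forces between two protons. Because both forces fall off as the square of the distance, the distance cancels in the ratio. The constants below are standard approximate values, not figures from the text.

```python
# Ratio of gravitational to electric force between two protons.
G = 6.674e-11      # gravitational constant, N m^2 / kg^2
K = 8.988e9        # Coulomb constant, N m^2 / C^2
M_P = 1.673e-27    # proton mass, kg
E = 1.602e-19      # elementary charge, C

ratio = (G * M_P**2) / (K * E**2)
print(f"F_grav / F_elec ~ {ratio:.1e}")  # roughly 8e-37
```

Gravitation between two protons is weaker than their electric repulsion by a factor of about 10^36, which is why it can be ignored in particle interactions even though it dominates on cosmic scales.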

Another unique feature of gravitation is its universality—every object is gravitationally attracted to every other object, even objects without mass. For example, the theory of relativity predicted that light should feel the gravitational force. Before Einstein, scientists thought that gravitational attraction depended only on mass. They thought that light, being massless, would not be attracted by gravitation. Relativity, however, holds that gravitational attraction depends on the energy of an object and that mass is just one possible form of energy. Einstein was proven correct in 1919, when astronomers observed that the gravitational attraction between light from distant stars and the Sun bends the path of the light around the Sun.

The standard model of particle physics includes an elementary boson that is not a force carrier: the Higgs boson. Scientists have not yet detected the Higgs boson in an experiment, but they believe it gives elementary particles their mass. Composite particles receive their mass from their constituent particles, and in some cases, the energy involved in holding these particles together. For example, the mass of a neutron comes from the mass of its quarks and the energy of the strong force holding the quarks together. The quarks themselves, however, have no such source of mass, which is why physicists introduced the idea of the Higgs boson. Elementary particles should obtain their mass by interacting with the Higgs boson.

Scientists expect the mass of the Higgs boson to be large compared to that of most other fundamental particles. Physicists can create more massive particles by forcing smaller particles to collide at high speeds. The energy released in the collisions converts to matter. Producing the Higgs boson, with its relatively large mass, will require a tremendous amount of energy. Many scientists are searching for the Higgs boson using machines called particle colliders. Particle colliders shoot a beam of particles at a target or another beam of particles to produce new, more massive particles.

Scientific progress often occurs when people find connections between apparently unconnected phenomena. For example, 19th-century British physicist James Clerk Maxwell made a connection between electric forces on charged objects and the force on a moving charge due to a magnet. He deduced that the electric force and the magnetic force were just different aspects of the same force. His discovery led to a deeper understanding of electromagnetism.

The unification of electricity and magnetism and the discovery of the strong and weak nuclear forces in the mid-20th century left physicists with four apparently independent forces: electromagnetism, the strong force, the weak force, and gravitation. Physicists believe they should be able to connect these forces with one unified theory, called a theory of everything (TOE). A TOE should explain all particles and particle interactions by demonstrating that these four forces are different aspects of one universal force. The theory should also explain why fermions come in three generations when all stable matter contains fermions from just the first generation.

Scientists also hope that in explaining the extra generations, a TOE will explain why particles have the masses they do. They would like an explanation of why the top quark is so much heavier than the other quarks and why neutrinos are so much lighter than the other fermions. The standard model does not address these questions, and scientists have had to determine the masses of particles by experiment rather than by theoretical calculations.

Unification of all of the forces, however, is not an easy task. Each force appears to have distinctive properties and unique force carriers. In addition, physicists have yet to describe successfully the gravitational force in terms of particles, as they have for the other three forces. Despite these daunting obstacles, particle physicists continue to seek a unified theory and have made some progress. Starting points for unification include the electroweak theory and grand unification theories.

American physicists Sheldon Glashow and Steven Weinberg and Pakistani physicist Abdus Salam completed the first step toward finding a universal force in the 1960s with the electroweak theory, now part of the standard model of particle physics. Using a branch of mathematics called group theory, they showed how the weak force and the electromagnetic force could be combined mathematically into a single electroweak force. The electromagnetic force seems much stronger than the weak force at low energies, but that disparity is due to the differences between the force carriers. At higher energies, the difference between the W and Z bosons of the weak force, which have mass, and the massless photons of the electromagnetic force becomes less significant, and the two forces become indistinguishable.

The standard model also uses group theory to describe the strong force, but scientists have not yet been able to unify the strong force with the electroweak force. The next step toward finding a TOE would be a grand unified theory (GUT), a theory that would unify the strong, electromagnetic, and weak forces (the forces currently described by the standard model). A GUT should describe all three forces as different aspects of one force. At high energies, the distinctions between the three aspects should disappear. The only force remaining would then be the gravitational force, which scientists have not been able to describe with particle theory.

One type of GUT contains a theory called supersymmetry (SUSY), first suggested in 1971. Supersymmetric theories set rules for new symmetries, or pairings, between particles and interactions. The standard model, for example, requires that every particle have an associated antiparticle. In a similar manner, SUSY requires that every particle have an associated supersymmetric partner. While particles and their associated antiparticles are either both fermions or both bosons, the supersymmetric partner of a fermion should be a boson, and the supersymmetric partner of a boson should be a fermion. For example, the fermion electron should be paired with a boson called a selectron, and the fermion quarks with bosons called squarks. The force-carrying bosons, such as photons and gluons, should be paired with fermions, such as particles called photinos and gluinos. Scientists have yet to detect these supersymmetric partners, but they believe the partners may be massive compared to known particles, and therefore require too much energy to create with current particle accelerators.

Another approach to grand unification involves string theories. British physicist Paul Dirac developed the first string theory in 1950. String theories describe elementary particles as loops of vibrating string. Scientists believe these strings are currently invisible to us because the vibrations do not occur in the four familiar dimensions of space and time—some string theories, for example, need as many as 26 dimensions to explain particles and particle interactions. Incorporating supersymmetry with string theory results in theories of superstrings. Superstring theories are one of the leading candidates in the quest to unify gravitation with the other forces. The mathematics of superstring theories incorporates gravity into particle physics easily. Many scientists, however, do not believe superstrings are the answer, because they have not detected the additional dimensions required by string theory.

Studying elementary particles requires specialized equipment, the skill of deduction, and much patience. All of the fundamental particles—leptons, quarks, force-carrying bosons, and the Higgs boson—appear to be ‘point particles.’ A point particle is infinitely small—it exists at a certain point in space without taking up any space. These fundamental particles are therefore impossible to see directly, even with the most powerful microscopes. Instead, scientists must deduce the properties of a particle from the way it affects other objects.

In a way, studying an elementary particle is like tracking a white polar bear in a field of snow: The polar bear may be impossible to see, but you can see the tracks it left in the snow, you can find trees it clawed, and you can find the remains of polar bear meals. You might even smell or hear the polar bear. From these observations, you could determine the position of the polar bear, its speed (from the spacing of the paw prints), and its weight (from the depth of the paw prints). No one can see an elementary particle, but scientists can look at the tracks it leaves in detectors, and they can look at materials with which it has interacted. They can even measure electric and magnetic fields caused by electrically charged particles. From these observations, physicists can deduce the position of an elementary particle, its speed, its weight, and many other properties.

Most particles are extremely unstable, which means they decay into other particles very quickly. Only the proton, neutron, electron, photon, and neutrinos can be detected a significantly long time after they are created. Studying the other particles, such as mesons, the heavier baryons, and the heavier leptons, requires detectors that can take many (250,000 or more) measurements per second. In addition, these heavier particles do not naturally exist on the surface of Earth, so scientists must create them in the laboratory or look to natural laboratories, such as stars and Earth’s atmosphere. Creating these particles requires extremely high amounts of energy.

Particle physicists use large, specialized facilities to measure the effects of elementary particles. In some cases, they use particle accelerators and particle colliders to create the particles to be studied. Particle accelerators are huge devices that use electric and magnetic fields to speed up elementary particles. Particle colliders are chambers in which beams of accelerated elementary particles crash into one another. Scientists can also study elementary particles from outer space, from sources such as the Sun. Physicists use large particle detectors, complex machines with several different instruments, to measure many different properties of elementary particles. Particle traps slow down and isolate particles, allowing direct study of the particles’ properties.

The large circle marks the location of the Large Hadron Collider (LHC) at CERN, the European particle physics laboratory. The tunnel where the particles are accelerated lies 100 m (330 ft) underground and is 27 km (16.7 mi) in circumference. The smaller circle is the site of a smaller proton-antiproton collider. The border of France and Switzerland bisects the CERN site and the two accelerator rings.

When energetic particles collide, the energy released in the collision can convert to matter and produce new particles. The more energy produced in the collision, the heavier the new particles can be. Particle accelerators produce heavier elementary particles by accelerating beams of electrons, protons, or their antiparticles to very high energies. Once the accelerated particles reach the desired energy, scientists steer them into a collision. The particles can collide with a stationary object (in a fixed target experiment) or with another beam of accelerated particles (in a collider experiment).
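The relation between collision energy and the mass of the particles that can be created is Einstein's E = mc^2. As an illustration, the snippet below computes the energy equivalent of one proton mass, using standard approximate constants rather than figures from the text.

```python
# Energy needed to create a particle of a given mass: E = m c^2.
C = 2.998e8             # speed of light, m/s
M_PROTON = 1.673e-27    # proton mass, kg
JOULES_PER_GEV = 1.602e-10

energy_j = M_PROTON * C**2
energy_gev = energy_j / JOULES_PER_GEV
print(f"{energy_gev:.2f} GeV")  # about 0.94 GeV per proton mass
```

A collider must therefore deliver roughly 1 GeV of collision energy for every proton mass of new matter it is to create, which is why producing heavy particles demands such enormous accelerators.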

Particle accelerators come in two basic types—linear accelerators and circular accelerators. Devices that accelerate particles in a straight line are called linear accelerators. They use electric fields to speed up charged particles. Traditional (not flat screen) television sets and computer monitors use this method to accelerate electrons toward the screen (Television: Picture Tube). Linear accelerators have two main uses: They can produce a beam of particles for a fixed target experiment, or they can feed particles into a circular accelerator.

Circular accelerators, or synchrotrons (pronounced SIN-krow-trons), use magnetic fields to accelerate charged particles in a circle. The particles can circle many times, gaining energy each time they travel around the circle. Thus synchrotrons can accelerate particles to extremely high energies. Synchrotrons can be used in fixed target experiments, or they can accelerate two beams simultaneously for use in a collider experiment.

Positively charged particles bend a different way in a magnetic field than do negatively charged particles, so a synchrotron can accelerate electrons in one direction and positrons in the other. A synchrotron can also accelerate protons in one direction and antiprotons in the other. Scientists are even considering building a synchrotron to accelerate less stable particles, such as muons and antimuons. Once particles reach the desired energy, experimenters slightly change the magnetic field controlling the particles, bringing the two beams into a collision. The particles and antiparticles annihilate each other. The resulting energy produces numerous other particles for the scientists to study.

Many great discoveries in particle physics have been made by looking to the heavens. The universe is a natural particle accelerator, and particles from outer space continually bombard Earth’s atmosphere. Extraterrestrial particles called cosmic rays—and their collisions with other particles in the atmosphere—produce many unusual and unstable particles. Scientists first discovered the muon and the pion in cosmic rays, as well as the positron. Mesons made up of the strange quark were also first spotted in cosmic ray experiments before modern large accelerator facilities were built.

Neutrinos stream to Earth from cosmic sources. Nuclear reactions in the Sun produce incredibly large numbers of electron neutrinos that can then be detected on Earth. Experiments studying these solar neutrinos suggest that the mass of the neutrino is very small but that it is not zero.

Particle accelerators and detectors provide physicists with invaluable information about subatomic particles. Using particle accelerators, physicists accelerate particles to very high energies. Then they smash these particles into each other or into a target and use particle detectors to measure and record the properties of the particles produced. This Mark II particle detector is part of the 3.2 km (2 mi) linear accelerator in California, called the Stanford Linear Accelerator Center.

Every particle experiment needs particle detectors. Particle detectors come in many shapes, sizes, and types. Some detectors track particles, some count the number of particles passing by, some measure the energy left in the detector by a particle, and some are even more specialized. In addition, many detectors contain large magnets to bend the paths of charged particles. The direction the path bends indicates the electric charge of the particle, and the amount the path bends indicates the mass and speed of the particle.
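The link between track curvature and momentum follows from a standard result: a charged particle moving perpendicular to a magnetic field follows a circle of radius r = p / (qB), so measuring the radius gives the momentum. The function and numbers below are an illustrative sketch, not taken from the text.

```python
# Momentum from the curvature of a track in a magnetic field: p = q * B * r.
E = 1.602e-19  # elementary charge, C

def momentum_from_track(radius_m, b_tesla, charge=E):
    """Momentum (kg m/s) of a charged particle from its track radius."""
    return charge * b_tesla * radius_m

# Example: a singly charged particle curving with a 1 m radius in a 2 T field.
p = momentum_from_track(radius_m=1.0, b_tesla=2.0)
p_gev = p * 2.998e8 / 1.602e-10  # convert to GeV/c
print(f"p ~ {p_gev:.2f} GeV/c")  # ~0.60 GeV/c
```

A gently curving track therefore signals a high-momentum particle, while a tightly curled track signals a low-momentum one, and the direction of curling reveals the sign of the charge.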

Physicists have extensively studied, and come to understand, commonly occurring interactions between particles, so most current particle experiments focus on rare interactions, which are less well understood. Experiments must generate incredibly large numbers of particle interactions to produce a few of the desired rare interactions. Scientists are not interested in studying the majority of interactions produced in an experiment, so they need fast computers and sophisticated programs to sort the data and pick out the important interactions.

Each type of particle has distinct properties, so each type of particle behaves differently in detectors. Experiments typically have many types of detectors to distinguish between different particles. Each detector produces such an enormous amount of data on each interaction that analyzing particle experiments requires a huge amount of computer time.

Scientists use particle traps to study particles that are more stable and have less energy than particles studied in accelerators and colliders. Magnetic and electric fields can be used to trap charged particles. The fields control the movement of the particle, keeping it confined to a small area. Neutral particles, such as atoms, can also be trapped, but that task is much more difficult. Lasers, beams of coherent light, are often used to trap neutral particles. Light carries energy, and when light strikes an object, it exerts a small force on the object. Shining lasers on atoms or other neutral particles causes the particles to gradually slow down and be trapped.

The rules of quantum theory prevent any particle trap from being perfect. A perfect trap would enable a physicist to precisely determine a particle’s position and speed. A rule called the uncertainty principle states that a particle’s location and speed cannot be precisely measured at the same time. Increasing the precision in one measurement increases the uncertainty in the other. If a particle trap was infinitely small, the location of the particle would be known precisely, but this would make measurement of the particle’s speed infinitely uncertain: The scientist would not be able to determine anything about the particle’s speed. Likewise, if the particle trap slowed the particle to a complete rest, its speed would be known precisely, which would make the particle’s location infinitely uncertain: The scientist would not be able to determine anything about position, or whether the particle was even in the trap.
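The trade-off described above can be made quantitative with the uncertainty relation dx * dp >= hbar/2: the smaller the trap, the larger the minimum spread in the particle's speed. The snippet below is an illustrative calculation for an electron, using standard approximate constants.

```python
# Minimum speed uncertainty for an electron confined to a trap of a
# given size, from dx * dp >= hbar / 2.
HBAR = 1.055e-34        # reduced Planck constant, J s
M_ELECTRON = 9.109e-31  # electron mass, kg

def min_speed_spread(trap_size_m):
    """Minimum speed uncertainty (m/s) for a trapped electron."""
    dp = HBAR / (2 * trap_size_m)
    return dp / M_ELECTRON

for size in (1e-6, 1e-9):  # a micrometre-sized trap vs a nanometre-sized trap
    print(f"{size:.0e} m trap -> dv >= {min_speed_spread(size):.3g} m/s")
```

Shrinking the trap by a factor of a thousand raises the minimum speed uncertainty by the same factor, which is why no trap can pin down both where a particle is and how fast it is moving.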

Scientists use particle traps to compare the properties of particles and antiparticles. Scientists are also trying to create antihydrogen using particle traps. Antiparticles, such as antiprotons and positrons, usually exist for just a brief time before they combine with their counterpart particles in ordinary matter and are annihilated. A particle trap, however, can confine an antiproton without letting it contact its ordinary matter counterpart, the proton. Positrons can be confined in a similar manner. Researchers are currently using particle traps to bring positrons close enough to antiprotons so these particles can bind and make antihydrogen, just as electrons and protons make hydrogen.

The history of particle physics began in the early 20th century with the discovery of the parts of the atom and the photon. Theories explaining the behaviour of these particles led physicists to propose the existence of neutrinos in 1928 and antimatter in 1931. Antimatter was discovered in 1933, but it took experimenters almost 30 years to confirm the existence of neutrinos. Physicists were aided in their studies of particles by the first particle accelerator, invented in 1928, and by its successor, which was developed in the 1940s.

During the 1930s and 1940s scientists discovered muons and pions in cosmic rays from space. They did not yet understand, however, that pions, as well as the protons and neutrons inside atoms, were composed of quarks.

Two important advances in the theory of elementary particles occurred in the 1960s: Physicists proposed the existence of quarks, and they introduced the standard model, a theory that explains how the strong and weak nuclear forces work. The standard model predicted the existence of many more particles, which scientists later detected in experiments. According to the standard model, the number of truly elementary particles is now 30: 6 quarks, 6 antiquarks, 6 leptons, 6 antileptons, the photon, the gluon, the 3 bosons of the weak force, and the Higgs boson. (The graviton, while it may exist, is not included in the standard model.) Particle physicists continue to revise their theories and often propose new particles to explain different phenomena. Some of the particles that have been suggested, but not yet detected, are the axion, the squark, and the magnetic monopole.
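The count of 30 can be tallied explicitly; a trivial sketch of the arithmetic:

```python
# Tally of the standard model's elementary particles as counted in the text.
# The grouping is for illustration; the point is the arithmetic: 30 in total.
particle_counts = {
    "quarks": 6,
    "antiquarks": 6,
    "leptons": 6,
    "antileptons": 6,
    "photon": 1,
    "gluon": 1,
    "weak-force bosons": 3,  # the W+, W-, and Z
    "Higgs boson": 1,
}
total = sum(particle_counts.values())
print(total)  # -> 30
```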

In seeking to explain the behaviour of atoms, physicists of the late 1800s searched for the source of negative electric charge in atoms. British physicist Sir Joseph John Thomson is credited with the discovery of the electron. Although many others had studied electricity and streams of electrons, Thomson was the first to measure the properties of individual electrons and to suggest that electrons existed within atoms. He measured the ratio of electron mass to electron charge and, in 1897, claimed that electrons could be found in all matter.

Matter is not made up entirely of electrons–atoms also contain protons and neutrons. No one person is given credit for discovering the proton. Many experiments around the turn of the century examined its properties, but it was not named proton until 1920. The discovery of the neutron came much later, because the neutron is electrically neutral and therefore much harder to detect. British physicist James Chadwick discovered the neutron in 1932. He won the 1935 Nobel Prize in physics for this discovery.

Before the development of particle physics, scientists had a difficult time explaining the behaviour of light. Light often behaves like a wave, such as a wave of sound or a wave on the surface of water. Other times, however, light behaves more like a beam of particles. To explain this behaviour, Albert Einstein proposed in 1905 that light came in little packets, or particles, of energy. He was awarded the 1921 Nobel Prize in physics for his explanation. In 1926 scientists named these particles of light photons.

Famous for producing the first controlled nuclear reaction in 1942, physicist Enrico Fermi worked as a consultant for the Manhattan Project during World War II, helping to design the atomic bomb. He won a Nobel Prize in 1938 for his work on artificial radioactivity. He inspired many students and continues to be honoured through various awards and institutions that were established in his name, such as the Fermi National Accelerator Laboratory in Batavia, Illinois.

In the early part of the 20th century, scientists studying beta decay noticed that the sum of the mass and energy before the decay was greater than the sum of the mass and energy present after the decay. To account for this missing energy, Austrian-born American physicist Wolfgang Pauli proposed the existence of a new particle in 1930. Pauli called his suggestion a drastic measure, because scientists at the time did not expect any more elementary particles. His hypothesis proved correct, however, and this particle is now known as the electron neutrino. The neutrino was escaping unseen because it has no electric charge, no colour charge, and only a very small mass. American physicists Frederick Reines and Clyde Cowan were the first to experimentally detect the neutrino, in 1956, almost 30 years after Pauli first proposed its existence. Reines shared the 1995 Nobel Prize in physics for his part in this experiment.

Wolfgang Pauli won the 1945 Nobel Prize in physics for his discovery of the exclusion principle, also called the Pauli principle, which states that no two electrons in an atom can have identical sets of quantum numbers. These numbers define an electron’s energy, and the exclusion principle allowed scientists to describe the arrangement and behaviour of electrons in the chemical elements.

Pauli received a Nobel Prize as well, but not for his proposal of neutrinos. He won the 1945 Nobel Prize in physics for developing the exclusion principle. The exclusion principle is the rule of quantum theory that says that no two fermions with exactly the same characteristics can occupy the same space. Pauli proposed the exclusion principle in 1925. A year later Italian-born American physicist Enrico Fermi developed the mathematical equations to explain why two fermions cannot occupy the same state.

In 1931 British physicist Paul Dirac produced a precursor of modern particle theories. Dirac’s equations described the known electromagnetic properties of particles well, but to make his theory work more comprehensively, Dirac had to introduce the idea of antiparticles, antimatter counterparts of existing particles. The existence of these particles was confirmed in 1932, when American physicist Carl Anderson saw something peculiar while looking at tracks made by cosmic rays in a type of particle detector called a cloud chamber. A particle passing through the cloud chamber seemed to have the mass of an electron, but it had a positive rather than a negative charge—he had discovered the positron. Anderson shared the 1936 Nobel Prize in physics for this confirmation of Dirac’s theory.

In 1934 Japanese physicist Yukawa Hideki predicted the existence of a force carrier holding neutrons and protons together in the nucleus of an atom. He believed this particle should have a mass between the mass of the electron and that of the proton. Yukawa’s theory attempted to describe how the strong force affects particle interactions, but it was not complete because it did not describe the fundamental interactions between quarks and gluons. It was, however, highly successful at describing the way protons and neutrons bond inside the nucleus. The theory predicted the existence of the pion, the meson that holds the particles in an atomic nucleus together.

When Carl Anderson and American physicist Seth Neddermeyer detected a new particle in cosmic ray experiments two years later, many thought this new particle was Yukawa’s meson. But some properties of the new particle did not match Yukawa’s theory. This dilemma appeared to be solved in 1947 when yet another particle, the pion, was found in cosmic rays. The pion’s behaviour was consistent with predictions in Yukawa’s theory. The particle that Anderson and Neddermeyer discovered was later found to be the muon, but in the beginning, no one could tell the purpose of this particle. Anderson and Neddermeyer’s muon turned out to be the first indication of a new type of lepton. Scientists detected the muon neutrino in 1962 and thereafter regarded the muon and its neutrino partner as a second generation of leptons.

In the same year that the pion was discovered, physicists detected another particle in cosmic ray experiments. This particle, now called the lambda, behaved differently from known particles. Starting in 1953, scientists found many more such unexpected particles. Because these particles were different, physicists called them ‘strange.’ These particles were eventually shown to contain strange quarks, which received their name from the description of the particles they compose.

While cosmic ray experiments revealed a myriad of particles, scientists also sought ways to create unusual and unstable particles in laboratories. American physicist Ernest Lawrence invented the cyclotron, a type of circular accelerator, in 1932. The cyclotron, however, could not achieve very high energies. Lawrence’s model was improved independently by American physicist Edwin McMillan and Soviet physicist Vladimir Veksler in the 1940s, resulting in the synchrocyclotron. The high energies available using the synchrocyclotron led to many important particle discoveries.

American physicist Murray Gell-Mann won the 1969 Nobel Prize in physics. He researched the interactions of elementary particles and advanced the quark theory.

By the 1960s hundreds of different ‘elementary’ particles had been seen. Physicists found they could separate these particles into two main groups: those that interacted by the strong force and those that did not. They called the strongly interacting particles hadrons, and the particles without strong interactions leptons. American physicist Murray Gell-Mann proposed in 1964 that many of these observed particles might not be elementary after all. He showed that all of the properties of hadrons could be explained if they were various combinations of three quarks. Normal matter, such as protons, neutrons, and pions, contains only up and down quarks, and strange matter (such as the lambda particles) contains one or more strange quarks along with up and down quarks. Gell-Mann was honoured for his contributions in 1969 with the Nobel Prize in physics. Gell-Mann’s quark theory was confirmed experimentally by American physicists Jerome Friedman and Henry Kendall and Canadian physicist Richard Taylor in 1969. Their experiment demonstrated that protons have internal structure. This experiment earned them the 1990 Nobel Prize in physics.
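Gell-Mann’s scheme can be illustrated with simple bookkeeping: a hadron’s electric charge is the sum of its quarks’ fractional charges. A minimal sketch (the quark symbols and the helper function are illustrative, not from the text):

```python
# A hadron's electric charge is the sum of its quarks' charges, in units of the
# proton charge. Fraction avoids floating-point error with thirds.
from fractions import Fraction

QUARK_CHARGE = {
    "u": Fraction(2, 3),   # up
    "d": Fraction(-1, 3),  # down
    "s": Fraction(-1, 3),  # strange
}

def hadron_charge(quarks):
    """Electric charge of a hadron from its quark content."""
    return sum(QUARK_CHARGE[q] for q in quarks)

print(hadron_charge("uud"))  # proton  -> 1
print(hadron_charge("udd"))  # neutron -> 0
print(hadron_charge("uds"))  # lambda  -> 0
```

The lambda’s zero charge, like the neutron’s, comes out of the same three-quark arithmetic.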

In 1964, the same year Gell-Mann introduced his quark theory, British physicist Peter Higgs proposed the existence of the Higgs boson, building on the work others had done in the early 1960s. Some scientists also predicted that same year that a fourth quark—the charm quark—should exist. Hadrons containing the charm quark were finally detected in 1976, leaving the number of quarks and the number of leptons equal at four apiece. Scientists divided the leptons and quarks into two generations, with the up and down quarks and the electron and electron neutrino in the first, and the strange and charm quarks and muon and muon neutrino in the second.

A third generation of particles entered the scene in 1975, just a year before the charm quark was discovered. American physicist Martin Perl and his collaborators detected a third charged lepton, the tau. Scientists assumed immediately that a third neutrino accompanied the tau, although it was not directly detected until 2000. Perl shared the 1995 Nobel Prize in physics with American physicist Frederick Reines for his part in discovering the tau lepton.

Physicists discovered a third generation of quarks in 1977. American physicist Leon Lederman and his collaborators discovered mesons that contained a fifth quark: the bottom quark. Scientists assumed the bottom quark should have a partner, called the top quark, and so the hunt for this particle was on. This hunt finally ended in 1995, when evidence of the top quark was detected at the Fermi National Accelerator Laboratory in Batavia, Illinois. While the existence of the top quark was no surprise, its mass was. The top quark is over 40 times heavier than the bottom quark and about 185 times heavier than the proton, which contains three first-generation quarks (two up quarks and one down quark).
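The mass comparisons can be checked with approximate masses in GeV/c² (the numeric values here are standard approximations, not from the text):

```python
# Rough mass comparison using approximate masses in GeV/c^2:
# top quark ~174, bottom quark ~4.2, proton ~0.94.
TOP, BOTTOM, PROTON = 174.0, 4.2, 0.94

print(f"top/bottom ~ {TOP / BOTTOM:.0f}")  # over 40
print(f"top/proton ~ {TOP / PROTON:.0f}")  # roughly 185
```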

Throughout the 1960s physicists worked on a comprehensive theory to explain why different types of elementary particles exist and why they behave the way they do. Building on the work of Fermi, Dirac, Yukawa, Gell-Mann, and numerous others, three scientists developed what is now called the standard model of particle physics. American physicist Steven Weinberg and Pakistani physicist Abdus Salam extended the earlier work of American physicist Sheldon Glashow and unified the electromagnetic and weak forces in 1967. These three men shared the 1979 Nobel Prize in physics for their highly successful theory. When these scientists developed the standard model, the physics community had not yet discovered the charm quark and did not know of the third generation of particles. The theory, however, predicted the charm quark and worked well with the addition of a third generation.

One of the key predictions of the standard model was the existence of particles carrying the weak force. In 1983 Italian physicist Carlo Rubbia and his colleagues discovered the W and Z bosons. Rubbia and Dutch physicist Simon van der Meer shared the 1984 Nobel Prize in physics for their work on the discovery of the W and Z bosons.

Particle physics is not finished yet. Most of the predictions of the standard model have been verified, but physicists still seek evidence of physics beyond the standard model. They look for new particles both on Earth and throughout the cosmos. They work on theories that would explain why particles have the masses scientists have observed. In particular, they want to understand why the top quark is so much heavier than the other particles and why the second and third generations of particles exist at all. They look for connections between the four forces in the universe and continue their quest for a theory of everything.

Gauge theory, in principle, can be applied to any force field, holding out the possibility that all the interactions, or forces, can be brought together into a single unified field theory. Such efforts inevitably involve the concept of symmetry. Generalized symmetries extend to particle interchanges that vary from point to point in space and time. The difficulty for physicists is that such symmetries, while mathematically elegant, do not by themselves extend scientific understanding of the underlying nature of matter. For this reason, many physicists are exploring the possibilities of so-called supersymmetry theories, which would directly relate fermions and bosons to one another by postulating further particle ‘twins’ to those now known, differing only in spin. Doubts have been expressed about such efforts, but another approach, known as ‘superstring’ theory, is attracting a good deal of interest. In such theories, fundamental particles are considered not as dimensionless objects but as ‘strings’ that extend one-dimensionally to lengths of no more than 10⁻³⁵ metres. Such theories solve a number of problems for the physicists who are working on unified field theories, but they are still only highly theoretical constructs.

Louis de Broglie (1892-1987), French physicist and Nobel laureate, who made major contributions to the theory of quantum mechanics with his studies of electromagnetic radiation. De Broglie was born in Dieppe and educated at the University of Paris. He tried to rationalize the dual nature of matter and energy, both of which he found to be composed of corpuscles and moving in waves. For his 1923 theory describing the wave nature of electrons, he was awarded the 1929 Nobel Prize in physics. He was elected to the Academy of Sciences (1933) and to the French Academy (1943). He was named professor of theoretical physics at the University of Paris (1928), permanent secretary of the Academy of Sciences (1942), and adviser to the French Atomic Energy Commission (1945). Several of his books have been translated into English, including Matter and Light (1939), Revolution in Physics (1953), Current Interpretation of Wave Mechanics (1964), and Quantum, Space, and Time (1984).

Wave Motion, in physics, mechanism by which energy is conveyed from one place to another in mechanically propagated waves without the transference of matter. At any point along the path of transmission a periodic displacement, or oscillation, occurs about a neutral position. The oscillation may be of air molecules, as in the case of sound travelling through the atmosphere; of water molecules, as in waves occurring on the surface of the ocean; or of portions of a rope or a wire spring. In each of these cases the particles of matter oscillate about their own equilibrium position and only the energy moves continuously in one direction. Such waves are called mechanical because the energy is transmitted through a material medium, without a mass movement of the medium itself. The only form of wave motion that requires no material medium for transmission is the electromagnetic wave; in this case the displacement is of electric and magnetic fields of force in space.

Waves, such as water or sound waves, are a periodic disturbance of the medium through which they travel. For longitudinal waves, the medium is displaced in the direction of travel. For example, air is compressed and expanded (figure 1) in the same direction that a sound wave travels. For transverse waves, the medium is displaced perpendicular to the direction of travel. Ripples on the surface of a pond are an example of a transverse wave: the water is displaced vertically (figure 2), while the wave itself travels horizontally. Earthquakes generate both P (compression, or longitudinal) and S (shear, or transverse) waves, which travel at different speeds and follow different paths. These differences allow seismologists to estimate where the earthquake occurred. Atomic particles and light can be described by probability waves that, while fundamentally different, behave much like the ripples on a pond.

Waves are divided into types according to the direction of the displacements in relation to the direction of the motion of the wave itself. If the vibration is parallel to the direction of motion, the wave is known as a longitudinal wave. The longitudinal wave is always mechanical because it results from successive compressions (state of maximum density and pressure) and rarefactions (state of minimum density and pressure) of the medium. Sound waves typify this form of wave motion. Another type of wave is the transverse wave, in which the vibrations are at right angles to the direction of motion. A transverse wave may be mechanical, such as the wave projected in a taut string that is subjected to a transverse vibration or it may be electromagnetic, such as light, X ray, or radio waves. Some mechanical wave motions, such as waves on the surface of a liquid, are combinations of both longitudinal and transverse motions, resulting in the circular motion of liquid particles.

For a transverse wave, the wavelength is the distance between two successive crests or troughs. For longitudinal waves, it is the distance from compression to compression or rarefaction to rarefaction. The frequency of the wave is the number of vibrations per second. The velocity of the wave, which is the speed at which it advances, is equal to the wavelength times the frequency. The maximum displacement involved in the vibration is called the amplitude of the wave.
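The relation stated above, velocity = wavelength × frequency, in a short sketch (the 440 Hz tone and the speed of sound in air are illustrative values, not from the text):

```python
# Velocity = wavelength * frequency, the basic relation for any wave.
def wave_velocity(wavelength_m, frequency_hz):
    """Speed at which a wave advances, in m/s."""
    return wavelength_m * frequency_hz

# A concert-pitch A (440 Hz) in air, where sound travels at roughly 343 m/s,
# has a wavelength of about 0.78 m:
print(wave_velocity(0.78, 440))  # ~343 m/s
```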

This interference pattern was formed by two rods moving rhythmically up and down in a ripple tank. A ripple tank consists of a clear tray of water, an overhead light, and devices to make wave patterns. You can observe a similar pattern by dipping two fingers up and down in a puddle of water or by watching two ducks swim near each other in a lake or pond. The waves from one point source (a rod, finger, duck) interfere with waves from the other point source (another rod, finger, or duck). If two crests arrive at a point together, they superimpose to form a very high crest; if two troughs arrive together, they superimpose to form a very low trough. This is called constructive interference. The bright and dark rings are regions of constructive interference. If a crest from one source arrives at a point at the same instant as a trough from the other source, they cancel each other. This is called destructive interference. The radiating dark rays are regions of destructive interference.

The velocity of a wave motion in matter depends on the elasticity and density of the medium. In a transverse wave on a taut string, for example, the velocity depends on the tension of the string and its mass per unit length. The velocity can be doubled by quadrupling the tension, or it can be reduced to one-half by quadrupling the mass of the string. The motion of electromagnetic waves through space is constant at about 300,000 km/sec (about 186,000 mi/sec), or the speed of light. This velocity varies slightly in passage through matter.
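The dependence described above follows from the standard result v = √(tension / mass per unit length) for a taut string; a quick sketch verifying the doubling and halving claims (the numeric values are arbitrary illustrations):

```python
# v = sqrt(tension / mass_per_length) for a transverse wave on a taut string:
# quadrupling the tension doubles the speed; quadrupling the mass halves it.
import math

def string_wave_speed(tension_n, mass_per_length_kg_m):
    """Wave speed on a taut string, in m/s."""
    return math.sqrt(tension_n / mass_per_length_kg_m)

v = string_wave_speed(100.0, 0.01)
assert abs(string_wave_speed(400.0, 0.01) - 2 * v) < 1e-9  # 4x tension -> 2x speed
assert abs(string_wave_speed(100.0, 0.04) - v / 2) < 1e-9  # 4x mass    -> speed/2
print(f"{v:.1f} m/s")  # -> 100.0 m/s
```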

When two waves meet at a point, the resulting displacement of that point will be the sum of the displacements produced by each of the waves. If the displacements are in the same direction, the two waves reinforce each other; if the displacements are in the opposite direction, the waves counteract each other. This phenomenon is known as interference.

When two pulses travelling on a string meet each other, the amplitudes of the pulses are added together to produce the shape of the resulting pulse. If the pulses are identical but travel on opposite sides of the string, then the sum of the amplitudes is zero and the string will appear flat for one instant (A). This is called destructive interference. When two identical pulses travel on the same side of the string, the sum of the amplitudes is double the amplitude of a single pulse when the pulses are together (B). This is called constructive interference.
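The rule of adding amplitudes can be sketched with two overlapping pulses (the Gaussian pulse shape is a hypothetical choice for illustration):

```python
# Superposition of two pulses on a string: the displacement at each point is
# simply the sum of the individual displacements.
import math

def pulse(x, centre, amplitude):
    """A Gaussian pulse of unit width centred at `centre` (illustrative shape)."""
    return amplitude * math.exp(-(x - centre) ** 2)

x = 0.0
same_side = pulse(x, 0.0, 1.0) + pulse(x, 0.0, 1.0)   # constructive: amplitudes add
opposite = pulse(x, 0.0, 1.0) + pulse(x, 0.0, -1.0)   # destructive: amplitudes cancel
print(same_side, opposite)  # -> 2.0 0.0
```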

When two waves of equal wavelength and amplitude travel in opposite directions at the same velocity through a medium, stationary, or standing, waves are formed. For example, if one end of a rope is tied to a wall and the other end is shaken up and down, waves will be reflected back along the rope from the wall. Assuming that the reflection is perfectly efficient, the reflected wave will be half a wavelength behind the initiating wave. Interference will take place, and the resultant displacement at any given point and time will be the sum of the individual displacements. No motion will take place at points where the crest of the incident wave meets the trough of the reflected one. Such points are called nodes. Halfway between the nodes, the waves meet in the same phase; that is, crest will coincide with crest and trough with trough. At these points the amplitude of the resultant wave is twice as great as that of the incident wave. Thus, the nodes divide the rope into sections half a wavelength long; the nodes do not progress along the rope, while the rope between them vibrates transversely.
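The standing-wave pattern follows from adding two identical counter-propagating sine waves, since sin(kx − ωt) + sin(kx + ωt) = 2 sin(kx) cos(ωt). A small sketch (the wavelength and period are chosen arbitrarily) showing that the nodes sit half a wavelength apart and stay at rest:

```python
# A standing wave as the sum of two identical waves travelling in opposite
# directions. The factor sin(kx) vanishes every half wavelength: those fixed
# points are the nodes.
import math

def standing_wave(x, t, wavelength=2.0, period=1.0):
    k = 2 * math.pi / wavelength   # wave number
    w = 2 * math.pi / period       # angular frequency
    return math.sin(k * x - w * t) + math.sin(k * x + w * t)

# Nodes sit half a wavelength (here 1.0) apart, at x = 0, 1, 2, ...,
# and remain at rest at every time t:
for node in (0.0, 1.0, 2.0):
    assert abs(standing_wave(node, 0.37)) < 1e-9

# Halfway between nodes the amplitude is doubled:
print(standing_wave(0.5, 0.0))  # -> 2.0 (antinode at t = 0)
```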

A wave pulse on a string is generated by a quick movement of a hand and travels down the string toward the left (A). If the end of the string is free to move up and down at the wall, the pulse will come back down the string on the same side (C1). If the string is tied to the wall, the pulse will travel back along the string on the opposite side (C2). For the free end, the pulse will have twice the original amplitude at the turnaround point (B1); for the fixed end, the pulse will have no amplitude at the turnaround point (B2).

Stationary waves are present in the vibrating strings of musical instruments. A violin string, for instance, when bowed or plucked, vibrates as a whole, with nodes at the ends, and also vibrates in halves, with a node at the centre, in thirds, with two equally spaced nodes, and in various other fractions, all simultaneously. The vibration as a whole produces the fundamental tone, and the other vibrations produce the various harmonics.

In quantum mechanics, or quantum theory, the structure of the atom is explained by analogy to a system of standing waves. Much of the development of modern physics is based on the elaboration of the theory of waves and wave motion.

Unified Field Theory, in physics, a theory that proposes to unify the four known interactions, or forces—the strong, electromagnetic, weak, and gravitational forces—by a simple set of general laws. Four distinct forces are known to control all the observed interactions in matter: gravitation, electromagnetism, the strong force (a short-range force that holds atomic nuclei together), and the weak force (the force responsible for slow nuclear processes, such as beta decay). The attempts to develop a unified field theory are grounded in the belief that all physical phenomena should ultimately be explainable by some underlying unity.

One of the first to attempt the development of such a theory was Albert Einstein, whose work in relativity had led him to the hypothesis that it should be possible to find a unifying theory for the electromagnetic and gravitational forces. Einstein tried unsuccessfully during the last 30 years of his life to develop a theory that would represent forces and material particles by fields only, in which particles would be regions of very high field intensity. The development of quantum theory, which Einstein rejected, and the discovery of many new particles, however, precluded Einstein's success in formulating a unifying theory based on relativity and classical physics alone.

An important advance in this quest was made in 1967-68 by the American physicist Steven Weinberg and the Pakistani physicist Abdus Salam. They succeeded in unifying the weak interaction and the electromagnetic interaction by using a mathematical technique known as gauge symmetry. According to this theory the electromagnetic interaction consists of the exchange of a photon, and the weak interaction of the exchange of W and Z intermediate bosons. These bosons are believed to belong to the same family of particles as the photons. Theoretical physicists are currently attempting to combine this so-called electroweak theory with the strong nuclear force, using symmetry theories; such attempts are known as grand unification theories, or GUTs. The effort also continues to combine all four fundamental interactions, including gravitation, in what are now known as supersymmetry theories. Thus far, however, such attempts have not succeeded, although they are proving useful in current work in cosmology.

Kepler’s Laws, three laws concerning the motions of planets formulated by the German astronomer Johannes Kepler early in the 17th century. Kepler based his laws on planetary data collected by the Danish astronomer Tycho Brahe, to whom he was an assistant. The proposals broke with a centuries-old belief, based on the Ptolemaic system advanced by the Alexandrian astronomer Ptolemy in the 2nd century AD and the Copernican system put forward by the Polish astronomer Nicolaus Copernicus in the 16th century, that the planets moved in circular orbits. According to Kepler’s first law, the planets orbit the sun in elliptical paths, with the sun at one focus of the ellipse. The second law states that the areas swept out in a planetary orbit by the straight line joining the centre of the planet and the centre of the sun are equal for equal time intervals; that is, the closer a planet comes to the sun, the more rapidly it moves. Kepler’s third law states that the ratio of the cube of a planet’s mean distance, d, from the sun to the square of its orbital period, t, is a constant – that is, d³/t² is the same for all planets.
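The third law can be checked numerically: with distances in astronomical units and periods in years, the constant d³/t² equals 1 for every planet. A sketch using well-known approximate orbital values:

```python
# Kepler's third law checked with approximate orbital data:
# mean distance in astronomical units, orbital period in years.
planets = {
    "Mercury": (0.387, 0.241),
    "Venus":   (0.723, 0.615),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
}

for name, (d, t) in planets.items():
    ratio = d ** 3 / t ** 2
    print(f"{name:8s} d^3/t^2 = {ratio:.3f}")  # each close to 1.000
```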

These laws played an important part in the work of the 17th-century English astronomer, mathematician, and physicist Sir Isaac Newton, and are important for the understanding of the orbital paths of the moon, the natural satellite of the earth, and the paths of the artificial satellites launched from the earth.

Background Radiation, long wavelength electromagnetic radiation that hits Earth uniformly from all directions. Background radiation represents energy left over from the ‘big bang,’ the explosion at the beginning of the universe. Electromagnetic radiation is energy that moves in oscillating waves at the speed of light, and it includes light, radio waves, and microwaves. Background radiation is most intense at microwave wavelengths. The microwave part of the electromagnetic spectrum is equivalent to the shortest wavelength radio waves. Microwaves have considerably longer wavelengths than visible light. In addition to microwave background radiation, radio and infrared (shorter than microwave) background radiation also exist.

The Wilkinson Microwave Anisotropy Probe (WMAP) produced this image, which shows variations in cosmic microwave radiation. The coloured spots correspond to fluctuations in the density of matter and energy in the early universe, about 380,000 years after the big bang. Cosmologists believe that as the universe expanded and cooled, these fluctuations seeded the formation of galaxies.

The big bang theory of the beginning of the universe holds that the universe was extremely hot and dense in its first moments and has been expanding and cooling ever since. Models of the early universe and its evolution predict that some of the radiation caused by the extremely high temperature of the early universe will still be present, but that it will exist at a much lower temperature because the universe has expanded so much.

Even when all Earthly and astronomical sources of radio waves are screened out, some static remains on the most sensitive radios. This static is caused by radiation left over from the big bang, the explosion that created the universe.

American physicist and radio astronomer Robert W. Wilson shared the 1978 Nobel Prize in physics. While attempting to measure the intensity of radiation from a single, specific point in the sky, he discovered cosmic microwave background radiation.

Scientists can measure the intensity of the background radiation at infrared, microwave, and radio wavelengths to determine how the intensity of the radiation relates to its wavelength. Planck’s law, developed in the early 1900s by German physicist Max Planck, predicts the curve of intensity versus wavelength for the radiation of an object of a given temperature. The curve that results from measurement of the background radiation matches exactly the curve predicted for a body radiating energy at a little less than 3 K (a little less than –270°C, or about –450°F). The background radiation is almost completely isotropic—that is, the same in all directions. The evenness of the radiation and its striking conformity to a Planck curve make it a compelling support to the big bang theory.
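One consequence of Planck’s law is Wien’s displacement law, which locates the wavelength at which a blackbody’s spectrum peaks: λ_max = b/T. For a temperature near 2.7 K the peak falls at about a millimetre, in the microwave band, consistent with where the background radiation is most intense. A quick sketch (the Wien constant is a standard approximate value, not from the text):

```python
# Wien's displacement law (a consequence of Planck's law): the wavelength at
# which a blackbody's radiation peaks is lambda_max = b / T.
WIEN_B = 2.898e-3  # Wien displacement constant, m*K (standard approximate value)

def peak_wavelength_m(temperature_k):
    """Wavelength of maximum blackbody emission, in metres."""
    return WIEN_B / temperature_k

peak = peak_wavelength_m(2.725)
print(f"{peak * 1000:.2f} mm")  # ~1.06 mm, a microwave wavelength
```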

Russian-born American physicist George Gamow developed the theory that the universe began as a hot explosion of matter and energy, or a ‘big bang.’ Gamow also made important contributions to other fields of physics, including radioactivity and nuclear physics.

Russian-born American cosmologist George Gamow and his students Ralph Alpher and Robert Herman predicted the existence of background radiation in the late 1940s and early 1950s, more than a decade before the radiation was first detected. Gamow, Alpher, and Herman were working on the theory that would become known as the big bang theory. They predicted that, as the universe evolved from its hot, dense early stage, it would cool as it expanded. They believed that the universe should still be cooling, which meant that some of the initial energy should still exist. Their prediction of the temperature was uncertain, though, and their work was first ignored and then forgotten until the detection of the background radiation in the 1960s.

German-born American physicist Arno Penzias and American physicist Robert Woodrow Wilson accidentally discovered the background radiation in 1964. They were working at what was then the Bell Telephone Laboratories (now Bell Laboratories, part of Lucent Technologies) and were using a type of large radio telescope known as a horn antenna. Penzias and Wilson were trying to improve the quality of radio-transmitted telephone signals by eliminating all external radio noise. When they eliminated all the known sources of radio waves, they were still left with a residue that was surprisingly isotropic. The two physicists tried measuring the signal at different times of day and on different days, but the signal did not vary.

American astronomer Bernard Burke heard of the work of Penzias and Wilson and introduced them to a team of Princeton University scientists who had been investigating the theoretical consequences of the big bang. American physicists Robert Dicke, James Peebles, Peter Roll, and David Wilkinson had calculated that there should indeed be isotropic radiation left over from the big bang. They were in the process of building apparatus to try to measure such radiation at a wavelength somewhat different from that at which Penzias and Wilson detected the radiation.

The two groups published papers simultaneously, with Penzias and Wilson publishing the observational results and the Princeton group publishing the theoretical results. The scientific community soon accepted that the discovery represented the cosmic background radiation, also known as the primordial background radiation.

Understanding background radiation is related to understanding the origin and evolution of the universe. Scientists can trace the life of the universe back to when the universe was only 1 × 10⁻⁴³ second old (an extremely small number: a decimal point followed by 42 zeroes and then a 1). To understand the very beginning of the universe, physicists would need a yet-undeveloped theory that combines the theory of gravity with quantum mechanics, sometimes called a Theory of Everything. Although scientists do not know what happened at the very beginning of the universe, they believe that the universe was at first extremely hot and dense and has been expanding ever since that time.

For hundreds of thousands of years, the universe was opaque—that is, light could not travel very far through it. The universe was a hot sea of mostly unconnected subatomic particles such as protons and electrons. These particles were too hot and energetic to come together and form atoms and molecules. Light in the form of particles of energy called photons hit electrons and scattered off them before the light could get very far. The scattering of photons off electrons in that early universe made the universe act as a blackbody, a radiating object that gives off radiation that follows Planck’s formula perfectly. The universe continued to expand and cool so that its Planck curve shifted to longer and longer wavelengths (corresponding to cooler and cooler radiation).

Electrons began to attach to protons to form hydrogen atoms about 300,000 years after the big bang. By about one million years after the big bang, the universe cooled to about 3000 K and hydrogen atom formation was complete. Hydrogen atoms—in contrast with the free protons and electrons—absorb photons only at specific wavelengths, so photons of other wavelengths could travel farther when hydrogen atoms formed. The universe changed from being opaque to being largely transparent as the atoms allowed radiation in the form of photons to travel freely through space. The background radiation that scientists detect today is the radiation that was released at that time. This radiation has been traveling through space ever since, with its Planck curve corresponding to cooler and cooler temperatures over time.
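
As a rough check on the cooling described above, one can use the standard cosmological result (not stated in the article) that the background radiation's temperature falls as 1/(1 + z), where z is the redshift. Plugging in the article's figures of roughly 3,000 K at hydrogen formation and 2.728 K today gives, in a short Python sketch:

```python
# Back-of-the-envelope sketch using figures from the text; the scaling
# T proportional to 1/(1 + z) is a standard cosmology result, assumed here.

T_RECOMBINATION = 3000.0   # K, temperature when hydrogen atom formation completed
T_TODAY = 2.728            # K, present-day background radiation temperature

# Since T scales as 1/(1 + z), the ratio of temperatures gives 1 + z
z = T_RECOMBINATION / T_TODAY - 1
print(f"redshift of the background radiation: z ~ {z:.0f}")

# Wavelengths of the released radiation are stretched by the same factor
stretch = 1 + z
print(f"wavelengths stretched by a factor of ~ {stretch:.0f}")
```

The factor of roughly 1,100 is also the factor by which the radiation's wavelengths have been stretched since it was released.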

The United States National Aeronautics and Space Administration (NASA) launched the Cosmic Background Explorer (COBE) satellite in 1989. COBE provided astronomers with most of the information they have about background radiation. One of COBE’s instruments measured the strength of the cosmic background radiation in each wavelength range. Its measurement produced a curve that precisely matches a Planck curve, which most scientists have accepted as proof that the radiation is indeed from the beginning of the universe. The Planck curve that the background radiation follows corresponds to a temperature of 2.728 K, with an uncertainty (including possible systematic errors) of 0.004 K.
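
As a sanity check that radiation at 2.728 K peaks in the microwave region, one can apply Wien's displacement law, which says a blackbody's Planck curve peaks at wavelength b / T. The law and the constant b are standard physics, not taken from the article:

```python
# Sketch using Wien's displacement law (lambda_peak = b / T) to locate the
# peak of a 2.728 K Planck curve; b is a standard physical constant.

WIEN_B = 2.8978e-3  # m*K, Wien displacement constant

def peak_wavelength_m(temperature_k):
    """Wavelength (meters) at which a blackbody at this temperature peaks."""
    return WIEN_B / temperature_k

peak = peak_wavelength_m(2.728)
print(f"peak wavelength: {peak * 1000:.2f} mm")  # falls in the microwave band
```

A peak near 1 mm lies in the microwave band, which is why the cosmic background radiation is observed with microwave instruments.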

Another COBE instrument measured the evenness of the radiation across the sky. This instrument found variations of 30 microkelvins (0.00003 K) from point to point. These apparent ripples in the background radiation probably date back to the beginning of the universe and may have been caused by tiny deviations from homogeneity, or uniformity. This lack of uniformity may have been the seed from which the current structures of the universe—galaxies, clusters of galaxies, and other large formations—grew.

A third COBE instrument studied the distribution of infrared radiation. In 1998 scientists announced that detailed study of the data revealed an almost uniform infrared background that could be detected when all specific sources of infrared radiation were subtracted. Also in 1998, a group of European astronomers discovered distant, young galaxies that contribute most of the infrared background radiation. These galaxies are so distant that the radiation reaching Earth from them was emitted in the early universe. The fact that the infrared background radiation comes from young galaxies of the early universe helps astronomers study how the first galaxies formed. The background radiation that does not come from early galaxies comes from dust heated by stars now dead or hidden by dust. The infrared radiation from hidden stars helps astronomers get a better idea of the number of stars that have ever existed in the universe.

Measurements of how much of the sky an object takes up, or angular measurements, are made in degrees. The COBE equipment that searched for inhomogeneities, or unevenness in the background radiation, had a resolution of 7°, which means that the equipment could not detect features smaller than 7°. (A fist held at arm’s length takes up about 10°.) Scientists are carrying out observations on smaller angular scales from the ground in Antarctica and Canada, and from balloons. These studies are finding ripples of the same strength as the larger-scale ripples, which supports the theory that the ripples are from inhomogeneities in the early universe.

In 2001 NASA launched a spacecraft called the Wilkinson Microwave Anisotropy Probe (WMAP) to build on the information acquired by COBE. WMAP has mapped the background radiation at a much higher resolution than COBE was capable of, confirming the inhomogeneity of the early universe. The European Space Agency (ESA) launched the Planck spacecraft in 2009, in a joint launch with the Herschel infrared telescope. Planck is expected to provide even higher resolution than the WMAP mission and more detailed information on the ripples in the primordial background radiation.

Aleksandr Friedmann (1888–1925), Russian mathematician and cosmologist, who helped develop models that explained the development of the universe. Friedmann built on the equations included in German American physicist Albert Einstein’s general theory of relativity. Friedmann found a solution to Einstein’s equations showing that the universe could be expanding.

Friedmann was born in Saint Petersburg, Russia. He was an excellent scholar, both in high school and at Saint Petersburg State University, where he studied mathematics from 1906 to 1910. In 1914 he was awarded his master's degree in pure and applied mathematics. In 1914 and 1915 during World War I, he served with the Russian air force as a technical expert and as a pilot. During the later part of the war, Friedmann lectured pilots on aerodynamics. In 1916 he became the head of the Central Aeronautical Station in Kyiv and moved with the Central Aeronautical Station to Moscow in 1917. Near the end of that year, the Central Aeronautical Station disbanded after the Russian Revolution of 1917. From 1918 to 1920 Friedmann was a professor of theoretical mechanics at Perm’ University.

In 1919 and 1920 Perm’ became a battleground of the civil war that followed the Russian Revolution of 1917. Friedmann returned to Saint Petersburg (called Petrograd from 1914 to 1924) in 1920 and took up several concurrent appointments. He taught mathematics and mechanics at Petrograd University, was a professor of physics and mathematics at Petrograd Polytechnic Institute, and did research at the Petrograd Institute of Railway Engineering, the Naval Academy, and the Optical Institute. In 1923 and 1924 Friedmann travelled through Europe, discussing his areas of research with other scientists in Germany and Norway. He returned to Saint Petersburg (called Leningrad from 1924 to 1991) just before his death.

While working on his undergraduate and master’s degrees, Friedmann’s research touched on the magnetic field of the earth, the mechanics of liquids, and theoretical meteorology. (Meteorology is the study of the earth’s atmosphere and weather). During his work for the military in World War I, he studied aerodynamics and the trajectories (paths through the air) of bombs.

Albert Einstein published the general theory of relativity in 1915, but the upheaval of World War I and the civil war raging in Russia meant that Friedmann did not learn of it until near the end of the decade. In 1922 Friedmann published a solution to equations in Einstein’s general theory of relativity in the German journal Zeitschrift für Physik (Journal of Physics). Einstein and Dutch mathematician Willem de Sitter had both published solutions earlier that assumed that the universe did not grow or shrink. Friedmann’s solutions revealed that it was possible for the universe to expand over time, and that it was also possible for the universe to undergo periodic expansions and contractions.

Einstein disputed Friedmann’s solutions in a reply also published in Zeitschrift für Physik in 1922. In 1923, however, Einstein reevaluated Friedmann’s solutions and admitted that the solution was correct. Einstein later called his addition of a cosmological constant to his solutions of the equations to produce a stationary universe ‘the greatest mistake of my life.’ Friedmann’s work on a model of an expanding universe helped shape modern cosmology.

Edwin Hubble (1889–1953), American astronomer, who made important contributions to the study of galaxies, the expansion of the universe, and the size of the universe. Hubble was the first to discover that fuzzy patches of light in the sky called spiral nebulae were actually galaxies like Earth’s galaxy, the Milky Way. Hubble also found the first evidence for the expansion of the universe, and his work led to a much better understanding of the universe’s size.

Hubble was born in Marshfield, Missouri. He attended high school in Chicago, Illinois, and received his bachelor’s degree in mathematics and astronomy in 1910. He was awarded a Rhodes Scholarship to study at the University of Oxford in England, where he earned a law degree in 1912. He returned to the United States in 1913 and settled in Kentucky, where his family had moved. From 1913 to 1914 Hubble practiced law and taught high school in Kentucky and Indiana. In 1914 he moved to Wisconsin to take a research post at the University of Chicago’s Yerkes Observatory.

In 1917 Hubble earned his Ph.D. degree in astronomy from the University of Chicago and received an invitation from American astronomer George Hale to work at Mount Wilson Observatory in California. Around the same time that Hubble received the invitation, the United States declared war on Germany, marking the beginning of official U.S. military involvement in World War I (1914-1918). Hubble volunteered to serve in the U.S. Army, rushing to finish his dissertation and reporting for duty just three days after passing his oral Ph.D. exam. He was sent to France at first and remained on active duty in Germany until 1919. He left the Army with the rank of major.

In 1919 Hubble finally accepted the offer from Mount Wilson Observatory, where the 100-in (2.5-m) Hooker telescope was located. The Hooker telescope was the largest telescope in the world until 1948. Hubble worked at Mount Wilson for the rest of his career, and it was there that he carried out his most important work. His research was interrupted by the outbreak of World War II (1939-1945); during the war he served as a ballistics expert for the U.S. Department of War.

While Hubble was working at the Yerkes Observatory, he made a careful study of cloudy patches in the sky called nebulas. Now, astronomers apply the term nebula to clouds of dust and gas within galaxies. At the time that Hubble began studying nebulas, astronomers had not been able to differentiate between nebulas and distant galaxies, which also appear as cloudy patches in the sky.

Hubble was especially interested in two nebulas called the Large Magellanic Cloud and the Small Magellanic Cloud. In 1912 American astronomer Henrietta Leavitt had used the brightness of a certain type of star in the Magellanic Clouds to measure their distance from Earth. She used Cepheid stars, yellow stars that vary regularly in brightness. The longer the time a Cepheid star takes to go through a complete cycle, the higher its average brightness, or average absolute magnitude. By comparing the brightness of the star as seen from Earth with the star’s actual brightness (estimated from the length of the star’s cycle), Leavitt could determine the distance from Earth to the nebula. She and other scientists showed that the Magellanic Clouds were beyond the boundaries of the Milky Way Galaxy.

After World War I, with the Hooker telescope at his disposal, Hubble was able to make significant advances in his studies of nebulas. He focused on nebulas thought to be outside of the Milky Way, searching for Cepheid stars within them. In 1923 he discovered a Cepheid star in the Andromeda nebula, now known as the Great Andromeda Spiral Galaxy. Within a year he had detected 12 Cepheid stars within the Andromeda Galaxy. Using these variable stars, he determined that the Andromeda nebula was about 900,000 light-years away from Earth. (A light-year is the distance light can travel in one year, a measurement equal to 9.46 trillion km or 5.88 trillion mi). The diameter of the Milky Way is about 100,000 light-years, so Hubble’s measurements showed that the Andromeda nebula was far outside the boundaries of Earth’s galaxy.

Hubble discovered many other nebulas that contained stars and were located outside of the Milky Way. He found that they contained objects similar to those within the Milky Way Galaxy. These objects included round, compact groups of stars called globular clusters and stars called novas that flare suddenly in brightness. In 1924 he finally proposed that these nebulas were in fact other galaxies like our own, a theory that became known as the island universe. From 1925 he studied the structures of these external galaxies and classified them according to their shape and composition into regular and irregular forms. The regular galaxies, 97 percent of the total, had elliptical or spiral shapes. Hubble further divided the spiral galaxies into normal spiral galaxies and barred spiral galaxies. Normal spiral galaxies have arms that come out from a central, circular core and spiral around the core and each other. The arms of barred spiral galaxies come out from an elongated, bar-shaped nucleus. There are no distinct boundaries between the types of galaxies—some galaxies have the characteristics of both spiral and elliptical galaxies, and some spiral galaxies could be classified as either normal or barred. Irregular galaxies—galaxies that seem to have no regular shape or internal structure—made up only 3 percent of the galaxies that Hubble found.

Hubble began to measure the distance from Earth to the galaxies that he classified, using information provided by Cepheid stars within the galaxies. He compared these distance measurements to measurements of the galaxies’ movement with respect to Earth. Several astronomers, in particular American astronomer Vesto Slipher, had studied the speed of the galaxies in the 1910s and 1920s, before Hubble identified them as galaxies. These astronomers measured a galaxy’s speed by measuring its redshift, a shift in the wavelength of the radiation the galaxy emits. Radiation appears shifted in wavelength whenever its source is moving with respect to the observer, an effect called the Doppler effect. If the object is moving away from the observer, each wave leaves from slightly farther away than the wave before it, increasing the distance between the waves; if the object is moving toward the observer, the wavelength appears shorter. When the radiation emitted by the object is visible light, a lengthening in wavelength corresponds to a reddening of the light. Therefore, the light of astronomical objects moving away from the observer is said to be red-shifted. Slipher and the other astronomers found that all of the galaxies were moving away from Earth. Hubble also did his own redshift measurements.
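
A hypothetical numeric example of the redshift measurement described above; the wavelengths below are invented for illustration, and the conversion from redshift to velocity (v ≈ cz) is a standard approximation valid only for speeds well below the speed of light:

```python
# Illustrative redshift calculation: compare an observed spectral-line
# wavelength with its laboratory (rest) wavelength to get the redshift z,
# then estimate a recession velocity. All wavelengths here are made up.

C_KM_S = 299_792.458  # speed of light, km/s

def redshift(observed_nm, rest_nm):
    """Fractional shift in wavelength: z = (observed - rest) / rest."""
    return (observed_nm - rest_nm) / rest_nm

def recession_velocity_km_s(z):
    """Non-relativistic approximation v ~ c*z, adequate for z << 1."""
    return C_KM_S * z

# A line emitted at 500.0 nm but observed at 505.0 nm (hypothetical numbers)
z = redshift(505.0, 500.0)
print(f"z = {z:.3f}, v ~ {recession_velocity_km_s(z):.0f} km/s")
```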

In 1929 Hubble compared the distances of the galaxies to the speed at which they were moving away from Earth, and he found a direct and very consistent correlation: The farther a galaxy was from Earth, the faster it was receding. This relationship was so consistent throughout the 46 galaxies that Hubble initially studied, as well as in virtually all of the galaxies studied later by Hubble and other scientists, that it is known as Hubble’s Law. Hubble concluded that the relationship between velocity and distance must mean that the universe is expanding. In 1927 Belgian scientist Georges Lemaître had developed a model of the universe that incorporated the general theory of relativity of German American physicist Albert Einstein. Lemaître’s model showed an expanding universe, but Hubble’s measurements were the first real evidence of this expansion.

The constant relating the velocity of galaxies to their distance is called the Hubble constant. If astronomers knew the precise value of the Hubble constant, they could determine both the age of the universe and the radius of the observable universe. Many teams of scientists have attempted to measure the value since Hubble proposed his law. In 1999 a group of scientists measured the Hubble constant to be 70 kilometers per second per megaparsec, with an uncertainty of 10 percent—the most precise measurement to date. This result means that a galaxy appears to be moving 260,000 km/h (160,000 mph) faster for every 3.3 million light-years that it is away from Earth. The universe may be infinitely large, but if recession speeds really do grow with distance in this way, then at some distance objects will be receding at the speed of light. That distance marks the limit of the observable universe, because light from an object receding at the speed of light could never reach Earth. The radius of the observable universe is called the Hubble radius.
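
A rough Python sketch of how the 70 km/s per megaparsec figure translates into an age estimate (the "Hubble time," 1/H0) and a Hubble radius (c/H0). The unit conversions are standard values, not taken from the text, and the age estimate ignores changes in the expansion rate over time:

```python
# Rough estimates from the Hubble constant quoted in the text.
# Assumes constant expansion; real cosmological models refine both numbers.

H0 = 70.0                  # Hubble constant, km/s per megaparsec (from the text)
KM_PER_MPC = 3.0857e19     # kilometers in one megaparsec (standard value)
SECONDS_PER_YEAR = 3.156e7
C_KM_S = 299_792.458       # speed of light, km/s

# Hubble time ~ 1/H0: the time for galaxies to reach their current
# separations at their current speeds, a rough age for the universe
hubble_time_s = KM_PER_MPC / H0
hubble_time_gyr = hubble_time_s / SECONDS_PER_YEAR / 1e9
print(f"Hubble time ~ {hubble_time_gyr:.1f} billion years")

# Hubble radius ~ c/H0: distance at which recession reaches light speed
hubble_radius_mpc = C_KM_S / H0
print(f"Hubble radius ~ {hubble_radius_mpc:.0f} Mpc")
```

With H0 = 70, the Hubble time comes out near 14 billion years, close to modern age estimates for the universe.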

During the 1930s, Hubble studied the distribution of galaxies. His results showed that galaxies are scattered evenly across the sky. He explained that there seemed to be fewer galaxies in the area of the sky that corresponds to the plane of the Milky Way because large amounts of dust block light from external galaxies.

Hubble was an active researcher until his death. He was involved in building the 200-in (508-cm) Hale telescope at the Mount Palomar Observatory, also in southern California. The Hale telescope was the largest telescope in the world from when it went into operation in 1948 until the Keck telescope at the Mauna Kea Observatory in Hawaii was completed in 1990. The Hubble Space Telescope (HST), a powerful telescope launched in 1990 and carried aboard a satellite in orbit around Earth, was named after Hubble and has helped scientists make many important observations.

The steady-state theory is a theory of cosmology, or the study of the universe and its origins, that was once a rival to the big bang theory, which proposes that the universe began in a giant explosion. The steady-state theory holds that the universe looks, on the whole, the same at all times and places. The Austrian-British astronomer Hermann Bondi and the Austrian-American astronomer Thomas Gold formulated the theory in 1948. The British astronomer Fred Hoyle soon published a different version of the theory based on his mathematical understanding of the problem. Most astronomers believe that astronomical observations contradict the predictions of the steady-state theory and uphold the big bang theory.

Both the big bang theory and the steady-state theory are based on what Bondi called the ‘cosmological principle.’ This principle states that on a large scale, the universe is homogeneous, meaning the universe looks about the same at every point, and isotropic, meaning the universe looks the same in every direction. Homogeneity and isotropy are not the same—for example, a universe that grows denser with distance from the observer would still look isotropic even though it is not homogeneous.

Bondi enlarged the cosmological principle into the ‘perfect cosmological principle,’ in which the universe also looks the same, on a large scale, throughout time. The perfect cosmological principle forms the basis of the steady-state theory. American astronomer Edwin Hubble discovered in the 1920s that the universe is expanding. The supporters of the steady-state theory had to work out a mechanism that would allow the universe to look the same at different times even though it is expanding. The steady-state theory therefore says that new matter must form continuously as existing matter spreads apart. This requirement seems to violate the law of conservation of mass and energy, which says that the sum of mass and energy in the universe must remain constant. However, the amount of matter that would have to form under the steady-state theory is too small for scientists to measure, so the prediction cannot be checked directly; nor do scientists know whether the conservation law holds to this accuracy.

The steady-state theory’s main advantage is that it avoids the problem of having to describe how the universe began. The theory’s appeal is largely philosophical—there have never been particular observations that implied it was better or more valid than the big-bang theory.

However, the steady-state theory implies that the universe should look approximately the same now as it did billions of years ago. Because light takes time to travel, when astronomers look into space they see light that was emitted in the past, so they can see back in time. At the time the steady-state theory was developed, astronomers had not found evidence that the universe had changed. Since that time, however, astronomers have discovered that there were more radio galaxies and quasars in the universe in the past than there are now, and that the shape of galaxies has changed over time. Based on this evidence, almost all astronomers now agree that the universe has evolved.

The most convincing evidence against the steady-state theory is the cosmic background radiation, low-level radiation that permeates the universe and has a temperature of about 3 K, or 3 Celsius degrees (about 5 Fahrenheit degrees) above absolute zero. In 1964 the German-born American physicist Arno Penzias and the American physicist Robert Wilson discovered low levels of microwave radiation coming from every part of the sky. The temperature of this radiation matched the temperature of the radiation that should be left over from the big bang at the beginning of the universe, and the radiation was coming equally from all directions. This discovery convinced most scientists that the universe did indeed have a beginning and that the steady-state theory is not the best model of the universe.

Hubble’s Law states that galaxies farther away from Earth are receding from Earth more quickly than nearer galaxies. The dots on this balloon represent galaxies. As the balloon is inflated (representing the universe’s expansion), each dot moves away from all the others. To a person viewing the universe from a galaxy, all other galaxies seem to be receding. The distant galaxies appear to be moving away faster than the near ones, which demonstrates Hubble’s law.

In the late 1920s American astronomer Edwin Hubble discovered that all but the nearest galaxies to us are receding, or moving away from us. Further, he found that the farther away from Earth a galaxy is, the faster it is receding. He made his discovery by taking spectra of galaxies and measuring the amount by which the wavelengths of spectral lines were shifted. He measured distance in a separate way, usually from studies of Cepheid variable stars. Hubble discovered that essentially all the spectra of all the galaxies were shifted toward the red, or had redshifts. The redshifts of galaxies increased with increasing distance from Earth. After Hubble’s work, other astronomers made the connection between redshift and velocity, showing that the farther a galaxy is from Earth, the faster it moves away from Earth. This idea is called Hubble’s law and is the basis for the belief that the universe is fairly uniformly expanding. Other uniformly expanding three-dimensional objects, such as a rising cake with raisins in the batter, also demonstrate the consequence that the more distant objects (such as the other raisins with respect to any given raisin) appear to recede more rapidly than nearer ones. This consequence is the result of the increased amount of material expanding between these more distant objects.
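
The raisin-cake analogy can be sketched in a few lines of Python. The positions and expansion factor below are invented for illustration; the point is that uniform scaling makes each raisin's recession, as seen from any other raisin, proportional to its distance:

```python
# Toy demonstration of uniform expansion: scale every position by the same
# factor and the apparent recession of each raisin, viewed from any other
# raisin, comes out proportional to its distance. Values are arbitrary.

positions = [0.0, 1.0, 2.0, 5.0]   # raisin positions along one axis
factor = 1.1                        # uniform expansion over some time interval

new_positions = [factor * x for x in positions]

# View the expansion from the raisin at index 1
home_old, home_new = positions[1], new_positions[1]
for old, new in zip(positions, new_positions):
    distance = abs(old - home_old)
    receded = abs(new - home_new) - distance  # how much farther it has moved
    print(f"distance {distance:.1f} -> receded by {receded:.2f}")
```

Every raisin recedes by the same fraction (one-tenth) of its distance, so recession speed is proportional to distance, which is exactly the content of Hubble's law.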

According to Einstein’s general theory of relativity, a black hole is an extremely dense celestial body that has been theorized to exist in the universe. The gravitational field of a black hole is so strong that nothing, including electromagnetic radiation, can escape from its vicinity. The body is surrounded by a spherical boundary, called a horizon, through which light can enter but not escape; it therefore appears totally black.

In 1905 German-born American physicist Albert Einstein published his first paper outlining the theory of relativity, which at first received little attention from the scientific community. In 1916 he published his second major paper on relativity, setting out the general theory, which altered fundamental concepts of space and time.

The black-hole concept was developed by the German astronomer Karl Schwarzschild in 1916 on the basis of physicist Albert Einstein’s general theory of relativity. The radius of the horizon of a Schwarzschild black hole depends only on the mass of the body, being 2.95 km (1.83 mi) times the mass of the body in solar units (the mass of the body divided by the mass of the Sun). If a body is electrically charged or rotating, Schwarzschild’s results are modified. An ‘ergosphere’ forms outside the horizon, within which matter is forced to rotate with the black hole; in principle, energy can be emitted from the ergosphere.
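
The 2.95-km-per-solar-mass figure quoted above can be checked with the Schwarzschild radius formula, r = 2GM/c². A minimal sketch (the physical constants below are standard values, not taken from the article):

```python
# Schwarzschild radius r = 2*G*M / c^2, evaluated for solar-scale masses.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # mass of the Sun, kg

def schwarzschild_radius_km(mass_kg):
    """Horizon radius in kilometers for a non-rotating, uncharged body."""
    return 2 * G * mass_kg / C**2 / 1000.0

print(f"1 solar mass   -> {schwarzschild_radius_km(M_SUN):.2f} km")
print(f"10 solar masses -> {schwarzschild_radius_km(10 * M_SUN):.1f} km")
```

The radius scales linearly with mass, which is why the article can state it as a fixed number of kilometers per solar mass.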

According to general relativity, gravitation severely modifies space and time near a black hole. As the horizon is approached from outside, time slows down relative to that of distant observers, stopping completely on the horizon. Once a body has contracted within its Schwarzschild radius, it would theoretically collapse to a singularity—that is, a dimensionless object of infinite density.

Stars begin life as diffuse clouds of dust and gas. These clouds condense to form stars, after which the stars can develop into a variety of objects, depending on how much matter they contain. Stars that contain more matter experience the effects of gravity more strongly and evolve into dense bodies, such as neutron stars or even black holes.

Black holes are thought to form during the course of stellar evolution. As nuclear fuels are exhausted in the core of a star, the pressure associated with their energy production is no longer available to resist contraction of the core to ever-higher densities. Two new types of pressure, electron degeneracy pressure and neutron degeneracy pressure, arise at densities roughly a million and a million billion times that of water, respectively, and a compact white dwarf or a neutron star may form. If the star is more than about five times as massive as the Sun, however, neither electron nor neutron pressure is sufficient to prevent collapse to a black hole.

In 1994 astronomers used the Hubble Space Telescope (HST) to uncover the first convincing evidence that a black hole exists. They detected an accretion disk (disk of hot, gaseous material) circling the center of the galaxy M87 at speeds that indicated the presence of an object 2.5 to 3.5 billion times the mass of the Sun. By 2000, astronomers had detected supermassive black holes in the centers of dozens of galaxies and had found that the masses of the black holes were correlated with the masses of the parent galaxies. More massive galaxies tend to have more massive black holes at their centers. Learning more about galactic black holes will help astronomers learn about the evolution of galaxies and the relationship between galaxies, black holes, and quasars.

Author of the best-selling book A Brief History of Time, physicist Stephen Hawking has strived to make difficult concepts in physics more accessible to the public. His discoveries about gravitation are regarded as some of the most important contributions to that area of physics since Albert Einstein introduced the general theory of relativity in 1915.

The English physicist Stephen Hawking has suggested that many black holes may have formed in the early universe. If this were so, many of these black holes could be too far from other matter to form detectable accretion disks, and they could even compose a significant fraction of the total mass of the universe. For black holes of sufficiently small mass, it is possible for only one member of an electron-positron pair created near the horizon to fall into the black hole, while the other escapes (see Pair Production). The resulting radiation carries off energy, in a sense evaporating the black hole. Any primordial black holes weighing less than a few thousand million metric tons would have already evaporated, but heavier ones may remain.

The American astronomer Kip Thorne of California Institute of Technology in Pasadena, California, has evaluated the chance that black holes can collapse to form ‘wormholes,’ connections between otherwise distant parts of the universe. He concludes that an unknown form of ‘exotic matter’ would be necessary for such wormholes to survive.

Stephen Hawking, born in 1942, British theoretical physicist and mathematician whose main field of research has been the nature of space and time, including irregularities in space and time known as singularities. Hawking has also devoted much of his life to making his theories accessible to the public through lectures, books, and films.

Hawking was born in Oxford, England, and he showed exceptional talent in mathematics and physics from an early age. He entered Oxford University in 1958 and became especially interested in thermodynamics (the study of the interaction of matter and energy), relativity theory, and quantum mechanics. In 1961 he attended a summer course at the Royal Observatory that encouraged these interests. He completed his undergraduate courses in 1962 and received a bachelor’s degree in physics. Hawking then enrolled as a research student in general relativity at the department of applied mathematics and theoretical physics at the University of Cambridge.

Hawking earned his Ph.D. degree from Trinity College at the University of Cambridge in 1966. He stayed at the University of Cambridge, doing post-doctoral research, until he became a professor of physics in 1977. He became one of the youngest fellows of the Royal Society in 1974. In 1979 he was appointed Lucasian Professor of Mathematics at Cambridge.

During his postgraduate program, Hawking was diagnosed with amyotrophic lateral sclerosis (ALS), a rare progressive disease that impairs movement and speech. Because of the disease, Hawking must carry out in his head the long and complex mathematical calculations that his work requires. Despite his illness, he has been able to continue his studies and to embark upon a distinguished and productive scientific career.

From its earliest stages, Hawking’s research has been concerned with the concept of singularities—breakdowns in space and time where the classic laws of physics no longer apply. The combination of time and three-dimensional space is called space-time. The most familiar example of a singularity is a black hole, the final form of a collapsed star. Much of what scientists believe about space-time comes from the theory of relativity, which was developed in the early 20th century by German American physicist Albert Einstein. During the late 1960s Hawking proved that if the general theory of relativity is correct, then a singularity must also have occurred at the big bang. The big bang is the explosion that marked the beginning of the universe and the birth of space-time itself.

In 1970 Hawking’s research turned to the examination of the properties of black holes. The boundary of a black hole is called the event horizon. Hawking realized that the surface area of the event horizon around a black hole could only increase or remain constant with time—this area could never decrease. This meant, for example, that if two black holes merge, the surface area of the new black hole would be larger than the sum of the surface areas of the two original black holes. He also noticed that there were certain parallels between the laws of thermodynamics and the properties of black holes. For instance, the second law of thermodynamics states that entropy, or disorder, must increase with time. The surface area of the event horizon of a black hole is therefore similar to the entropy of a thermodynamic system.
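Hawking's area result can be illustrated with the standard Schwarzschild horizon formula, A = 16πG²M²/c⁴ (a textbook expression, not given in the text above): because the area grows as the square of the mass, a merged hole's horizon always exceeds the sum of the originals. A minimal sketch, with approximate constants and helper names of my own choosing:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def horizon_area(mass_kg):
    """Event-horizon area of a non-rotating (Schwarzschild) black hole, m^2."""
    r_s = 2 * G * mass_kg / C**2   # Schwarzschild radius
    return 4 * math.pi * r_s**2

# Two 10-solar-mass black holes merging into one 20-solar-mass hole:
a1 = horizon_area(10 * M_SUN)
a2 = horizon_area(10 * M_SUN)
merged = horizon_area(20 * M_SUN)

# Area scales with mass squared, so the merged horizon exceeds the sum.
assert merged > a1 + a2
print(merged / (a1 + a2))  # 2.0 for equal masses
```

In a real merger some mass is radiated away as gravitational waves, so the final mass is slightly less than the sum, but the theorem still holds: the final horizon area exceeds the combined initial areas.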

From 1970 to 1974, Hawking and his associates provided mathematical proof for the hypothesis formulated by American physicist John Wheeler known as the ‘No Hair Theorem.’ This theorem states that the only properties that particles of matter keep once they enter a black hole are mass, angular momentum (or spin), and electric charge. Matter entering a black hole loses its shape, its chemical composition, and its distinction as matter or antimatter.

Since 1974 Hawking has studied the behavior of matter in the immediate vicinity of a black hole from a theoretical basis in quantum mechanics. Quantum mechanics is a theory that describes how subatomic particles behave and how matter and radiation interact. He found, to his initial surprise, that black holes—from which nothing was supposed to be able to escape—could emit thermal radiation, or heat. Several explanations for this phenomenon were proposed, including one involving the creation of virtual particles. A virtual particle differs from a real particle in that a virtual particle cannot be seen by means of a particle detector, but it can be observed through its indirect effects. Empty space is full of virtual particles fleetingly ‘created’ out of nothing, forming a particle and antiparticle pair that immediately annihilate each other. (This concept seems to violate the principle of conservation of mass and energy, which says that the combined amount of mass and energy in a system must stay the same, but it is permitted, and indeed predicted, by the uncertainty principle of German physicist Werner Heisenberg, which states that the energy of a system over a sufficiently short interval of time cannot be known precisely.) Hawking proposed that when a particle pair is created near a black hole, one half of the pair might disappear into the black hole, leaving the other half to radiate away from the black hole. To a distant observer, the radiation of the leftover particle would appear as thermal radiation.
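The temperature of this thermal radiation follows a well-known textbook formula, T = ħc³ / (8πGMk_B), which is not stated in the text above but makes the result concrete: the bigger the black hole, the colder its radiation. A minimal sketch with approximate constants:

```python
import math

HBAR = 1.055e-34   # reduced Planck constant, J*s
C = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23    # Boltzmann constant, J/K
M_SUN = 1.989e30   # solar mass, kg

def hawking_temperature(mass_kg):
    """Temperature (kelvin) of a non-rotating black hole's thermal radiation.
    Note the inverse dependence on mass: larger holes are colder."""
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

# A solar-mass black hole radiates at only about 60 nanokelvin, far colder
# than the cosmic background, so the effect matters mainly for tiny holes.
print(hawking_temperature(M_SUN))  # roughly 6e-8 K
```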

Throughout the 1990s Hawking sought to produce a theory that could connect the several theories scientists use to explain the universe. This theory would combine quantum mechanics and relativity to form a quantum theory of gravity. Such a unified physical theory would incorporate all four basic types of interactions between matter and energy: strong nuclear interactions, weak nuclear interactions, electromagnetic interactions, and gravitational interactions.

The properties of space-time, the beginning of the universe, and a unified theory of physics are all fundamental research areas of science. Hawking has made, and continues to make, major contributions to the modern understanding of all these areas. He has also made his work accessible to the public through several books, including A Brief History of Time (1988) and Black Holes and Baby Universes and Other Essays (1993), which are suitable for a general audience. In 1992 American filmmaker Errol Morris helped make A Brief History of Time into a film about Hawking’s life and work.

The English physicist Stephen Hawking has suggested that many black holes may have formed in the early universe. If this were so, many of these black holes could be too far from other matter to form detectable accretion disks, and they could even compose a significant fraction of the total mass of the universe. For black holes of sufficiently small mass, it is possible for only one member of an electron-positron pair created near the horizon to fall into the black hole while the other escapes. The resulting radiation carries off energy, in a sense evaporating the black hole. Any primordial black holes weighing less than a few thousand million metric tons would have already evaporated, but heavier ones may remain.
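The mass threshold quoted above reflects how steeply evaporation depends on mass. In the standard treatment (a textbook result, not stated in the text), the radiated power falls as 1/M², so the evaporation time grows as M³. A minimal sketch of that scaling:

```python
def relative_lifetime(mass_ratio):
    """Hawking evaporation time scales as the cube of the mass:
    a hole with k times the mass lasts k**3 times as long."""
    return mass_ratio ** 3

# Doubling the mass multiplies the lifetime by eight; a hole ten times
# heavier than one evaporating today would last a thousand times longer.
# This is why a sharp mass cutoff separates holes that have already
# evaporated from those that survive to the present day.
print(relative_lifetime(2))   # 8
print(relative_lifetime(10))  # 1000
```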

Evidence indicates that the matter that scientists detect in the universe is only a small fraction of all the matter that exists. For example, observations of the speeds at which individual galaxies move within clusters of galaxies show that a great deal of unseen matter must exist to exert sufficient gravitational force to keep the clusters from flying apart. Cosmologists now think that much of the universe is dark matter—matter that has gravity but does not give off radiation that we can see or otherwise detect. One kind of dark matter theorized by scientists is cold dark matter, with slowly moving (cold) massive particles. No such particles have yet been detected, though astronomers have made up fanciful names for them, such as Weakly Interacting Massive Particles (WIMPs). Other cold dark matter could be non-radiating stars or planets, which are known as MACHOs (Massive Compact Halo Objects).

An alternative dark-matter model involves hot dark matter, where hot implies that the particles are moving very fast. Neutrinos, fundamental particles that travel at nearly the speed of light, are the prime example of hot dark matter. However, scientists think that the mass of a neutrino is so low that neutrinos can only account for a small portion of dark matter. If the inflationary version of big bang theory is correct, then the amount of dark matter and of whatever else might exist is just enough to bring the universe to the boundary between open and closed.

Scientists develop theoretical models to show how the universe’s structures, such as clusters of galaxies, have formed. Their models invoke hot dark matter, cold dark matter, or a mixture of the two. This unseen matter would have provided the gravitational force needed to bring large structures such as clusters of galaxies together. The theories that include dark matter match the observations, although there is no consensus on the type or types of dark matter that must be included. Supercomputers are important for making such models.

According to the widely accepted theory of the big bang, the universe originated about 14 billion years ago and has been expanding ever since. Astronomers recognize four models of possible futures for the universe. According to the closed model, many billions of years from now expansion will slow, stop, and the universe will contract back in upon itself. In the flat model, the universe will not collapse upon itself, but expansion will slow and the universe will approach a stable size. According to the open model, the universe will continue expanding forever. In the accelerating expansion model, the universe will expand faster and faster until even the particles in normal matter are torn away from each other. Astronomers currently favor the accelerating expansion model.

Astronomers continue to make new observations that are also interpreted within the framework of the big bang theory. No major problems with the big bang theory have been found, but scientists constantly adjust the theory to match the observed universe. In particular, a ‘standard model’ of the big bang has been established by results from NASA's Wilkinson Microwave Anisotropy Probe (WMAP). Launched in 2001, the probe studied the anisotropies, or ripples, in the temperature of cosmic background radiation at a higher resolution than COBE was capable of. These ripples indicate that regions of the young universe were very slightly hotter or cooler, by a factor of about 1/1000, than adjacent regions. WMAP’s observations suggest that the rate of expansion of the universe, called Hubble’s constant, is about 71 km/s/Mpc (kilometers per second per megaparsec, where a parsec is about 3.26 light-years and a megaparsec is a million parsecs). In other words, the distance between any two objects in space that are separated by a million parsecs increases by about 71 km every second in addition to any other motion they may have relative to one another. In combination with previously existing observations, this rate of expansion tells cosmologists that the universe is ‘flat,’ though flatness here does not refer to the actual shape of the universe but rather means that the geometric laws that apply to the universe match those of a flat plane.
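The quoted expansion rate translates directly into recession speeds through Hubble's law, v = H₀ × d. A minimal sketch using the WMAP value (the helper name is mine, for illustration):

```python
H0 = 71.0  # Hubble's constant, km/s per megaparsec (WMAP value quoted above)

def recession_speed(distance_mpc):
    """Hubble's law: recession speed in km/s for a distance in megaparsecs."""
    return H0 * distance_mpc

# Two objects one megaparsec apart drift apart at about 71 km/s;
# a galaxy 100 Mpc away recedes at about 7,100 km/s, on top of any
# other motion it has relative to us.
print(recession_speed(1))    # 71.0
print(recession_speed(100))  # 7100.0
```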

To be flat, the universe must contain a certain amount of matter and energy, known as the critical density. The distribution of sizes of ripples detected by WMAP shows that ordinary matter—like that making up objects and living things on Earth—accounts for only 4.4 percent of the critical density. Dark matter makes up an additional 23 percent. Astoundingly, the remaining 73 percent of the universe is composed of something else—a substance so mysterious that nobody knows much about it. Called ‘dark energy,’ this substance provides the antigravity-like negative pressure that causes the universe's expansion to accelerate rather than slow down. This ‘accelerating universe’ was detected independently by two competing groups of astronomers in the last years of the 20th century. The ideas of an accelerating universe and the existence of dark energy have caused astronomers to substantially modify previous ideas of the big bang universe.
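As a quick arithmetic check, the three quoted fractions should account for essentially all of the critical density. A minimal tally (the dictionary layout is mine, for illustration):

```python
# Composition of the critical density as quoted in the text (percent).
budget = {
    "ordinary matter": 4.4,
    "dark matter": 23.0,
    "dark energy": 73.0,
}

total = sum(budget.values())
print(total)  # 100.4, consistent with 100 percent given rounding

for component, percent in budget.items():
    print(f"{component}: {percent / total:.1%} of the total")
```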

WMAP's results also show that cosmic background radiation was set free about 389,000 years after the big bang, later than was previously thought, and that the first stars formed about 200 million years after the big bang, earlier than anticipated. Further refinements to the big bang theory are expected from WMAP, which continues to collect data. An even more precise mission to study the beginnings of the universe, the European Space Agency’s Planck spacecraft, was launched in 2009.

Dark Matter, in astronomy, designation for matter that does not give off or reflect detectable electromagnetic radiation, the radiant energy that includes visible light, radio waves, infrared radiation, X rays, and gamma rays. Although dark matter is practically invisible, astrophysicists have determined its existence by detecting its gravitational interaction with matter that does give off detectable electromagnetic radiation, such as stars, galaxies, and clusters of galaxies. Dark matter has become a vital component of modern theories of cosmology and elementary particle physics. Along with the phenomenon of dark energy, the puzzle of what dark matter is represents one of the most important questions in physics today.

Many astronomers believe that as much as 90 percent of the matter in the universe is dark matter (matter that does not emit light). Some scientists think dark matter may be exotic particles that do not consist of the atoms making up ordinary matter as we know it. Although dark matter currently cannot be observed directly, scientists have evidence of its existence from observations of its gravitational influence on visible bodies, such as the vast collections of stars known as galaxies. In this article from Scientific American Presents, astronomer Vera Rubin explores contemporary views on the nature of dark matter.

British astrophysicist Sir Martin Rees, astronomer and Royal Society research professor at the Institute of Astronomy, Cambridge University, in Cambridge, England, is one of the world’s leading cosmologists (scientists who study the origin and evolution of the universe). Rees is the recipient of numerous awards and honorary degrees, including the Gold Medal of the Royal Astronomical Society, the Bruce Medal of the Astronomical Society of the Pacific, and the Cosmology Prize of the Peter Gruber Foundation. A former president of the British Association for the Advancement of Science, Sir Martin has also been at the forefront of communicating the current understanding of the universe to the wider public. He has written several books for the general reader, including Just Six Numbers (1999) and Our Cosmic Habitat (2002). Encarta World English editor Latha Menon interviewed Rees about his views on some of the latest developments in cosmology, including why the universe seems to have a unique affinity for life. British spellings have been retained.

The existence of dark matter was first suggested in the early 20th century by the Swiss American astronomer Fritz Zwicky, but convincing and overwhelming evidence of its existence was gathered by the American astronomer Vera Rubin in the 1970s. In the early 1930s, Zwicky studied the rotational motions of thousands of galaxies clustered together in a large group of galaxies known as the Coma Cluster. He found that the orbital motion of the galaxies around their common center of mass could only be explained by the presence of unseen matter, which astronomers now call dark matter. Zwicky’s suggestion was not taken very seriously at first because there was not a great amount of evidence to support such a radical suggestion.

In the early 1970s, however, Rubin studied the orbital motions of stars in a large number of galaxies. As these stars orbited their galactic centers, Rubin noticed that the outlying stars in the galaxies were moving so fast that they should have been flung out of the galaxies. But since they were still part of the galaxies, Rubin proposed that unseen matter was keeping them gravitationally bound to the galaxies. A similar observation was reported in the early 1930s for the stars in our own Milky Way Galaxy by Dutch astronomer Jan Oort, but the dark matter interpretation was not considered at that time.

Following up on the observational data gathered by Oort, Zwicky, Rubin, and other astronomers, two American theoretical astrophysicists, Jeremiah P. Ostriker and P. J. E. Peebles, contributed important theoretical analyses. The data and the analyses helped scientists determine that dark matter probably constitutes as much as 90 percent of all the matter in the universe. Scientists verified that the orbital motions of stars in galaxies cannot be explained by the mutual gravitational influence of all the other visible stars. To explain this orbital motion, dark matter must be present. Rubin and other astronomers and astrophysicists showed that this dark matter seems to be distributed in a large envelope or ‘halo’ around the visible matter of the galaxy. As a result the galaxies are much larger than what can actually be observed through a telescope.

Some scientists have theorized that there may be other explanations for the orbital motions of stars besides dark matter, such as a new type of force in nature that exerts itself over vast distances or a modification of the law of gravity. But so far, neither a new force nor a new understanding of gravity has been found. The hypothetical existence of dark matter, however, does explain the observed interactions with ordinary matter extremely well without resorting to a new long-range force.

To gain a fuller understanding of our universe, it is vital to determine exactly what dark matter is made of. Scientists think that dark matter occurs in several different forms. Moreover, observations and experiments place limits on the quantity and distribution of each type. There are two broad categories of dark matter: ‘hot’ dark matter, which moves at speeds comparable to the speed of light (about 300,000 km per second or 186,000 mi per second), and ‘cold’ dark matter, which moves at speeds well below that of light.

The elementary particle called the neutrino, discovered in 1956, is an example of a hot dark matter candidate. Various experiments and observations, such as those reported in 1998 by the Super-Kamiokande experiment in Japan, have shown that the neutrino has mass. Mass is the quality that causes gravitational attraction. The mass of the neutrino is extremely small, which is why the particle travels at speeds comparable to that of light. Neutrinos are extremely abundant in the universe because they are produced in enormous numbers in nuclear interactions that take place at the core of every star. For example, several trillion neutrinos pass through each person on Earth each second as a result of the nuclear reactions that cause the Sun to shine. Because neutrinos are electrically neutral they can pass easily through ordinary matter, such as through people, and so are able to spread throughout a region near ordinary matter. Their large numbers could enable them to be a significant component of dark matter despite the tiny amount of mass in an individual neutrino.

However, there is evidence that dark matter cannot be made up mostly of neutrinos, for two reasons. First, their likely mass is still too small to provide enough matter to account for the gravitational effects seen in the orbital motions of stars. In addition, some form of dark matter was necessary in the early universe to create the early structures that eventually led to the formation of stars and galaxies. Neutrinos could not have played this role, in part because they could not have been created in the required quantities until stars actually formed.

Secondly, neutrinos are too energetic to have helped seed the process of star and galaxy formation. Some other form of dark matter must have contributed to star and galaxy formation, which developed from localized structures, or lumps, known as anisotropies. These lumps have been detected in the cosmic microwave background radiation, the radiation left over from the formation of the universe in the big bang about 13.7 billion years ago.

Detailed observations of the cosmic background radiation show that these localized lumps in the early universe were too small to be seeded by fast-moving particles such as neutrinos. For the anisotropies to have formed in the early universe, a large component of slower, cold dark matter must have been present.

Another candidate for the dark matter is known as baryonic cold dark matter. Baryonic cold dark matter is made of protons and neutrons, the subatomic particles known as baryons that make up ordinary matter and combine with electrons to form atoms. Baryonic cold dark matter could be found in celestial objects that were not massive enough to initiate the fusion processes that make stars shine. It could also be made of matter that collapsed to form dense objects such as neutron stars or even black holes (objects so dense that not even light can escape their gravitational field). Such objects are collectively referred to as ‘MACHOs,’ which means ‘massive compact halo objects.’

Astronomical observations limit the amount of MACHOs that could exist. For example, if they were numerous, such objects would inevitably pass close to the line of sight between an observer on Earth and a distant visible object. Albert Einstein’s theory of gravity, known as the theory of general relativity, describes how light is bent by the presence of mass and energy. Mass and energy set up a gravitational field through which light passes. The MACHO object’s gravitational field would therefore bend the light of a more distant visible object and produce a distortion of its image. Such a process is called a gravitational lens. Several such gravitational lenses have been observed, and the measured frequency of such events has placed limits on how much dark matter can take the form of MACHOs. From the observations that have been made by astronomers, it is now known that MACHOs cannot be the dominant constituent of dark matter. There are simply not enough such gravitational lenses.

Physicists suspect that a more exotic form of cold dark matter must exist. This form is not baryonic. Like neutrinos, this form barely interacts with ordinary matter, but is some type of massive particle. Such candidates are often called WIMPs (for ‘weakly interacting massive particles’).

Various theories of the fundamental elementary particles and their interactions in nature predict the existence of new particles that have not yet been discovered. These hypothetical particles are excellent candidates for WIMPs. For example, supersymmetric theories propose that a new fundamental symmetry of nature was present when the universe was very young and energetic. If this symmetry existed in the past, it requires the hypothetical particles to exist. The symmetry would have disappeared as a result of natural dynamical effects as the universe aged and became less energetic.

As a result we do not observe this symmetry today. The loss of symmetry over time is very similar to what happens as water freezes and turns to ice as it is cooled. Water is more symmetrical than ice since its molecules point in all directions in space, whereas ice forms a crystal lattice with the molecules pointing in the preferred directions of the lattice. As the universe expands, it also cools, and this can give rise to the disappearance of important symmetries in later epochs.

The supersymmetric theories predict that relics of the earlier, more symmetric phase of the universe exist and that they exist in the form of a very specific family of massive particles. New experiments, such as those planned for the Large Hadron Collider (LHC), a particle accelerator at the European Organization for Nuclear Research (CERN) in Switzerland, are expected to be able to test some of these theories. The experiments will try to create the predicted particles directly, using the energetic collisions produced in the LHC. If the particles are discovered, the LHC and other proposed experiments will systematically study their properties to determine if they are indeed the sought-after principal components of dark matter. This type of dark matter would be the key agent of structure formation that gave rise to the galaxies and clusters of galaxies that constitute our home in the universe.

In 2006 astronomers announced what may be the first direct evidence for dark matter in the cosmos, based on observations from the Chandra X-ray Observatory telescope, the Hubble Space Telescope, and advanced telescopes on Earth. A collision between two clusters of galaxies over four billion light-years away apparently caused visible matter and dark matter to separate on a gigantic scale. Huge gas clouds between and around the galaxies in each group smashed together as the two clusters raced past each other, heating the gas to a plasma state and dragging it away from the dark matter halos thought to surround the galaxies. The massive presence of dark matter was detected by gravitational lensing that bent the light from more distant background galaxies in the telescopes’ line of sight. The findings appear to support theories that dark matter exists and that gravity behaves the same in giant clusters of galaxies as it does on Earth.

Alnitak, also known as Zeta Orionis, fifth brightest star in the constellation Orion, the Hunter. The name Alnitak is derived from the Arabic phrase Al Nitak, which means ‘The Girdle.’ This refers to Alnitak’s position as the easternmost star in Orion’s three-star belt. At this position, Alnitak is visible in the evening sky from December through March, slightly south of the celestial equator—the imaginary plane defined by projecting the earth’s equator into space. Like other stars near the celestial equator, Alnitak is equally visible to observers in both the northern and southern hemispheres.

Stars that are visible to the unaided eye, such as Alnitak, belong to the earth’s home galaxy, the Milky Way, and tend to be very bright or relatively close. Alnitak is very bright, with an intrinsic luminosity, or total light output, that rates an absolute magnitude of -6.2 (bright stars have low or even negative magnitude values). This magnitude corresponds to the light output of 25,000 suns. Alnitak is fairly distant from the earth, at 1400 light-years. Because of this distance, it only shines in the night sky with an apparent magnitude—a measure of how bright it appears to an observer on the earth—of +2.05, making it one of the 100 brightest stars as viewed from the earth.
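The quoted magnitudes are tied together by the standard distance-modulus relation, m = M + 5 log₁₀(d / 10 pc), and by the rule that every 5 magnitudes corresponds to a factor of 100 in brightness. A minimal sketch checking Alnitak's numbers (the sun's absolute magnitude of about +4.83 is my assumption, not stated in the text):

```python
import math

LY_PER_PARSEC = 3.26  # light-years per parsec

def apparent_magnitude(absolute_mag, distance_ly):
    """Distance modulus: m = M + 5*log10(d / 10 pc)."""
    d_pc = distance_ly / LY_PER_PARSEC
    return absolute_mag + 5 * math.log10(d_pc / 10)

def luminosity_in_suns(absolute_mag, sun_absolute_mag=4.83):
    """Every 5 magnitudes is a factor of 100 in brightness (assumed solar M)."""
    return 10 ** ((sun_absolute_mag - absolute_mag) / 2.5)

# Alnitak: absolute magnitude -6.2 at 1,400 light-years.
print(round(apparent_magnitude(-6.2, 1400), 2))  # close to the quoted +2.05
print(round(luminosity_in_suns(-6.2)))           # on the order of 25,000 suns
```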

Alnitak’s brightness is due to a combination of high surface temperature and large surface area. Its surface temperature is 26,000° C (46,000° F), more than four times hotter than the sun’s, giving Alnitak a brilliant blue color. Stars with such high surface temperatures and large sizes are known as supergiant stars. The sizes of the hot blue supergiants are difficult to estimate. Using the best available data, astronomers estimate that the diameter of Alnitak is from 20 to 40 million km (12 to 24 million mi), which is about 15 to 30 times greater than the diameter of the sun. The surface area of Alnitak is thus about 200 to 900 times the surface area of the sun. Hot blue supergiants such as Alnitak are extremely massive, rapidly evolving stars that typically end in supernova explosions after only a few million years. In contrast, astronomers estimate that the sun is more than 4 billion years old and will live for several billion years more.
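The jump from a 15-to-30-fold diameter to a 200-to-900-fold surface area follows from simple geometry: a sphere's surface area (4πr²) scales as the square of its diameter. A minimal check of the quoted range:

```python
def relative_surface_area(diameter_ratio):
    """A sphere's surface area (4*pi*r**2) scales as the square of its diameter."""
    return diameter_ratio ** 2

# 15 to 30 solar diameters gives 225 to 900 solar surface areas,
# matching the article's rounded range of about 200 to 900.
print(relative_surface_area(15))  # 225
print(relative_surface_area(30))  # 900
```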

South of Alnitak is the famous Horsehead Nebula. The Horsehead, named for its resemblance to a horse’s head, is the sky’s most famous example of a dark nebula. Dark nebulas, sometimes called absorption nebulas, are black clouds that float in interstellar space far from any source of illumination; they become visible when they obscure brighter objects behind.

Antimatter, matter composed of elementary particles that are, in a special sense, mirror images of the particles that make up ordinary matter as it is known on earth. Antiparticles have the same mass as their corresponding particles but have opposite electric charges or other properties related to electromagnetism. For example, the antimatter electron, or positron, has opposite electric charge and magnetic moment (a property that determines how it behaves in a magnetic field), but is identical in all other respects to the electron. The antimatter equivalent of the chargeless neutron, on the other hand, differs in having a magnetic moment of opposite sign (magnetic moment is another electromagnetic property). In all of the other parameters involved in the dynamical properties of elementary particles, such as mass, spin, and decay lifetime, antiparticles are identical with their corresponding particles.

The existence of antiparticles was first proposed by the British physicist Paul Adrien Maurice Dirac, arising from his attempt to apply the techniques of relativistic mechanics to quantum theory. In 1928 he developed the concept of a positively charged electron, but its actual existence was established experimentally in 1932. The existence of other antiparticles was presumed but not confirmed until 1955, when antiprotons and antineutrons were observed in particle accelerators. Since then, the full range of antiparticles has been observed or indicated. Antimatter atoms were created for the first time in September 1995 at the European Organization for Nuclear Research (CERN). Positrons were combined with antimatter protons to produce antimatter hydrogen atoms. These atoms of antimatter existed for only forty-billionths of a second, but physicists hope future experiments will determine what differences there are between normal hydrogen and its antimatter counterpart.

A profound problem for particle physics and for cosmology in general is the apparent scarcity of antiparticles in the universe. Their nonexistence, except momentarily, on earth is understandable, because particles and antiparticles are mutually annihilated with a great release of energy when they meet. Distant galaxies could possibly be made of antimatter, but no direct method of confirmation exists. Most of what is known about the far universe arrives in the form of photons, which are identical with their antiparticles and thus reveal little about the nature of their sources. The prevailing opinion, however, is that the universe consists overwhelmingly of ‘ordinary’ matter, and explanations for this have been proposed by recent cosmological theory.

In 1997 scientists studying data gathered by the Compton Gamma Ray Observatory (GRO), operated by the National Aeronautics and Space Administration (NASA), found that the earth’s home galaxy, the Milky Way, contains large clouds of antimatter particles. Astronomers suggest that these clouds form when high-energy events, such as the collision of neutron stars, exploding stars, or black holes, create radioactive elements that decay into matter and antimatter or heat matter enough to make it split into particles of matter and antimatter. When antimatter particles meet particles of matter, the two annihilate each other and produce a burst of gamma rays. It was these gamma rays that GRO detected.
