BY: RICHARD J. KOSCIEJEW
Philosophy of mind is the branch of philosophy that considers mental phenomena such as sensation, perception, thought, belief, desire, intention, memory, emotion, imagination, and purposeful action. These phenomena, which can be broadly grouped as thoughts and experiences, are features of human beings; many of them are also found in other animals. Philosophers are interested in the nature of each of these phenomena as well as their relationships to one another and to physical phenomena, such as motion.
In the 17th century, French philosopher René Descartes proposed that only two substances ultimately exist: mind and body. Yet, if the two are entirely distinct, as Descartes believed, how can one substance interact with the other? How, for example, is the intention of a human mind able to cause movement in the person’s limbs? The issue of the interaction between mind and body is known in philosophy as the mind-body problem.
Many fields other than philosophy share an interest in the nature of mind. In religion, the nature of mind is connected with various conceptions of the soul and the possibility of life after death. In many abstract theories of mind there is considerable overlap between philosophy and the science of psychology. Once part of philosophy, psychology split off and formed a separate branch of knowledge in the 19th century. While psychology uses scientific experiments to study mental states and events, philosophy uses reasoned arguments and thought experiments in seeking to understand the concepts that underlie mental phenomena. Also influenced by philosophy of mind is the field of artificial intelligence (AI), which endeavours to develop computers that can mimic what the human mind can do. Cognitive science attempts to integrate the understanding of mind provided by philosophy, psychology, AI, and other disciplines. Finally, all of these fields benefit from the detailed understanding of the brain that has emerged through neuroscience in the late 20th century.
Philosophers use the characteristics of inward accessibility, subjectivity, intentionality, goal-directedness, creativity and freedom, and consciousness to distinguish mental phenomena from physical phenomena.
Perhaps the most important characteristic of mental phenomena is that they are inwardly accessible, or available to us through introspection. We each know our own minds—our sensations, thoughts, memories, desires, and fantasies—in a direct sense, by internal reflection. We also know our mental states and mental events in a way that no one else can. In other words, we have privileged access to our own mental states.
Certain mental phenomena, those we generally call experiences, have a subjective nature; that is, they have certain characteristics we become aware of when we reflect. For instance, there is ‘something it is like’ to feel pain, or have an itch, or see something red. These characteristics are subjective in that they are accessible to the subject of the experience, the person who has the experience, but not to others.
Other mental phenomena, which we broadly refer to as thoughts, have a characteristic philosophers call intentionality. Intentional thoughts are about other thoughts or objects, which are represented as having certain properties or as being related to one another in a certain way. The belief that California is west of Nevada, for example, is about California and Nevada and represents the former as being west of the latter. Although we have privileged access to our intentional states, many of them do not seem to have a subjective nature, at least not in the way that experiences do.
A number of mental phenomena appear to be connected to one another as elements in an intelligent, goal-directed system. The system works as follows: First, our sense organs are stimulated by events in our environment; next, by virtue of these stimulations, we perceive things about the external world; finally, we use this information, as well as information we have remembered or inferred, to guide our actions in ways that further our goals. Goal-directedness seems to accompany only mental phenomena.
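Purely as an illustration of the sense–perceive–act cycle just described, and not as a claim about how minds actually work, the pipeline can be sketched as a minimal agent loop. Every name here (`Agent`, `perceive`, `act`, the goal string) is hypothetical:

```python
# Illustrative sketch of a goal-directed system: stimuli are sensed,
# turned into percepts, stored in memory, and used (together with
# remembered information) to choose actions that further a goal.
# All names are hypothetical.

class Agent:
    def __init__(self, goal):
        self.goal = goal      # the condition the agent tries to bring about
        self.memory = []      # previously perceived information

    def perceive(self, stimulus):
        """Convert a raw environmental stimulus into a percept and remember it."""
        percept = f"perceived:{stimulus}"
        self.memory.append(percept)   # retained for later inference
        return percept

    def act(self, percept):
        """Choose an action using current (and, in principle, remembered) information."""
        if self.goal in percept:
            return "approach"
        return "search"

agent = Agent(goal="food")
p = agent.perceive("food nearby")
print(agent.act(p))     # -> approach
print(agent.memory)     # -> ['perceived:food nearby']
```

The point of the sketch is only structural: stimulation, perception, memory, and action are distinct stages feeding one another, which is the connectedness the paragraph above attributes to mental phenomena.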
Another important characteristic of mind, especially of human minds, is the capacity for choice and imagination. Rather than automatically converting past influences into future actions, individual minds are capable of exhibiting creativity and freedom. For instance, we can imagine things we have not experienced and can act in ways that no one expects or could predict.
Scientists have long considered the nature of consciousness without producing a fully satisfactory definition. In the early 20th century American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James’s work. In a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explained how experiments on vision might deepen our understanding of consciousness.
Mental phenomena are conscious, and consciousness may be the closest term we have for describing what is special about mental phenomena. Minds are sometimes referred to as consciousness, yet it is difficult to describe exactly what consciousness is. Although consciousness is closely related to inward accessibility and subjectivity, these very characteristics seem to hinder us in reaching an objective scientific understanding of it.
Although philosophers have written about mental phenomena since ancient times, the philosophy of mind did not garner much attention until the work of French philosopher René Descartes in the 17th century. Descartes’s work represented a turning point in thinking about mind by making a strong distinction between bodies and minds, or the physical and the mental. This duality between mind and body, known as Cartesian dualism, has posed significant problems for philosophy ever since.
Descartes believed there are two basic kinds of things in the world, a belief known as substance dualism. For Descartes, the principles of existence for these two groups of things—bodies and minds—are completely different from one another: Bodies exist by being extended in space, while minds exist by being conscious. According to Descartes, nothing can be done to give a body thought and consciousness. No matter how we shape a body or combine it with other bodies, we cannot turn the body into a mind, a thing that is conscious, because being conscious is not a way of being extended.
For Descartes, a person consists of a human body and a human mind causally interacting with one another. For example, the intentions of a human being may cause that person’s limbs to move. In this way, the mind can affect the body. In addition, the sense organs of a human being may be affected by light, pressure, or sound, external sources that in turn affect the brain and thereby mental states. Thus the body may affect the mind. Exactly how mind can affect body, and vice versa, is a central issue in the philosophy of mind, and is known as the mind-body problem. According to Descartes, this interaction of mind and body is peculiarly intimate. Unlike the interaction between a pilot and his ship, the connection between mind and body more closely resembles two substances that have been thoroughly mixed together.
In response to the mind-body problem arising from Descartes’s theory of substance dualism, a number of philosophers have advocated various forms of substance monism, the doctrine that there is ultimately just one kind of thing in reality. In the 18th century, Irish philosopher George Berkeley claimed there were no material objects in the world, only minds and their ideas. Berkeley thought that talk about physical objects was simply a way of organizing the flow of experience. Near the turn of the 20th century, American psychologist and philosopher William James proposed another form of substance monism. James claimed that experience is the basic stuff from which both bodies and minds are constructed.
Most philosophers of mind today are substance monists of a third type: They are materialists who believe that everything in the world is basically material, or a physical object. Among materialists, there is still considerable disagreement about the status of mental properties, which are conceived as properties of bodies or brains. Materialists who are property dualists believe that mental properties are an additional kind of property or attribute, not reducible to physical properties. Property dualists have the problem of explaining how such properties can fit into the world envisaged by modern physical science, according to which there are physical explanations for all things.
Materialists who are property monists believe that there is ultimately only one type of property, although they disagree on whether or not mental properties exist in material form. Some property monists, known as reductive materialists, hold that mental properties exist simply as a subset of relatively complex and nonbasic physical properties of the brain. Reductive materialists have the problem of explaining how the physical states of the brain can be inwardly accessible and have a subjective character, as mental states do. Other property monists, known as eliminative materialists, consider the whole category of mental properties to be a mistake. According to them, mental properties should be treated as discredited postulates of an outmoded theory. Eliminative materialism is difficult for most people to accept, since we seem to have direct knowledge of our own mental phenomena by introspection and because we use the general principles we understand about mental phenomena to predict and explain the behaviour of others.
Philosophy of mind concerns itself with a number of specialized problems. In addition to the mind-body problem, important issues include those of personal identity, immortality, and artificial intelligence.
During much of Western history, the mind has been identified with the soul as presented in Christian theology. According to Christianity, the soul is the source of a person’s identity and is usually regarded as immaterial; thus it is capable of enduring after the death of the body. Descartes’s conception of the mind as a separate, nonmaterial substance fits well with this understanding of the soul. In Descartes’s view, we are aware of our bodies only as the cause of sensations and other mental phenomena. Consequently, our personal essence is composed more fundamentally of mind, and the preservation of the mind after death would constitute our continued existence.
The mind conceived by materialist forms of substance monism does not fit as neatly with this traditional concept of the soul. With materialism, once a physical body is destroyed, nothing enduring remains. Some philosophers think that a concept of personal identity can be constructed that permits the possibility of life after death without appealing to separate immaterial substances. Following in the tradition of 17th-century British philosopher John Locke, these philosophers propose that a person consists of a stream of mental events linked by memory. It is these links of memory, rather than a single underlying substance, that provide the unity of a single consciousness through time. Immortality is conceivable if we think of these memory links as connecting a later consciousness in heaven with an earlier one on earth.
The field of artificial intelligence also raises interesting questions for the philosophy of mind. People have designed machines that mimic or model many aspects of human intelligence, and there are robots currently in use whose behaviour is described in terms of goals, beliefs, and perceptions. Such machines are capable of behaviour that, were it exhibited by a human being, would surely be taken to be free and creative. As an example, in 1996 an IBM computer named Deep Blue won a chess game against Russian world champion Garry Kasparov under international match regulations. Moreover, it is possible to design robots that have some sort of privileged access to their internal states. Philosophers disagree over whether such robots truly think or simply appear to think and whether such robots should be considered to be conscious.
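As a purely illustrative sketch of what "privileged access to internal states" might mean for a machine (not a claim about any actual robot, and with every name here hypothetical), a program can keep a state that only it reads directly, while outside observers see nothing but behaviour:

```python
# Illustrative sketch: a machine whose internal state is private.
# Only the robot itself reads the state directly (introspect);
# outside observers see only behaviour (behave).
# All names are hypothetical.

class Robot:
    def __init__(self):
        self.__charge = 0.9    # "private" internal state (name-mangled in Python)

    def introspect(self):
        """Only the robot reports its own internal state directly."""
        return {"charge": self.__charge}

    def behave(self):
        """What an outside observer sees: behaviour, not the state itself."""
        return "working" if self.__charge > 0.2 else "recharging"

r = Robot()
print(r.behave())       # observers see only this -> working
print(r.introspect())   # the robot's own report -> {'charge': 0.9}
```

Whether such a self-report amounts to the privileged access philosophers attribute to introspection, or merely mimics it, is exactly the disagreement the paragraph above describes.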
Husserl was born in Prossnitz, Moravia (now in the Czech Republic), on April 8, 1859. He studied science, philosophy, and mathematics at the universities of Leipzig, Berlin, and Vienna and wrote his doctoral thesis on the calculus of variations. He became interested in the psychological basis of mathematics and, shortly after becoming a lecturer in philosophy at the University of Halle, wrote his first book, Philosophie der Arithmetik (1891). At that time he maintained that the truths of mathematics have validity regardless of the way people come to discover and believe in them.
Husserl argued against his early position, which he called psychologism, in Logical Investigations (1900-1901; trans. 1970). In this book, regarded as a radical departure in philosophy, he contended that the philosopher's task is to contemplate the essences of things, and that the essence of an object can be arrived at by systematically varying that object in the imagination. Husserl noted that consciousness is always directed toward something. He called this directedness intentionality and argued that consciousness contains ideal, unchanging structures called meanings, which determine what object the mind is directed toward at any given time.
During his tenure (1901-1916) at the University of Göttingen, Husserl attracted many students, who began to form a distinct phenomenological school, and he wrote his most influential work, Ideas: A General Introduction to Pure Phenomenology (1913; trans. 1931). In this book Husserl introduced the term phenomenological reduction for his method of reflection on the meanings the mind employs when it contemplates an object. Because this method concentrates on meanings that are in the mind, whether or not the object present to consciousness actually exists, Husserl said the method involves ‘bracketing existence,’ that is, setting aside the question of the real existence of the contemplated object. He proceeded to give detailed analyses of the mental structures involved in perceiving particular types of objects, describing in detail, for instance, his perception of the apple tree in his garden. Thus, although phenomenology does not assume the existence of anything, it is nonetheless a descriptive discipline; according to Husserl, phenomenology is devoted, not to inventing theories, but rather to describing the ‘things themselves.’
After 1916 Husserl taught at the University of Freiburg. Phenomenology had been criticized as an essentially solipsistic method, confining the philosopher to the contemplation of private meanings, so in Cartesian Meditations (1931; trans. 1960), Husserl attempted to show how the individual consciousness can be directed toward other minds, society, and history. Husserl died in Freiburg on April 26, 1938.
Husserl's phenomenology had a great influence on a younger colleague at Freiburg, Martin Heidegger, who developed existential phenomenology, and on Jean-Paul Sartre and French existentialism. Phenomenology remains one of the most vigorous tendencies in contemporary philosophy, and its impact has also been felt in theology, linguistics, psychology, and the social sciences.
The founder of phenomenology, German philosopher Edmund Husserl, introduced the term in his book Ideen zu einer reinen Phänomenologie und phänomenologischen Philosophie (1913; Ideas: A General Introduction to Pure Phenomenology, 1931). Early followers of Husserl such as German philosopher Max Scheler, influenced by his previous book, Logische Untersuchungen (two volumes, 1900 and 1901; Logical Investigations, 1970), claimed that the task of phenomenology is to study essences, such as the essence of emotions. Although Husserl himself never gave up his early interest in essences, he later held that only the essences of certain special conscious structures are the proper object of phenomenology. As formulated by Husserl after 1910, phenomenology is the study of the structures of consciousness that enable consciousness to refer to objects outside itself. This study requires reflection on the content of the mind to the exclusion of everything else. Husserl called this type of reflection the phenomenological reduction. Because the mind can be directed toward nonexistent as well as real objects, Husserl noted that phenomenological reflection does not presuppose that anything exists, but rather amounts to a ‘bracketing of existence’—that is, setting aside the question of the real existence of the contemplated object.
What Husserl discovered when he contemplated the content of his mind were such acts as remembering, desiring, and perceiving, in addition to the abstract content of these acts, which Husserl called meanings. These meanings, he claimed, enabled an act to be directed toward an object under a certain aspect; and such directedness, called intentionality, he held to be the essence of consciousness. Transcendental phenomenology, according to Husserl, was the study of the basic components of the meanings that make intentionality possible. Later, in Méditations cartésiennes (1931; Cartesian Meditations, 1960), he introduced genetic phenomenology, which he defined as the study of how these meanings are built up in the course of experience.
Phenomenology attempts to describe reality in terms of pure experience by suspending all beliefs and assumptions about the world. Though first defined as descriptive psychology, phenomenology attempts philosophical rather than psychological investigations into the nature of human beings. Influenced by his colleague Edmund Husserl (known as the founder of phenomenology), German philosopher Martin Heidegger published Sein und Zeit (Being and Time) in 1927, an effort to describe the phenomenon of being by considering the full scope of existence.
All phenomenologists follow Husserl in attempting to use pure description. Thus, they all subscribe to Husserl's slogan ‘To the things themselves.’ They differ among themselves, however, as to whether the phenomenological reduction can be performed, and as to what is manifest to the philosopher giving a pure description of experience. German philosopher Martin Heidegger, Husserl's colleague and most brilliant critic, claimed that phenomenology should make manifest what is hidden in ordinary, everyday experience. He thus attempted in Sein und Zeit (1927; Being and Time, 1962) to describe what he called the structure of everydayness, or being-in-the-world, which he found to be an interconnected system of equipment, social roles, and purposes.
Because, for Heidegger, one is what one does in the world, a phenomenological reduction to one's own private experience is impossible; and because human action consists of a direct grasp of objects, it is not necessary to posit a special mental entity called a meaning to account for intentionality. For Heidegger, being thrown into the world among things in the act of realizing projects is a more fundamental kind of intentionality than that revealed in merely staring at or thinking about objects, and it is this more fundamental intentionality that makes possible the directedness analyzed by Husserl.
In the mid-1900s French existentialist Jean-Paul Sartre attempted to adapt Heidegger’s phenomenology to the philosophy of consciousness, in effect returning to the approach of Husserl. Sartre agreed with Husserl that consciousness is always directed at objects but criticized his claim that such directedness is possible only by means of special mental entities called meanings. The French philosopher Maurice Merleau-Ponty rejected Sartre’s view that phenomenological description reveals human beings to be pure, isolated, and freely conscious. He stressed the role of the active, involved body in all human knowledge, thus generalizing Heidegger’s insights to include the analysis of perception. Like Heidegger and Sartre, Merleau-Ponty is an existential phenomenologist, in that he denies the possibility of bracketing existence.
Phenomenology has had a pervasive influence on 20th-century thought. Phenomenological versions of theology, sociology, psychology, psychiatry, and literary criticism have been developed, and phenomenology remains one of the most important schools of contemporary philosophy.
Philosophers use the characteristics of inward accessibility, subjectivity, intentionality, goal-directedness, creativity and freedom, and consciousness to distinguish mental phenomena from physical phenomena.
Perhaps the most important characteristic of mental phenomena is that they are inwardly accessible, or available to us through introspection. We each know our own minds—our sensations, thoughts, memories, desires, and fantasies—in a direct sense, by internal reflection. We also know our mental states and mental events in a way that no one else can. In other words, we have privileged access to our own mental states.
Certain mental phenomena, those we generally call experiences, have a subjective nature - that is, they have certain characteristics we become aware of when we reflect. For instance, there is ‘something it is like’ to feel pain, or have an itch, or see something red. These characteristics are subjective in that they are accessible to the subject of the experience, the person who has the experience, but not to others.
Other mental phenomena, which we broadly refer to as thoughts, have a characteristic philosophers call intentionality. Intentional thoughts are about other thoughts or objects, which are represented as having certain properties or as being related to one another in a certain way. The belief that California is west of Nevada, for example, is about California and Nevada and represents the former as being west of the latter. Although we have privileged access to our intentional states, many of them do not seem to have a subjective nature, at least not in the way that experiences do.
A number of mental phenomena appear to be connected to one another as elements in an intelligent, goal-directed system. The system works as follows: First, our sense organs are stimulated by events in our environment; next, by virtue of these stimulations, we perceive things about the external world; finally, we use this information, as well as information we have remembered or inferred, to guide our actions in ways that further our goals. Goal-directedness seems to accompany only mental phenomena.
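The stimulation–perception–action cycle described above can be rendered as a minimal sketch. Everything here, the Agent class, the toy numerical goal, is an illustrative assumption about how such a system could be modelled, not a claim about how minds are actually implemented:

```python
# A toy goal-directed system: stimuli are perceived, stored in memory,
# and used to choose actions that move the system toward its goal.
class Agent:
    def __init__(self, goal):
        self.goal = goal      # desired state, here a target number
        self.memory = []      # remembered past perceptions

    def perceive(self, stimulus):
        # Sense organs are stimulated; the percept is remembered.
        self.memory.append(stimulus)
        return stimulus

    def act(self, percept):
        # Use current information to select a goal-furthering action.
        if percept < self.goal:
            return "increase"
        elif percept > self.goal:
            return "decrease"
        return "rest"

agent = Agent(goal=20)
actions = [agent.act(agent.perceive(s)) for s in (15, 25, 20)]
# actions == ["increase", "decrease", "rest"]
```

The point of the sketch is only that perception, memory, and action form one interlocking system organized around a goal.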
Another important characteristic of mind, especially of human minds, is the capacity for choice and imagination. Rather than automatically converting past influences into future actions, individual minds are capable of exhibiting creativity and freedom. For instance, we can imagine things we have not experienced and can act in ways that no one expects or could predict.
Scientists have long considered the nature of consciousness without producing a fully satisfactory definition. In the early 20th century American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James’s work. In a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explained how experiments on vision might deepen our understanding of consciousness.
Mental phenomena are conscious, and consciousness may be the closest term we have for describing what is special about mental phenomena. Consciousness is often treated as the defining mark of the mental, yet it is difficult to describe exactly what consciousness is. Although consciousness is closely related to inward accessibility and subjectivity, these very characteristics seem to hinder us in reaching an objective scientific understanding of it.
Although philosophers have written about mental phenomena since ancient times, the philosophy of mind did not garner much attention until the work of French philosopher René Descartes in the 17th century. Descartes’s work represented a turning point in thinking about mind by making a strong distinction between bodies and minds, or the physical and the mental. This duality between mind and body, known as Cartesian dualism, has posed significant problems for philosophy ever since.
Descartes believed there are two basic kinds of things in the world, a belief known as substance dualism. For Descartes, the principles of existence for these two groups of things—bodies and minds—are completely different from one another: Bodies exist by being extended in space, while minds exist by being conscious. According to Descartes, nothing can be done to give a body thought and consciousness. No matter how we shape a body or combine it with other bodies, we cannot turn the body into a mind, a thing that is conscious, because being conscious is not a way of being extended.
For Descartes, a person consists of a human body and a human mind causally interacting with one another. For example, the intentions of a human being may cause that person’s limbs to move. In this way, the mind can affect the body. In addition, the sense organs of a human being may be affected by light, pressure, or sound, external stimuli that in turn affect the brain and, through it, the person’s mental states. Thus the body may affect the mind. Exactly how mind can affect body, and vice versa, is a central issue in the philosophy of mind, known as the mind-body problem. According to Descartes, this interaction of mind and body is peculiarly intimate. Unlike the interaction between a pilot and his ship, the connection between mind and body more closely resembles two substances that have been thoroughly mixed together.
In response to the mind-body problem arising from Descartes’s theory of substance dualism, a number of philosophers have advocated various forms of substance monism, the doctrine that there is ultimately just one kind of thing in reality. In the 18th century, Irish philosopher George Berkeley claimed there were no material objects in the world, only minds and their ideas. Berkeley thought that talk about physical objects was simply a way of organizing the flow of experience. Near the turn of the 20th century, American psychologist and philosopher William James proposed another form of substance monism. James claimed that experience is the basic stuff from which both bodies and minds are constructed.
Most philosophers of mind today are substance monists of a third type: They are materialists who believe that everything in the world is basically material, or a physical object. Among materialists, there is still considerable disagreement about the status of mental properties, which are conceived as properties of bodies or brains. Materialists who are property dualists believe that mental properties are an additional kind of property or attribute, not reducible to physical properties. Property dualists have the problem of explaining how such properties can fit into the world envisaged by modern physical science, according to which there are physical explanations for all things.
Materialists who are property monists believe that there is ultimately only one type of property, although they disagree on whether or not mental properties exist in material form. Some property monists, known as reductive materialists, hold that mental properties exist simply as a subset of relatively complex and nonbasic physical properties of the brain. Reductive materialists have the problem of explaining how the physical states of the brain can be inwardly accessible and have a subjective character, as mental states do. Other property monists, known as eliminative materialists, consider the whole category of mental properties to be a mistake. According to them, mental properties should be treated as discredited postulates of an outmoded theory. Eliminative materialism is difficult for most people to accept, since we seem to have direct knowledge of our own mental phenomena by introspection and because we use the general principles we understand about mental phenomena to predict and explain the behaviour of others.
Philosophy of mind concerns itself with a number of specialized problems. In addition to the mind-body problem, important issues include those of personal identity, immortality, and artificial intelligence.
During much of Western history, the mind has been identified with the soul as presented in Christian theology. According to Christianity, the soul is the source of a person’s identity and is usually regarded as immaterial; thus it is capable of enduring after the death of the body. Descartes’s conception of the mind as a separate, nonmaterial substance fits well with this understanding of the soul. In Descartes’s view, we are aware of our bodies only as the cause of sensations and other mental phenomena. Consequently, our personal essence is more fundamentally mental, and the preservation of the mind after death would constitute our continued existence.
The mind conceived by materialist forms of substance monism does not fit as neatly with this traditional concept of the soul. With materialism, once a physical body is destroyed, nothing enduring remains. Some philosophers think that a concept of personal identity can be constructed that permits the possibility of life after death without appealing to separate immaterial substances. Following in the tradition of 17th-century British philosopher John Locke, these philosophers propose that a person consists of a stream of mental events linked by memory. It is these links of memory, rather than a single underlying substance, that provide the unity of a single consciousness through time. Immortality is conceivable if we think of these memory links as connecting a later consciousness in heaven with an earlier one on earth.
Before psychology became established in science, it was popularly associated with extrasensory perception (ESP) and other paranormal phenomena (phenomena beyond the laws of science). Today, these topics lie outside the traditional scope of scientific psychology and fall within the domain of parapsychology. Psychologists note that thousands of studies have failed to demonstrate the existence of paranormal phenomena. Grounded in the conviction that mind and behaviour must be studied using statistical and scientific methods, psychology has become a highly respected and socially useful discipline. Psychologists now study important and sensitive topics such as the similarities and differences between men and women, racial and ethnic diversity, sexual orientation, marriage and divorce, abortion, adoption, intelligence testing, sleep and sleep disorders, obesity and dieting, and the effects of psychoactive drugs such as methylphenidate (Ritalin) and fluoxetine (Prozac).
In the last few decades, researchers have made significant breakthroughs in understanding the brain, mental processes, and behaviour. This section of the article provides examples of contemporary research in psychology: the plasticity of the brain and nervous system, the nature of consciousness, memory distortions, competence and rationality, genetic influences on behaviour, infancy, the nature of intelligence, human motivation, prejudice and discrimination, the benefits of psychotherapy, and the psychological influences on the immune system.
Psychologists once believed that the neural circuits of the adult brain and nervous system were fully developed and no longer subject to change. Then in the 1980s and 1990s a series of provocative experiments showed that the adult brain has flexibility, or plasticity - a capacity to change as a result of usage and experience.
These experiments showed that adult rats flooded with visual stimulation formed new neural connections in the brain’s visual cortex, where visual signals are interpreted. Likewise, those trained to run an obstacle course formed new connections in the cerebellum, where balance and motor skills are coordinated. Similar results with birds, mice, and monkeys have confirmed the point: Experience can stimulate the growth of new connections and mold the brain’s neural architecture.
Once the brain reaches maturity, the number of neurons does not increase, and any neurons that are damaged are permanently disabled. But the plasticity of the brain can greatly benefit people with damage to the brain and nervous system. Organisms can compensate for loss by strengthening old neural connections and sprouting new ones. That is why people who suffer strokes are often able to recover their lost speech and motor abilities.
In 1860 German physicist Gustav Fechner theorized that if the human brain were divided into right and left halves, each side would have its own stream of consciousness. Modern medicine has actually allowed scientists to investigate this hypothesis. People who suffer from life-threatening epileptic seizures sometimes undergo a radical surgery that severs the corpus callosum, a bridge of nerve tissue that connects the right and left hemispheres of the brain. After the surgery, the two hemispheres can no longer communicate with each other.
Beginning in the 1960s American neurologist Roger Sperry and others tested such split-brain patients in carefully designed experiments. The researchers found that the hemispheres of these patients seemed to function independently, almost as if the subjects had two brains. In addition, they discovered that the left hemisphere was capable of speech and language, but not the right hemisphere. For example, when split-brain patients saw the image of an object flashed in their left visual field (thus sending the visual information to the right hemisphere), they were incapable of naming or describing the object. Yet they could easily point to the correct object with their left hand (which is controlled by the right hemisphere). As Sperry’s colleague Michael Gazzaniga stated, ‘Each half brain seemed to work and function outside of the conscious realm of the other.’
Other psychologists interested in consciousness have examined how people are influenced without their awareness. For example, research has demonstrated that under certain conditions in the laboratory, people can be fleetingly affected by subliminal stimuli, sensory information presented so rapidly or faintly that it falls below the threshold of awareness. (Note, however, that scientists have discredited claims that people can be importantly influenced by subliminal messages in advertising, rock music, or other media.) Other evidence for influence without awareness comes from studies of people with a type of amnesia that prevents them from forming new memories. In experiments, these subjects are unable to recognize words they previously viewed in a list, but they are more likely to use those words later in an unrelated task. In fact, memory without awareness is normal, as when people come up with an idea they think is original, only later to realize that they had inadvertently borrowed it from another source.
Cognitive psychologists have often likened human memory to a computer that encodes, stores, and retrieves information. It is now clear, however, that remembering is an active process and that people construct and alter memories according to their beliefs, wishes, needs, and information received from outside sources.
Without realizing it, people sometimes create memories that are false. In one study, for example, subjects watched a slide show depicting a car accident. They saw either a ‘STOP’ sign or a ‘YIELD’ sign in the slides, but afterward they were asked a question about the accident that implied the presence of the other sign. Influenced by this suggestion, many subjects recalled the wrong traffic sign. In another study, people who heard a list of sleep-related words (bed, yawn) or music-related words (jazz, instrument) were often convinced moments later that they had also heard the words sleep or music—words that fit the category but were not on the list. In a third study, researchers asked college students to recall their high-school grades. Then the researchers checked those memories against the students’ actual transcripts. The students recalled most grades correctly, but most of the errors inflated their grades, particularly when the actual grades were low.
When scientists distinguish between human beings and other animals, they point to our larger cerebral cortex (the outer part of the brain) and to our superior intellect - as seen in the abilities to acquire and store large amounts of information, solve problems, and communicate through the use of language.
In recent years, however, those studying human cognition have found that people are often less than rational and accurate in their performance. Some researchers have found that people are prone to forgetting, and worse, that memories of past events are often highly distorted. Others have observed that people often violate the rules of logic and probability when reasoning about real events, as when gamblers overestimate the odds of winning in games of chance. One reason for these mistakes is that we commonly rely on cognitive heuristics, mental shortcuts that allow us to make judgments that are quick but often in error. To understand how heuristics can lead to mistaken assumptions, imagine offering people a choice between two lottery tickets, each containing six numbers drawn from the pool 1 through 40. Offered the tickets 6-39-2-10-24-30 and 1-2-3-4-5-6, most people select the first, because it has the appearance of randomness. Yet out of the 3,838,380 possible combinations, both sequences are equally likely to win.
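The lottery figure can be checked directly: the number of ways to choose six numbers from a pool of 40, ignoring order, is the binomial coefficient “40 choose 6,” and every specific combination, patterned or not, has the same chance of winning. A quick check in Python (the code is illustrative; only the numbers come from the text):

```python
import math

# Number of distinct 6-number tickets drawn from a pool of 40.
combos = math.comb(40, 6)
print(combos)  # 3838380

# Any single ticket, "random-looking" or not, wins with equal probability.
p = 1 / combos
print(f"{p:.10f}")
```

The ordered-looking ticket 1-2-3-4-5-6 is exactly one of those 3,838,380 combinations, no more and no less likely than any other.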
One of the oldest debates in psychology, and in philosophy, concerns whether individual human traits and abilities are predetermined from birth or due to one’s upbringing and experiences. This debate is often termed the nature-nurture debate. A strict genetic (nature) position states that people are predisposed to become sociable, smart, cheerful, or depressed according to their genetic blueprint. In contrast, a strict environmental (nurture) position says that people are shaped by parents, peers, cultural institutions, and life experiences.
Research shows that the more genetically related a person is to someone with schizophrenia, the greater the risk that person has of developing the illness. For example, children of one parent with schizophrenia have a 13 percent chance of developing the illness, whereas children of two parents with schizophrenia have a 46 percent chance of developing the disorder.
Researchers can estimate the role of genetic factors in two ways: (1) twin studies and (2) adoption studies. Twin studies compare identical twins with fraternal twins of the same sex. If identical twins (who share all the same genes) are more similar to each other on a given trait than are same-sex fraternal twins (who share only about half of the same genes), then genetic factors are assumed to influence the trait. Other studies compare identical twins who are raised together with identical twins who are separated at birth and raised in different families. If the twins raised together are more similar to each other than the twins raised apart, childhood experiences are presumed to influence the trait. Sometimes researchers conduct adoption studies, in which they compare adopted children to their biological and adoptive parents. If these children display traits that resemble those of their biological relatives more than their adoptive relatives, genetic factors are assumed to play a role in the trait.
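One common way to turn the twin comparison into a numerical estimate, not named in the text above, so treat it as an illustrative assumption, is Falconer’s formula: heritability is roughly twice the difference between the identical-twin and fraternal-twin trait correlations. A minimal sketch, with made-up correlations:

```python
def falconer_heritability(r_identical, r_fraternal):
    """Estimate heritability as twice the difference between the
    identical-twin and same-sex fraternal-twin trait correlations."""
    return 2 * (r_identical - r_fraternal)

# Hypothetical correlations, chosen only for illustration.
h2 = falconer_heritability(r_identical=0.85, r_fraternal=0.60)
print(round(h2, 2))  # 0.5
```

On these invented figures, about half of the trait variation would be attributed to genetic factors, with the remainder left to environmental influences.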
In recent years, several twin and adoption studies have shown that genetic factors play a role in the development of intellectual abilities, temperament and personality, vocational interests, and various psychological disorders. Interestingly, however, this same research indicates that at least 50 percent of the variation in these characteristics within the population is attributable to factors in the environment. Today, most researchers agree that psychological characteristics spring from a combination of the forces of nature and nurture.
Helpless to survive on their own, newborn babies nevertheless possess a remarkable range of skills that aid in their survival. Newborns can see, hear, taste, smell, and feel pain; vision is the least developed sense at birth but improves rapidly in the first months. Crying communicates their need for food, comfort, or stimulation. Newborns also have reflexes for sucking, swallowing, grasping, and turning their head in search of their mother’s nipple.
In 1890 William James described the newborn’s experience as ‘one great blooming, buzzing confusion.’ However, with the aid of sophisticated research methods, psychologists have discovered that infants are smarter than was previously known.
A period of dramatic growth, infancy lasts from birth to around 18 months of age.
To learn about the perceptual world of infants, researchers measure infants’ head movements, eye movements, facial expressions, brain waves, heart rate, and respiration. Using these indicators, psychologists have found that shortly after birth, infants show a distinct preference for the human face over other visual stimuli. Also suggesting that newborns are tuned in to the face as a social object is the fact that within 72 hours of birth, they can mimic adults who purse the lips or stick out the tongue - a rudimentary form of imitation. Newborns can distinguish between their mother’s voice and that of another woman. And at two weeks old, nursing infants are more attracted to the body odour of their mother and other breast-feeding females than to that of other women. Taken together, these findings show that infants are equipped at birth with certain senses and reflexes designed to aid their survival.
In 1905 French psychologist Alfred Binet and colleague Théodore Simon devised one of the first tests of general intelligence. The test sought to identify French children likely to have difficulty in school so that they could receive special education. An American version of Binet’s test, the Stanford-Binet Intelligence Scale, is still used today.
In 1905 French psychologist Alfred Binet devised the first major intelligence test for the purpose of identifying slow learners in school. In doing so, Binet assumed that intelligence could be measured as a general intellectual capacity and summarized in a numerical score, or intelligence quotient (IQ). Consistently, testing has revealed that although each of us is more skilled in some areas than in others, a general intelligence underlies our more specific abilities.
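The original “intelligence quotient” was literally a quotient: mental age divided by chronological age, multiplied by 100. (This ratio definition is associated with later work building on Binet’s test, notably by William Stern and Lewis Terman, rather than with Binet himself, and modern tests use deviation scores instead.) A minimal sketch of the classic formula:

```python
def ratio_iq(mental_age, chronological_age):
    """Classic ratio IQ: (mental age / chronological age) x 100."""
    return round(100 * mental_age / chronological_age)

# A child performing at a 10-year-old level at age 8:
print(ratio_iq(10, 8))  # 125

# A child performing exactly at age level scores 100 by definition:
print(ratio_iq(8, 8))   # 100
```

A score of 100 thus marks average performance for one’s age, with the “slow learners” Binet sought to identify falling well below it.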
Intelligence tests often play a decisive role in determining whether a person is admitted to college, graduate school, or professional school. Thousands of people take intelligence tests every year, but many psychologists and education experts question whether these tests are an accurate way of measuring who will succeed or fail in school and later in life. In a 1998 Scientific American article, psychology and education professor Robert J. Sternberg of Yale University in New Haven, Connecticut, presented evidence against conventional intelligence tests and proposed several ways to improve testing.
Today, many psychologists believe that there is more than one type of intelligence. American psychologist Howard Gardner proposed the existence of multiple intelligences, each linked to a separate system within the brain. He theorized that there are seven types of intelligence: linguistic, logical-mathematical, spatial, musical, bodily-kinesthetic, interpersonal, and intrapersonal. American psychologist Robert Sternberg suggested a different model of intelligence, consisting of three components: analytic (‘school smarts,’ as measured in academic tests), creative (a capacity for insight), and practical (‘street smarts,’ or the ability to size up and adapt to situations).
Psychologists from all branches of the discipline study the topic of motivation, an inner state that moves an organism toward the fulfilment of some goal. Over the years, different theories of motivation have been proposed. Some theories state that people are motivated by the need to satisfy physiological needs, whereas others state that people seek to maintain an optimum level of bodily arousal (not too little and not too much). Still other theories focus on the ways in which people respond to external incentives such as money, grades in school, and recognition. Motivation researchers study a wide range of topics, including hunger and obesity, sexual desire, the effects of reward and punishment, and the needs for power, achievement, social acceptance, love, and self-esteem.
In 1954 American psychologist Abraham Maslow proposed that all people are motivated to fulfil a hierarchical pyramid of needs. At the bottom of Maslow’s pyramid are needs essential to survival, such as the needs for food, water, and sleep. The need for safety follows these physiological needs. According to Maslow, higher-level needs become important to us only after our more basic needs are satisfied. These higher needs include the need for love and belongingness, the need for esteem, and the need for self-actualization (in Maslow’s theory, a state in which people realize their greatest potential).
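Maslow’s claim that higher needs matter only once lower ones are met can be read as a simple priority rule: motivation is dominated by the lowest unsatisfied level. A toy sketch (the level names follow the text; the data structure and function are illustrative assumptions, not part of Maslow’s theory):

```python
# Maslow's levels, ordered from most basic to highest.
LEVELS = ["physiological", "safety", "love/belongingness",
          "esteem", "self-actualization"]

def current_motivation(satisfied):
    """Return the lowest level not yet satisfied; on Maslow's
    account, that level dominates a person's motivation."""
    for level in LEVELS:
        if level not in satisfied:
            return level
    return "self-actualization"  # all lower needs are met

print(current_motivation(set()))                        # physiological
print(current_motivation({"physiological", "safety"}))  # love/belongingness
```

The ordering, not the individual needs, does the explanatory work: a starving person, on this view, is not preoccupied with esteem.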
One of the most tenacious social problems of modern times is prejudice, the negative evaluation of others based solely on their membership in a particular group. Social psychologists once believed that prejudice was caused by competition among racial and ethnic groups for valuable but limited resources. However, this explanation did not account for the fact that people throughout the world harbour deep prejudices against groups that pose no realistic threat to them.
Research now shows that prejudice arises, to some extent, as an innocent by-product of the way people think. Human beings naturally sort each other into groups based on sex, race, age, and other attributes. This process of social categorization leads people to see others as either similar to themselves or as different. There are two consequences of this process. First, once we distinguish between ‘us’ and ‘them,’ we begin to assume that ‘they’ are all alike. This belief makes it easy to view others who are different in stereotyped ways. Second, research suggests that people needing a boost in self-esteem are often motivated to belittle ‘them’ in order to feel better about ‘us.’
Psychotherapy is an important form of treatment for a host of psychological problems, including low self-esteem, social problems, anxiety disorders, and substance abuse. But is psychotherapy effective? For years, clinical psychologists have debated the assumed benefits of psychotherapy. Many studies have compared psychotherapy to various drug treatments or to no treatment at all. By statistically combining hundreds of these studies, researchers have confirmed that overall, psychotherapy is better than no treatment at all. These studies have shown that most patients who improve with psychotherapy do so within six months of beginning treatment.
Surprisingly, these studies also indicate that all major types of psychotherapy, despite differences in theoretical orientation and technique, are about equally effective. Psychologists theorize that despite surface differences, all psychotherapies have in common three factors that help to promote change: a supportive and trusting relationship, an opportunity to open up and talk freely, and positive expectations for improvement.
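The “statistical combining” of hundreds of studies mentioned above is meta-analysis: each study’s result is expressed as a standardized effect size, and the sizes are averaged, typically weighted by sample size. A minimal sketch with invented numbers (the specific effect sizes and weights are illustrative, not taken from the research described):

```python
def combined_effect(effects, weights):
    """Weight each study's effect size (e.g. by its sample size)
    and return the weighted mean across studies."""
    total = sum(weights)
    return sum(e * w for e, w in zip(effects, weights)) / total

# Hypothetical therapy-vs-no-treatment effect sizes and sample sizes.
effects = [0.9, 0.6, 0.8, 0.7]
weights = [40, 120, 60, 80]
print(round(combined_effect(effects, weights), 2))  # 0.71
```

Weighting by sample size lets large, precise studies count for more than small ones, which is how a scatter of individual findings can yield a single stable estimate of psychotherapy’s overall benefit.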
The immune system is a complex surveillance system that fights bacteria, viruses, and other foreign substances that invade the body. This defence relies on the actions of specialized white blood cells called lymphocytes, which circulate through the bloodstream and secrete chemical antibodies. Scientists have discovered that the immune system is linked to other systems of the body, including the brain and nervous system. Psychoneuroimmunology is the study of the relationship between psychological influences, the nervous system, and the immune system.
Researchers in this field have found that psychological factors such as stress can influence immune cell activity and increase vulnerability to physical illness. In controlled animal experiments, rats exposed to overcrowding, noise, or inescapable shocks—and primates separated from social companions—exhibit a significant drop in immune cell activity compared to unstressed animals. In addition, studies on humans have shown that immune cell activity changes in response to divorce, the death of a spouse, loss of employment, and other negative life events. This research helps to explain why stress increases the risk of illnesses ranging from the common cold to certain forms of cancer. It has also sparked interest in how optimism, social support, and other psychological factors can be used to protect the body.
Because the field of psychology is so diverse, psychologists work in a wide range of specialty areas. About half of psychologists with a Ph.D. degree are clinical or counselling psychologists who treat people with psychological problems or conduct research on mental disorders. Other psychologists specialize in developmental psychology, educational psychology, school psychology, social psychology, health psychology, cognitive psychology, biopsychology, or other areas.
Psychologists work in a variety of employment settings. Many work in colleges, universities, and professional schools. Working at an educational institution enables a psychologist to pursue several interests at once. For example, psychology professors will often combine teaching, research, and counselling. A large number of psychologists work in hospitals, clinics, and mental health centres. School psychologists usually work in elementary or secondary schools. Other psychologists work for businesses, government agencies, or other organizations. For example, large corporations and consulting firms often employ industrial-organizational psychologists to provide advice about employee training, hiring practices, and worker morale and productivity. Finally, many psychologists are self-employed as therapists or consultants in private practice.
A person who plans a career in psychology must first obtain a bachelor’s degree at a college or university. An undergraduate major in psychology is helpful preparation for graduate coursework in psychology but is not required. To become a psychologist, a person must attend graduate school and obtain either a master’s degree or a doctoral degree. A master’s degree typically requires two to three years of graduate work. Career opportunities in psychology are greatest for those with a doctoral degree. For this reason, most psychologists obtain a doctoral degree, usually a Ph.D. (doctor of philosophy). Clinical psychologists may obtain a Psy.D. (doctor of psychology) instead, and many counselling psychologists choose to earn an Ed.D. (doctor of education) in counselling. These doctoral degrees typically require four to six years of graduate study. In addition, clinical and counselling psychologists often complete a one-year internship at a psychological clinic following graduate school. Most states require a licensing exam for psychologists who practice as psychotherapists or counsellors.
As a discipline, psychology is growing in size. From 1980 to 1991 the number of psychologists worldwide doubled to about 500,000. The United States, Canada, Western Europe, and Australia are home to the largest number of psychologists. In most developing nations psychology is still in its infancy. China, with its 1.2 billion people, has fewer than 5,000 psychologists and only eight psychology departments. In the countries of sub-Saharan Africa, fewer than 20 universities had a psychology department by the mid-1980s. In many developing countries, the growth of psychology is stunted by insufficient funding, political instability, a shortage of qualified teachers, poor career prospects for those who enter the field, and a lack of legal or social recognition for the profession.
According to the US Department of Education, psychology is the second-most popular college major, behind business administration. There are now more women entering the field than ever before. In 1997, 44 percent of psychologists with Ph.D. degrees were women, compared with 20 percent in 1973. This proportion is rapidly increasing; in 1996 women earned 69 percent of doctoral degrees in psychology awarded in the United States. Similar trends have occurred in Canada. In 1975 women made up only 22 percent of the membership of the Canadian Psychological Association (CPA), the main professional organization for Canadian psychologists. By 1995, 49 percent of CPA members were women. About 68 percent of Canadian doctoral students in psychology in 1995 were female.
Racial and ethnic minorities are underrepresented in psychology. Surveys indicate that most psychologists in the United States are white, although more members of minority groups are entering psychology than in the past. In 1997, 8.5 percent of doctoral-level psychologists in the United States were minorities, up from only 2 percent in 1973.
The chief professional association for psychologists in the United States is the American Psychological Association (APA), which was founded in 1892. The APA now consists of approximately 50 specialty divisions dedicated to the study of topics such as addictions, military problems, religion, families, peace and conflict, women’s issues, hypnosis, and aging. A second major professional organization is the American Psychological Society (APS), which was founded in 1988 to represent the interests of research psychologists. The Canadian Psychological Association, established in 1939, maintains about 25 specialty sections on various topics in psychology.
Cognition can be described as the act or process of knowing. Cognition includes attention, perception, memory, reasoning, judgment, imagining, thinking, and speech. Attempts to explain the way in which cognition works are as old as philosophy itself; the term, in fact, comes from the writings of Plato and Aristotle. With the advent of psychology as a discipline separate from philosophy, cognition has been investigated from several viewpoints.
An entire field, cognitive psychology, has arisen since the 1950s. It studies cognition mainly from the standpoint of information handling. Parallels are stressed between the functions of the human brain and computer concepts such as the coding, storing, retrieving, and buffering of information. The actual physiology of cognition is of little interest to cognitive psychologists, but their theoretical models of cognition have deepened understanding of memory, psycholinguistics, and the development of intelligence.
Social psychologists since the mid-1960s have written extensively on the topic of cognitive consistency—that is, the tendency of a person's beliefs and actions to be logically consistent with one another. When cognitive dissonance, or the lack of such consistency, arises, the person unconsciously seeks to restore consistency by changing his or her behaviour, beliefs, or perceptions. The manner in which a particular individual classifies cognitions in order to impose order has been termed cognitive style.
Over the years, cognitive psychologists have discovered that mental activities that seem simple and natural are, in fact, extraordinarily complex. For example, most children have no trouble learning language from their parents. But how do young children decode the meanings of sounds and grasp the basic rules of grammar? Why do children learn language more easily and rapidly than adults? Explaining these puzzles has proven very difficult, and attempts to duplicate true language ability in machines have failed. Even the most advanced computers have trouble understanding the meaning of a simple story or conversation. Cognitive psychologists have found similar complexity in other mental processes.
Cognitive psychology is one field within cognitive science, an interdisciplinary approach to the study of the human mind. Other fields in cognitive science include anthropology, linguistics, neuroscience (the study of the brain and nervous system), and artificial intelligence. Cognitive neuroscience, or neurocognition, combines cognitive psychology and neuroscience.
Cognitive psychology is sometimes confused with cognitive therapy, a type of psychotherapy used to treat depression and other mental disorders. Cognitive therapy falls within the realm of clinical psychology, the branch of psychology devoted to the study and treatment of mental disorders.
Curiosity about the nature of knowledge and the mind dates back as far as the first recorded philosophers. The Greek philosopher Plato held that the seat of knowledge was in the brain, but his pupil Aristotle believed that knowledge was located in the heart. Many others since have wondered about how we come to know and understand our world, how we remember or represent information about the world, and how we arrive at decisions.
Although Renaissance philosophers and theologians actively debated the source of knowledge and the nature of sense perception, the scientific study of cognition did not begin until the late 19th century. In 1879 German physiologist Wilhelm Wundt founded the first psychological laboratory, at the University of Leipzig in Germany. Reasoning that people are the best source of information about their own thoughts, Wundt set about studying consciousness through the method of introspection. This technique involved asking people to observe and report what occurred in their minds as they engaged in various mental tasks. In 1885 German psychologist Hermann Ebbinghaus conducted the first experiments on memory and forgetting. In the United States, psychologist William James used introspection to theorize about the structure of memory and consciousness, and in 1890 he defined psychology as ‘the science of mental life.’ In 1896 American psychologist Mary Whiton Calkins invented an important technique for studying memory retention.
In the early 1900s, however, with psychology becoming more distinct from philosophy and physiology, attention shifted away from questions about mental life to questions about behaviour. This shift occurred because many psychologists thought that it was impossible to study mental life using scientific methods. For example, critics of introspection labelled it subjective and speculative, and even its supporters found that people were unable to report on their own mental states in much detail. Behaviour, on the other hand, could be observed, measured, and documented. American psychologist John B. Watson, considered the founder of behaviourism, contended that all human behaviour could be explained without reference to a person’s thoughts, feelings, or mental states. Another leading behaviourist, American psychologist B. F. Skinner, was adamant in his belief that even the most advanced forms of human learning, such as language acquisition, could be explained in terms of the basic principles of conditioning.
In the 1950's American linguist Noam Chomsky proposed that the human brain is especially constructed to detect and reproduce language and that the ability to form and understand language is innate to all human beings. According to Chomsky, young children learn and apply grammatical rules and vocabulary as they are exposed to them and do not require initial formal teaching.
Landmark developments in the late 1940s and throughout the 1950s revived hope for the scientific study of mental life and fuelled a ‘cognitive revolution’ in psychology. In 1949 Canadian psychologist Donald O. Hebb published pioneering work, based in part on animal studies, that theorized about the biological basis of memory and other psychological phenomena. In 1956 American psychologist George Miller showed that there are limits to the amount of information that people can hold in short-term memory at any one time. In the late 1950s American linguist Noam Chomsky refuted Skinner’s behaviourist explanation of language development as overly simplistic. Chomsky’s theory, which proposed that children possess an innate ability to extract meaning from speech sounds, stimulated further interest in cognitive psychology.
The development of digital computers introduced new metaphors for thinking about human mental operations. Philosophers had offered such mechanical metaphors many times before, likening the mind to a blank slate (tabula rasa), a black box, and even a mechanical robot. But the computer metaphor was more powerful because it provided both a way for psychologists to conceptualize their observations and a common language for theorists to communicate their ideas. Computer terms such as input, output, processing, information storage, and information retrieval seemed to resemble the ‘real’ mental activities of people. Thus, cognitive psychologists began describing humans as information processors.
The information-processing model sees human cognition as a series of stages through which information passes sequentially. In this model, information gets into our brain (is encoded), is retained briefly or for longer periods of time (short-term or long-term storage), and is later reactivated (retrieved) for further processing or use.
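The stage model described above can be sketched as a toy program. This is an illustrative analogy only, not a claim about how the brain actually works; the class, the capacity constants, and the cue-based retrieval scheme are all invented for this example.

```python
# Toy sketch of the information-processing stage model: encode into a
# limited-capacity short-term store, transfer rehearsed items to a
# durable long-term store, and retrieve them later by cue.

class StageModel:
    SHORT_TERM_SPAN = 7  # working memory holds only a handful of items

    def __init__(self):
        self.short_term = []   # temporary conscious store
        self.long_term = {}    # durable store, keyed by retrieval cue

    def encode(self, stimulus):
        """Information enters short-term storage."""
        self.short_term.append(stimulus)
        # limited capacity: the oldest item is displaced
        if len(self.short_term) > self.SHORT_TERM_SPAN:
            self.short_term.pop(0)

    def rehearse(self, cue, stimulus):
        """Rehearsed items are transferred into long-term storage."""
        if stimulus in self.short_term:
            self.long_term[cue] = stimulus

    def retrieve(self, cue):
        """Later reactivation (retrieval) from long-term storage."""
        return self.long_term.get(cue)

memory = StageModel()
fact = "Wundt founded the first psychology lab in 1879"
memory.encode(fact)
memory.rehearse("first lab", fact)
print(memory.retrieve("first lab"))
```

The point of the sketch is the flow, not the mechanism: information that is never rehearsed is eventually displaced from the short-term store and cannot be retrieved.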
With the development of more-sophisticated computer systems in the 1980s and 1990s, cognitive psychologists extended the computer metaphor to new models of cognition. These models rejected the idea of information processing as linear and sequential and instead proposed that the brain is capable of parallel processing, in which multiple operations are carried out simultaneously. One such model, called the parallel distributed processing model of cognition, reflects findings in neuroscience that suggest linear processing cannot account for the recorded speed of human memory retrieval.
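The contrast with linear processing can be made concrete with a minimal sketch of the parallel distributed processing idea: knowledge is stored as a pattern of connection weights, and every output unit is updated from the same input pattern in a single step rather than one item at a time. The weights and input pattern below are made up purely for illustration.

```python
# Minimal PDP-style sketch: each output unit computes a weighted sum
# of the whole input pattern "simultaneously"; the representation is
# distributed, since no single weight encodes the answer by itself.

def forward(inputs, weights):
    # one parallel step: all output activations computed from all inputs
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

weights = [
    [0.5, -0.2, 0.1],   # connections feeding output unit 1
    [0.3,  0.8, -0.5],  # connections feeding output unit 2
]
pattern = [1.0, 0.0, 1.0]
print(forward(pattern, weights))  # activations of both units at once
```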
Although the information-processing model is a powerful tool for guiding the study of cognitive processes, many psychologists argue that it falls short of capturing the full richness of people’s cognitive experiences. Describing the act of remembering as a process of storage and retrieval, for example, neglects the subjective experience of remembering. Another criticism is that information-processing theory may not reflect how the brain actually works. Newer models, such as the parallel distributed processing model, try to address this criticism by drawing on studies of brain structure and function. Psychologists continue to debate the adequacy of the information-processing model, but its influence likely will last well into the 21st century.
Like other psychologists, cognitive psychologists use a wide variety of research methods. Methods particularly relevant to cognitive psychology can be organized into three general categories: (1) self-reports, or people’s descriptions of their experiences; (2) reaction-time measurements; and (3) methods that measure biological factors such as brain activity.
One way of researching cognition is to conduct experiments in which the participants are asked to report their experiences. For example, an experiment on pattern recognition might present people with various visual stimuli and ask them to name what they see. An experiment on memory ability might require participants to view a list of words, then either say what they can remember (recall) or select the words they saw from a larger list (recognition). Self-report measures sometimes include people’s descriptions of their own intuitions about how their minds work. For example, people might report on the mental imagery they experience as they listen to a story or to music.
One common way that psychologists study thinking and other cognitive processes is to measure how fast people can make decisions, solve problems, and distinguish between different stimuli. In typical laboratory studies, people might be asked to name the colours in which words are printed, to scan for a special character in an array of letters, or to respond as quickly as possible about whether statements are true or false.
For a demonstration of how reaction times can illustrate mental processes, look at the accompanying illustration, entitled ‘Stroop Test.’ First, look at the left side of the illustration and, beginning with the first column, name aloud each colour as fast as you can. Next, look at the right side of the illustration and again name the colours in which the words are printed as fast as you can. Did you take longer to finish the second task? Almost all people find that the words interfere with their ability to name the colours. People do not need to read colour names before naming the printed colours, but they seem unable to stop themselves. This test suggests that reading is an automatic process and that processing the word meanings interferes with the task of colour naming.
Computers allow psychologists to measure reaction time in very small units, typically in milliseconds (thousandths of a second). For example, experiments have shown that people can recognize some faces in about 300 milliseconds, or less than one-third of a second. Such precise measurements allow scientists to test hypotheses about how the brain processes, stores, and retrieves information.
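The idea of timing a response in milliseconds can be sketched in a few lines. This is only an illustration of the measurement logic; real laboratory software controls stimulus display and input timing far more precisely, and the `timed_trial` helper and the simulated participant below are invented for this example.

```python
# Sketch of a millisecond reaction-time measurement using a
# high-resolution monotonic clock.
import time

def timed_trial(respond):
    """Time a single response and report it in milliseconds."""
    start = time.perf_counter()
    answer = respond()  # the participant's response would go here
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return answer, elapsed_ms

# simulate a participant who takes roughly 50 ms to answer "red"
answer, ms = timed_trial(lambda: (time.sleep(0.05), "red")[1])
print(f"answered {answer!r} in {ms:.1f} ms")
```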
Positron emission tomography (PET) scan of the brain shows the activity of brain cells in the resting state and during three types of auditory stimulation. PET uses radioactive substances introduced into the brain to measure such brain functions as cerebral metabolism, blood flow and volume, oxygen use, and the formation of neurotransmitters. This imaging method collects data from many different angles, feeding the information into a computer that produces a series of cross-sectional images.
Advances in medical technology have made possible some of the most exciting developments in the history of cognitive psychology. Prior to the 1970s, it was virtually impossible to measure the activity of the living human brain without cutting open the head. The invention of sophisticated brain imaging techniques means we can now view pictures of the brain ‘in action.’ These techniques include computed tomography (CT), positron emission tomography (PET), magnetic resonance imaging (MRI), and functional magnetic resonance imaging (functional MRI). By observing patterns of brain activity as a person engages in various mental activities, researchers have gained new insights into memory, perception, language, and other processes.
Scientists use a number of other methods to measure brain and nervous system activity. Scalp electroencephalography (EEG) measures the general electrical activity of the brain by means of electrodes taped to the scalp. Researchers have found that certain EEG readings correlate with particular states of consciousness, such as arousal, relaxed wakefulness, sleep, and deep sleep. Another technique, electrooculography, measures the movements of the eyes and is often used in studies of sleep and dreaming. Studies of cognitive processes in animals may use invasive research methods, such as stimulating parts of the brain with a probe or removing part of the brain.
One of the broadest branches of psychology, cognitive psychology encompasses dozens of topics of study. This article briefly describes some of the most important areas in the field: perception, learning and memory, thinking and reasoning, and language.
Studies in perception try to understand how people interpret sensory information to make sense out of their world. The human sense organs receive information about the world in the form of physical energy - for example, light waves and sound waves. This energy is converted by our sensory system into electrical impulses that travel to the brain. Perception is the mental process that translates these impulses into things we can recognize and understand: people, objects, places, sounds, tastes, and smells.
Scientists have long considered the nature of consciousness without producing a fully satisfactory definition. In the early 20th century American philosopher and psychologist William James suggested that consciousness is a mental process involving both attention to external stimuli and short-term memory. Later scientific explorations of consciousness mostly expanded upon James’s work. In a 1997 special issue of Scientific American, Nobel laureate Francis Crick, who helped determine the structure of DNA, and fellow biophysicist Christof Koch explained how experiments on vision might deepen our understanding of consciousness.
Perception is such a natural, effortless process that most people are not even aware of it. But to cognitive psychologists, perception is one of the great mysteries of the mind. They wonder about questions such as ‘How do we perceive the world in three dimensions even though the images projected into the eyes are two-dimensional?’ ‘Why do we perceive melodies in music, rather than a series of disconnected notes?’ ‘What causes visual illusions?’
One area of study in perception is pattern recognition, the ability to recognize familiar forms in a sea of sensory information. For example, recognizing a friend’s face in a crowd is a form of pattern recognition. Another area of interest in perception concerns the difference between perceiving and imagining. Some cognitive psychologists propose that perceiving and imagining are often quite similar, but others disagree with this point of view.
Many people think of learning as something that occurs in a classroom. To psychologists, the word learning refers more generally to how we acquire knowledge, develop new behaviours, and adapt to life’s challenges. Researchers have discovered many general principles that govern basic learning. For example, two common forms of learning are operant conditioning (the shaping of behaviour through reward and punishment) and learning through observation. Cognitive psychologists are particularly interested in complex forms of learning, such as learning languages or advanced mathematics.
Learning is tightly interwoven with memory, the process of storing and retrieving information in the brain. Memory plays a central role in nearly all mental activities. More than just a fact-retrieval system, memory allows us to make inferences, solve unfamiliar problems, and relate objects and events to prior knowledge. Memory is one of the most active areas of research in cognitive psychology. Researchers investigate questions such as, What is the capacity of memory? Why do people forget information? What parts of the brain are involved in memory? How is knowledge represented and organized in memory? What factors influence the accuracy of memories?
Most psychologists distinguish at least three systems or components of memory. The first is sensory memory, in which information is held by the sensory system for only an instant. Working memory, also called short-term memory, holds information in consciousness temporarily for immediate manipulation and use. Long-term memory is what most people think of as memory. It stores immense volumes of information for long periods of time.
Do babies have a basic ability to count? In one test of five-month-old infants, American psychologist Karen Wynn placed two Mickey Mouse dolls on a stage, hid the dolls behind a screen, then added another doll behind the screen as the infant watched. The screen was then removed to reveal two, not three, dolls. Infants in the study, like this five-month-old, stared longer at the incorrect outcome than when three dolls were revealed, indicating surprise at the outcome and suggesting that they expected to see three dolls. Some researchers interpret these findings as evidence that young infants have a simple understanding of quantity.
Thinking involves the mental manipulation of information for the purpose of reasoning, solving problems, making decisions and judgments, or simply imagining. Although cognitive psychologists cannot see thinking processes, they can make inferences about these processes from behaviour.
Cognitive psychologists have noted that people use a number of strategies when reasoning about a problem or decision. Often people employ deductive reasoning or inductive reasoning, two forms of logic. In deductive reasoning, people draw conclusions about specific cases from general principles that are assumed to be true. In inductive reasoning, people infer a general rule from specific cases. When making judgments or solving problems, people also frequently rely on heuristics, rules of thumb that usually lead to the correct solution but are not guaranteed to work all of the time.
Many philosophers have asserted that humans are rational thinkers who are careful and systematic in their evaluation of information. But when cognitive psychologists look carefully at the kinds of decisions people make and how they arrive at those decisions, they find that people are often less than rational. For example, imagine that you have a serious tropical disease and must decide whether to have surgery or take medication. The medication, while not particularly dangerous, is also not extremely effective. The surgery is very effective, but there is a 30 percent chance that you will die within six months after the surgery. Given this hypothetical scenario, most people choose the medication. But when the risk is phrased another way—that 70 percent of those who select surgery are still alive six months later—people are more willing to choose the dangerous procedure. The term framing effects refers to the fact that people’s decisions are heavily influenced by the way information is framed. One focus of research in decision making is how to help people avoid these effects when making difficult or life-threatening decisions.
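The two framings in the surgery scenario carry identical information; only the presentation differs. A quick arithmetic check of that equivalence:

```python
# Framing effects: "30 percent die" and "70 percent survive" describe
# exactly the same statistic, yet they elicit different choices.
mortality_frame = 0.30                  # "30 percent chance you will die"
survival_frame = 1.0 - mortality_frame  # "70 percent are still alive"
print(f"{survival_frame:.0%} survival = {mortality_frame:.0%} mortality")
```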
Of all human abilities, language is perhaps the most impressive. In spoken, written, and gestured forms, language is the primary means of communication among people. Although other animal species have evolved sophisticated systems of communication, none of these systems approaches human language in complexity. With language, we can refer to events or ideas in the past or future, talk about abstract concepts such as morality, and record the stories of human civilization.
Language is a central topic of study in cognitive psychology because it is closely connected with perception, memory, thinking, problem solving, and other mental processes. Of particular interest to psychologists is how children acquire language and why they have an easier time mastering language than adults who try to learn a second language. Many scientists believe the human brain is uniquely ‘wired’ to learn language during a critical period in infancy and early childhood. Supporters of this idea note that children all over the world achieve specific language milestones at roughly the same age. However, scholars continue to debate how much of language capacity is inborn.
Another widely debated question is whether animals other than humans have the capacity for language. Researchers have tried to answer this question by training chimpanzees and gorillas—the closest genetic relatives of the human species—to use sign language or to press symbols on a keyboard. This research has shown that apes can produce and understand simple phrases and sentences and even appreciate subtle differences in word order and sentence structure. One chimpanzee, Kanzi, has demonstrated the ability to understand spoken English sentences at the level of a 2½-year-old child. Although some scientists remain sceptical of these findings, most now agree that apes can attain a rudimentary form of language.
Other areas of research include the structure of language, how language is organized and represented in the mind, how we process and understand language, the neurological basis of language, and language disorders. Another subject of investigation concerns the relationship between language and thought. For example, is thinking merely speech that is not vocalized, or are other processes involved? How does language influence the way we think?
Psycholinguistics is the interdisciplinary study of the mental processes involved in language acquisition, production, and comprehension. Specialists in this field may come from one of various disciplines, including cognitive psychology, linguistics, neuroscience, and anthropology.
In ‘On the Suffering of the World,’ Schopenhauer developed a philosophy of pessimism that focused on the nature of the ‘will,’ a term Schopenhauer used to mean both a person’s individual desires as well as the overall essence of being alive. Schopenhauer believed that although ‘will’ was essential to life, it was also the source of endless striving and discontent. In this excerpt from Parerga und Paralipomena (1851, translated as Essays and Aphorisms), Schopenhauer contemplated the role of suffering in human life, and argued that pain was an inescapable part of life. Schopenhauer’s acceptance of human suffering reflected the influence of both Christian and Indian Buddhist religious traditions.
Until the 20th century most philosophers conceived the will as a separate faculty with which every person is born. They differed, however, about the role of this faculty in the personality makeup. For one school of philosophers, most notably represented by the German philosopher Arthur Schopenhauer, a universal will is the primary reality, and the individual's will forms part of it. In his view, the will dominates every other aspect of an individual's personality, knowledge, feelings, and direction in life. A contemporary form of Schopenhauer's theory is implicit in some forms of existentialism, such as the existentialist view expressed by the French philosopher Jean-Paul Sartre, which regards personality as the product of actions, and actions as manifestations of the will to give meaning to the universe.
Most other philosophers have regarded the will as coequal or secondary to other aspects of personality. Plato believed that the psyche is divided into three parts: reason, will, and desire. For rationalist philosophers, such as Aristotle, Thomas Aquinas, and René Descartes, the will is the agent of the rational soul in governing purely animal appetites and passions. Some empirical philosophers, such as David Hume, discount the importance of rational influences upon the will; they think of the will as ruled mainly by emotion. Evolutionary philosophers, such as Herbert Spencer, and pragmatist philosophers, such as John Dewey, conceive the will not as an innate faculty but as a product of experience evolving gradually as the mind and personality of the individual develop in social interaction.
Modern psychologists tend to accept the pragmatic theory of the will. They regard the will as an aspect or quality of behaviour, rather than as a separate faculty. It is the whole person who wills. This act of willing is manifested by (1) the fixing of attention on relatively distant goals and relatively abstract standards and principles of conduct; (2) the weighing of alternative courses of action and the taking of deliberate action that seems best calculated to serve specific goals and principles; (3) the inhibition of impulses and habits that might distract attention from, or otherwise conflict with, a goal or principle; and (4) perseverance against obstacles and frustrations in pursuit of goals or adherence to principles.
Among the common deficiencies that may lead to infirmity of will are absence of goals worth striving for or of ideals and standards of conduct worth respecting; vacillating attention; incapacity to resist impulses or to break habits; and inability to decide among alternatives or to stick to a decision, once made.
The precise determination of time rests on astronomical and atomic definitions that scientists have established with the utmost mathematical exactness.
Physicists agree that time is one of the most difficult properties of our universe to understand. Although scientists are able to describe the past and the future and demarcations such as seconds and minutes, they cannot define exactly what time is. The scientific study of time began in the 16th century with the work of the Italian physicist and astronomer Galileo Galilei. In the 17th century the English mathematician and physicist Sir Isaac Newton continued the study of time. A comprehensive explanation of time did not exist until the early 20th century, when German-born American physicist Albert Einstein proposed his theories of relativity. These theories define time as the fourth dimension of a four-dimensional world consisting not just of space but of space and time.
Several ways to measure time are in use today. Solar time is based on the rotation of Earth on its axis. It makes use of the Sun’s apparent motion across the sky to measure the duration of a day. Sidereal time is also based on Earth’s rotation, but uses the apparent motion of the ‘fixed’ stars across the sky as Earth rotates as the basis for time determination. Standard time, the familiar clock time most people use in everyday life, is based on the division of Earth’s sphere into 24 equal time zones. Dynamical time—formerly called ephemeris time—is the timescale of astronomy. Astronomers use the orbit of Earth around the Sun, as well as the orbital motions of the Moon and the other planets, to determine dynamical time. Atomic time is based on the frequency of electromagnetic waves that are emitted or absorbed by certain atoms or molecules under particular conditions. It is the most precise method for measuring time.
On January 1, 2000, people around the world celebrated the arrival of a new millennium. Some observers noted that the Gregorian calendar, which most of the world uses, actually began in AD 1 and that the new millennium truly begins in 2001. This detail failed to stem millennial festivities, but the issue shed light on the arbitrary nature of the way human beings have measured time for, . . . well . . . several millennia. The measurement of time passage probably began with the concepts of past, present, and future. Throughout history humans have used various celestial bodies—that is, the Sun, the Moon, the planets, and the stars—to measure the passage of time. Ancient peoples used the apparent motion of these bodies through the sky to determine the seasons, the length of the month, and the length of the year. The first mechanical clocks were invented in the 14th century. The pendulum clock became popular in the 1600s, when the Dutch astronomer Christiaan Huygens applied the pendulum to regulate the movement of clocks. At this point, clocks became accurate enough to record minutes as well as hours.
The use of chronometers (precision timepieces) for precise measurement of time played an important role in navigation from the mid-18th century to the 1920s by helping to determine longitude. Prior to the invention of an accurate chronometer in the mid-18th century, navigators could easily determine their latitude, but determining longitude was more difficult. If a reading of the Sun’s position was not made at precisely the noon hour, great errors in longitude could result. For example, an error of one second in time, for a ship at Earth’s equator, produces an error in longitude position of about 400 m (about 1,300 ft). Precise time measurement gained further importance with the evolution of modern industrial societies. During the late 18th century, the Industrial Revolution prompted factory work to start and stop at appointed times, thus changing the tempo of life. The growth of railroads and the use of train schedules in the mid-19th century further emphasized the need for precise timekeeping.
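The arithmetic behind that navigation figure can be sketched directly. This is an illustrative calculation, not a navigation method: it assumes the standard mean solar day of 86,400 seconds and an equatorial circumference of roughly 40,075 km, and shows that a one-second clock error corresponds to a position error of a few hundred metres at the equator.

```python
# Sketch: how a clock error translates into an east-west position error
# at the equator. Assumed values: 86,400 s per mean solar day and an
# equatorial circumference of about 40,075 km (standard figures).
EQUATOR_M = 40_075_000    # equatorial circumference in metres
SECONDS_PER_DAY = 86_400  # one mean solar day

def longitude_error_m(clock_error_s: float) -> float:
    """Metres of east-west position error per second of clock error."""
    return EQUATOR_M / SECONDS_PER_DAY * clock_error_s

# A one-second timing error shifts the computed position by roughly
# 460 m, the same order of magnitude as the ~400 m figure cited above.
print(round(longitude_error_m(1.0)))
```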
The apparent motion of the Sun across the sky has long been used as a basis for measuring time. Under solar time, at any given locality it is noon—twelve o’clock in the daytime, or midday—when the Sun reaches the highest point in the sky. Noon at any place on the surface of Earth occurs when the Sun's direct rays cross the meridian of that place. A meridian is an imaginary line that stretches from pole to pole on Earth's surface; it is also known as a line of longitude. The interval between successive passages of the Sun across the same meridian is one day, and this day is by custom divided into 24 hours. The amount of daylight in a day varies throughout the year, based on the tilt of Earth’s axis and its orientation to the Sun as the seasons change. For the same reasons, a day in solar time is not always 24 hours long. The difference in the length of the 24-hour day during different seasons of the year can amount to as much as 16 minutes. With the invention of accurate timepieces in the 17th century, this difference in the length of the day became significant. To overcome this problem scientists invented mean solar time, which is based on the motion of a hypothetical sun travelling at an even rate throughout the year.
Universal time is simply the mean solar time measured at the Greenwich meridian, which is designated 0° longitude and from which the longitudes of all points on the surface of Earth are measured. The meridian passing through the original site of the Royal Greenwich Observatory in Greenwich, England, has been recognized by international agreement since 1884 as the prime meridian. Universal time was originally called Greenwich Mean Time (GMT); the new designation replaced GMT in 1928. Universal time is used to denote solar time when an accuracy of about one second suffices.
Because mean solar time is based on the motion of a hypothetical sun, scientists had to establish a base position from which mean time is calculated. This base position is the vernal, or spring, equinox, an imaginary point in the sky that astronomers can nevertheless calculate with great accuracy; in practice, scientists define the location of the vernal equinox by reference to the positions of the ‘fixed’ stars.
Scientists use stars as reference points to measure the time it takes Earth to make one full rotation on its axis. When the Sun is used as a reference, the rotation is called a mean solar day. When scientists use a fixed star other than the Sun as a reference point, the rotation is called a sidereal day. A sidereal day is about 4 minutes shorter than the mean solar day.
Sidereal time is based on the apparent motion of the distant, ‘fixed’ stars across the sky. It has various astronomical purposes, such as predicting locations of objects in outer space. The primary unit of sidereal time is the sidereal day, which is subdivided into 24 sidereal hours. Each sidereal hour is subdivided into 60 minutes, and each minute into 60 seconds. Astronomers rely on sidereal clocks because any given star will cross the same meridian, or line of longitude, at the same sidereal time throughout the year.
According to convention, each sidereal day begins at the instant the vernal equinox crosses the prime meridian. The vernal equinox is the point on the celestial sphere at which the sun crosses the plane of the equator, moving from south to north. The celestial sphere is the apparent surface of the heavens, on which the stars appear to be fixed.
The US Naval Observatory in Washington, D.C., uses mathematical tables to calculate mean solar time from mean sidereal time. The sidereal day is almost four minutes shorter than the mean solar day, so a discrepancy exists between the total number of hours in a mean solar year and in a mean sidereal year. This discrepancy arises because Earth rotates on its axis at the same time that it revolves around the Sun. Measured against the fixed stars (the sidereal year), Earth completes one circuit of the Sun every 365 days 6 hours 9 minutes 9.54 seconds. Measured from vernal equinox to vernal equinox (the solar, or tropical, year), the circuit takes 365 days 5 hours 48 minutes 45.5 seconds. The difference between the two is 20 minutes 24.04 seconds.
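The difference quoted above can be checked by straightforward arithmetic, converting each year length to seconds and subtracting:

```python
# Check of the figures above: the sidereal year (365 d 6 h 9 min 9.54 s)
# minus the solar, or tropical, year (365 d 5 h 48 min 45.5 s).
def to_seconds(days: int, hours: int, minutes: int, seconds: float) -> float:
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds

sidereal = to_seconds(365, 6, 9, 9.54)
solar = to_seconds(365, 5, 48, 45.5)

minutes, seconds = divmod(sidereal - solar, 60)
print(int(minutes), round(seconds, 2))  # 20 minutes 24.04 seconds
```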
In 1883 an international agreement introduced the concept of standard time. Standard time was adopted to avoid the complications of adhering to railroad time schedules when each community used its own local solar time. The base position for standard time is the prime meridian. The distance east or west of Greenwich determines the standard time zone and, thus, the standard time of a particular location.
Astronomers use dynamical time for the precise study of the motion of celestial bodies. Dynamical time replaced ephemeris time in 1984, when the International Astronomical Union (IAU) updated the Astronomical Almanac. Scientists introduced ephemeris time in 1940 and selected the orbital position of Earth around the Sun as the standard by which to define the numerical measure of ephemeris time. In the 1950s the IAU decided that ephemeris time could be based on the orbital position of any planet or satellite. Time would be determined by comparing the orbital position of a particular planet or satellite (natural or artificial) at a particular time to an ephemeris. An ephemeris is a table of orbital positions of a planet or satellite mapped over a period of time.
The annual revolution of Earth around the Sun is the basis for dynamical time, and the base position of measure (as in sidereal time) is the vernal equinox. When the greatest degree of accuracy is required in computing the positions of a planet or star, astronomers use dynamical time, because neither mean solar time nor mean sidereal time is sufficiently accurate, as the motion of Earth on its axis is not regular and even. Variations in the rate of Earth’s rotation amount to 1 or 2 seconds per year.
On December 29, 1999, the United States National Institute of Standards and Technology unveiled the NIST F-1, the most accurate clock in the world (a distinction it shares with a similar device located in Paris, France). NIST F-1, an atomic cesium fountain clock, replaces the NIST-7, which served as the primary United States time standard from 1993 to the end of 1999. The new atomic timekeeper is so accurate that it could run for nearly 20 million years without gaining or losing a single second. The clock is called a fountain clock because it measures the light emitted by super-cooled cesium atoms as they fall through a microwave cavity.
Atomic time is the time scale of physics. Scientists use atomic time when they require exceptionally precise measurements of time intervals relating to physical phenomena. Clocks became more accurate and precise through the centuries, and with the introduction of atomic clocks—specifically, the construction of a high-precision cesium atomic clock in 1955—extremely accurate measurement of time became possible. Early mechanical clocks varied by several minutes each day. In the 1920s, vibrating quartz crystals were accurate to a few ten-thousandths of a second per day. The cesium atom clocks used in the 1980s lost less than a second in 3,000 years. In the 1990s the National Institute of Standards and Technology (NIST) in the United States established an atomic clock—the NIST-7, also a cesium clock—that is accurate to a single second over 3 million years. The electronic components of atomic clocks are regulated by the frequency of radiation emitted or absorbed by a particular atom or molecule.
Until 1955 astronomers and scientists calculated the scientific standard of time—the second—based on Earth's period of rotation. They defined the second as 1/86,400 of a mean solar day. When scientists realized that Earth's rate of rotation is irregular, a redefinition of the second became necessary. In 1955 the IAU defined the second as 1/31,556,925.9747 of the solar year that was in progress at noon on December 31, 1899. The International Committee on Weights and Measures adopted this definition in 1956. Since 1967 the official length of a second in the International System of Units (SI) has been defined by atomic standards: a second is equal to 9,192,631,770 oscillations, or periods, of the radiation corresponding to the transition between two hyperfine (closely spaced) energy states of the cesium-133 atom.
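The atomic definition of the second makes elapsed time a simple matter of counting: dividing a count of cesium-133 oscillations by the defining frequency yields the interval in SI seconds. A minimal sketch of that relationship:

```python
# Sketch of the SI definition above: one second is 9,192,631,770 periods
# of the radiation from the cesium-133 hyperfine transition, so elapsed
# time is the oscillation count divided by that defining frequency.
CS133_HZ = 9_192_631_770  # oscillations per SI second

def elapsed_seconds(oscillation_count: int) -> float:
    return oscillation_count / CS133_HZ

print(elapsed_seconds(9_192_631_770))       # 1.0
print(elapsed_seconds(9_192_631_770 * 60))  # 60.0
```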
International time zones define the time of day in places around the world with respect to the standard time kept in Greenwich, England, a city that lies on the prime meridian. Each time zone spans about 15 degrees of longitude, but actual zone lines vary to account for political boundaries and economic considerations.
For the purposes of standard time, Earth is divided into 24 standard time zones. The time zones extend from the North Pole to the South Pole, and within each zone the time is the same throughout. Within each time zone, local noon corresponds approximately to the time at which the Sun crosses the central meridian, or longitude, of that zone.
The distance east or west of the Greenwich meridian determines different time zones. According to the scientific model of standard time, each standard time zone spans 15° of longitude. In fact, the borders of time zones are bent to conform to state and country boundaries, as well as to facilitate commercial activities. In 1966 the US Congress passed the Uniform Time Act, which established eight standard time zones for the United States and its outlying regions. In 1983 several time zone boundaries were altered so that most of Alaska, which formerly spanned four zones, could be unified under one time zone. The US standard time zones are the Atlantic, Eastern, Central, Mountain, Pacific, Alaska, Hawaii-Aleutian, and Samoa zones.
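The idealized 15°-per-zone model described above can be sketched in a few lines. This is only the nominal scheme: as the text notes, real zone borders bend around state and national boundaries, so the function below gives the theoretical offset, not the legal one.

```python
# Minimal sketch of the idealized standard-time model: each zone spans
# 15 degrees of longitude, so the nominal offset from Greenwich is the
# longitude divided by 15, rounded to the nearest whole hour. Real zone
# boundaries deviate from this for political and commercial reasons.
def nominal_utc_offset(longitude_deg: float) -> int:
    """Whole-hour offset from Greenwich for a longitude in [-180, 180]."""
    return round(longitude_deg / 15.0)

print(nominal_utc_offset(0.0))     # 0   (Greenwich)
print(nominal_utc_offset(-77.0))   # -5  (roughly the US Eastern zone)
print(nominal_utc_offset(-150.0))  # -10 (roughly the Alaska longitudes)
```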
There are five standard time zones in Canada. From east to west these are the Atlantic, Eastern, Central, Mountain, and Pacific time zones. Newfoundland has its own time zone, which is not a standard time zone. Newfoundland time is 30 minutes ahead of Atlantic time.
The International Date Line is a time zone boundary. It is an imaginary line extending from the North Pole to the South Pole and separating one calendar day from the next. Along most of its length, the International Date Line corresponds to the 180th meridian of longitude. A traveller moving eastward across the line sets his or her calendar back one day, and one travelling westward sets the calendar a day ahead.
Several areas of science and the humanities—including physics, geology, biology, and philosophy—overlap with the scientific study of time. Time scales and the concept of time are integral to our understanding of the universe, Earth, and the organisms that live on Earth.
Einstein’s first major contribution to the study of time occurred in 1905, when he introduced his special theory of relativity and showed how time changes with motion. The word relativity derives from the fact that the appearance of the world depends on the observer’s state of motion and is relative to the observer. Today scientists do not see problems of time or motion as absolute with single correct answers. Because time is relative to the speed an observer is travelling, there can never be a clock at the centre of the universe to which everyone can set his or her watch. Einstein’s special theory of relativity tell us that an object travelling at high speeds ages more slowly than an object that is not travelling as fast. This means that if a person from Earth were to travel in outer space at a speed close to the speed of light (about 300,000 km per sec or about 186,000 mi per sec), that person could return to Earth thousands of years into Earth’s future.
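The slowing of a moving clock is governed by the Lorentz factor γ = 1/√(1 − v²/c²). As an illustrative sketch (the speed chosen here is an arbitrary example, not from the text): a traveller who ages one year at 99.99% of light speed would return to an Earth roughly seventy years older.

```python
import math

# Sketch of special-relativistic time dilation: a moving clock runs slow
# by the Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2).
C = 299_792_458.0  # speed of light in m/s

def dilated_time(proper_time: float, speed_m_s: float) -> float:
    """Earth time elapsed while a traveller at speed_m_s ages proper_time."""
    gamma = 1.0 / math.sqrt(1.0 - (speed_m_s / C) ** 2)
    return proper_time * gamma

# A traveller ageing 1 year at 99.99% of light speed: Earth ages ~70.7 years.
print(round(dilated_time(1.0, 0.9999 * C), 1))
```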
Time is distorted in regions of large masses, such as stars and black holes. In Einstein’s general theory of relativity, which was introduced in 1916, the very existence of time depends on the presence of space. Einstein’s general theory explains how gravity warps and slows time and why time moves very slightly slower in regions of high gravity, such as near stars, compared to regions of lesser gravity, such as on planets. This time-slowing effect becomes pronounced in regions of extremely high gravity, such as near black holes.
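The size of this gravitational slowing can be estimated with the Schwarzschild factor √(1 − 2GM/(rc²)) for a clock at distance r from a non-rotating mass M. This is a hedged sketch using standard values for the Sun and Earth; it shows why the effect is tiny near planets and stars but becomes extreme near black holes, where the factor approaches zero.

```python
import math

# Sketch of gravitational time dilation using the Schwarzschild factor
# sqrt(1 - 2GM/(r c^2)) for a non-rotating mass. Standard values assumed.
G = 6.674e-11      # gravitational constant, N·m²/kg²
C = 299_792_458.0  # speed of light, m/s

def clock_rate(mass_kg: float, radius_m: float) -> float:
    """Ratio of a clock's rate at radius r to a distant clock's rate."""
    return math.sqrt(1.0 - 2.0 * G * mass_kg / (radius_m * C * C))

# At the Sun's surface a clock runs slow by about 2 parts per million;
# at Earth's surface, by less than a part per billion.
print(1.0 - clock_rate(1.989e30, 6.96e8))   # Sun:   ~2.1e-6
print(1.0 - clock_rate(5.972e24, 6.371e6))  # Earth: ~7e-10
```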
Gravitation is the force of attraction between all objects that tends to pull them toward one another. It is a universal force, affecting the largest and smallest objects, all forms of matter, and energy. Gravitation governs the motion of astronomical bodies. It keeps the moon in orbit around the earth and keeps the earth and the other planets of the solar system in orbit around the sun. On a larger scale, it governs the motion of stars and slows the outward expansion of the entire universe because of the inward attraction of galaxies to other galaxies. Typically the term gravitation refers to the force in general, and the term gravity refers to the earth's gravitational pull.
Gravitation is one of the four fundamental forces of nature, along with electromagnetism and the weak and strong nuclear forces, which hold together the particles that make up atoms. Gravitation is by far the weakest of these forces and, as a result, is not important in the interactions of atoms and nuclear particles or even of moderate-sized objects, such as people or cars. Gravitation is important only when very large objects, such as planets, are involved. This is true for several reasons. First, the force of gravitation reaches great distances, while nuclear forces operate only over extremely short distances and decrease in strength very rapidly as distance increases. Second, gravitation is always attractive. In contrast, electromagnetic forces between particles can be attractive or repulsive, depending on whether the particles carry opposite electrical charges or charges of the same sign. In ordinary matter these attractive and repulsive forces tend to cancel each other out, leaving only a weak net force. Gravitation has no repulsive force and, therefore, no such cancellation or weakening.
After presenting his general theory of relativity in 1915, German-born American physicist Albert Einstein tried in vain to unify his theory of gravitation with one that would include all the fundamental forces in nature. Einstein discussed his special and general theories of relativity and his work toward a unified field theory in a 1950 Scientific American article. At the time, he was not convinced that he had discovered a valid solution capable of extending his general theory of relativity to other forces. He died in 1955, leaving this problem unsolved.
The gravitational attraction of objects for one another is the easiest fundamental force to observe and was the first fundamental force to be described with a complete mathematical theory by the English physicist and mathematician Sir Isaac Newton. A more accurate theory called general relativity was formulated early in the 20th century by the German-born American physicist Albert Einstein. Scientists recognize that even this theory is not correct for describing how gravitation works in certain circumstances, and they continue to search for an improved theory.
Gravitation plays a crucial role in most processes on the earth. The ocean tides are caused by the gravitational attraction of the moon and the sun on the earth and its oceans. Gravitation drives weather patterns by making cold air sink and displace less dense warm air, forcing the warm air to rise. The gravitational pull of the earth on all objects holds the objects to the surface of the earth. Without it, the spin of the earth would send them floating off into space.
The gravitational attraction of every bit of matter in the earth for every other bit of matter amounts to an inward pull that holds the earth together against the pressure forces tending to push it outward. Similarly, the inward pull of gravitation holds stars together. When a star's fuel nears depletion, the processes producing the outward pressure weaken and the inward pull of gravitation eventually compresses the star to a very compact size.
Falling objects accelerate in response to the force exerted on them by Earth’s gravity. Different objects accelerate at the same rate, regardless of their mass. This illustration shows the speed at which a ball and a cat would be moving and the distance each would have fallen at intervals of a tenth of a second during a short fall.
If an object held near the surface of the earth is released, it will fall and accelerate, or pick up speed, as it descends. This acceleration is caused by gravity, the force of attraction between the object and the earth. The force of gravity on an object is also called the object's weight. This force depends on the object's mass, or the amount of matter in the object. The weight of an object is equal to the mass of the object multiplied by the acceleration due to gravity.
A bowling ball that weighs 16 lb is actually being pulled toward the earth with a force of 16 lb. In the metric system, the bowling ball is pulled toward the earth with a force of 71 newtons (a metric unit of force abbreviated N). The bowling ball also pulls on the earth with a force of 16 lb (71 N), but the earth is so massive that it does not move appreciably. In order to hold the bowling ball up and keep it from falling, a person must exert an upward force of 16 lb (71 N) on the ball. This upward force acts to oppose the 16 lb (71 N) downward weight force, leaving a total force of zero. The total force on an object determines the object's acceleration.
If the pull of gravity is the only force acting on an object, then all objects, regardless of their weight, size, or shape, will accelerate in the same manner. At the same place on the earth, the 16 lb (71 N) bowling ball and a 500 lb (2200 N) boulder will fall with the same rate of acceleration. As each second passes, each object will increase its downward speed by about 9.8 m/sec (32 ft/sec), resulting in an acceleration of 9.8 m/sec/sec (32 ft/sec/sec). In principle, a rock and a feather both would fall with this acceleration if there were no other forces acting. In practice, however, air friction exerts a greater upward force on the falling feather than on the rock and makes the feather fall more slowly than the rock.
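The uniform acceleration described above can be turned into numbers directly: with g ≈ 9.8 m/s² and air friction ignored, speed grows as v = gt and distance as d = ½gt². A short sketch, printing the same tenth-of-a-second intervals used in the falling ball-and-cat illustration:

```python
# Sketch of free fall under uniform acceleration (g ≈ 9.8 m/s²),
# ignoring air friction: v = g*t and d = (1/2)*g*t².
G_ACCEL = 9.8  # acceleration due to gravity, m/s²

def fall(t_s: float) -> tuple[float, float]:
    """(speed in m/s, distance fallen in m) after t_s seconds of free fall."""
    return G_ACCEL * t_s, 0.5 * G_ACCEL * t_s ** 2

for tenth in range(1, 6):
    t = tenth / 10.0
    speed, dist = fall(t)
    print(f"t={t:.1f}s  speed={speed:.2f} m/s  fallen={dist:.3f} m")
```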
The mass of an object does not change as it is moved from place to place, but the acceleration due to gravity, and therefore the object's weight, will change because the strength of the earth's gravitational pull is not the same everywhere. The earth's pull and the acceleration due to gravity decrease as an object moves farther away from the center of the earth. At an altitude of 4000 miles (6400 km) above the earth's surface, for instance, the bowling ball that weighed 16 lb (71 N) at the surface would weigh only about 4 lb (18 N). Because of the reduced weight force, the rate of acceleration of the bowling ball at that altitude would be only one quarter of the acceleration rate at the surface of the earth. The pull of gravity on an object also changes slightly with latitude. Because the earth is not perfectly spherical, but bulges at the equator, the pull of gravity is about 0.5 percent stronger at the earth's poles than at the equator.
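The altitude example above follows from the inverse-square law: at 6,400 km up, the distance from Earth's centre has doubled, so weight falls to one quarter. A small sketch using the same rounded radius as the text:

```python
# Sketch of the inverse-square falloff of weight with altitude, using the
# rounded Earth radius from the text (about 6,400 km).
EARTH_RADIUS_KM = 6_400.0

def weight_at_altitude(surface_weight: float, altitude_km: float) -> float:
    r = EARTH_RADIUS_KM + altitude_km
    return surface_weight * (EARTH_RADIUS_KM / r) ** 2

print(weight_at_altitude(71.0, 0.0))      # 71.0 N at the surface
print(weight_at_altitude(71.0, 6_400.0))  # 17.75 N, about the 18 N quoted
```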
The ancient Greek philosophers developed several theories about the force that caused objects to fall toward the earth. In the 4th century BC, the Greek philosopher Aristotle proposed that all things were made from some combination of the four elements: earth, air, fire, and water. Objects that were similar in nature attracted one another, and as a result, objects with more earth in them were attracted to the earth. Fire, by contrast, was dissimilar and therefore tended to rise from the earth. Aristotle also developed a cosmology, that is, a theory describing the universe, that was geocentric, or earth-centred, with the moon, sun, planets, and stars moving around the earth on spheres. The Greek philosophers, however, did not propose a connection between the force behind planetary motion and the force that made objects fall toward the earth.
At the beginning of the 17th century, the Italian physicist and astronomer Galileo discovered that all objects fall toward the earth with the same acceleration, regardless of their weight, size, or shape, when gravity is the only force acting on them. Galileo also had a theory about the universe, which he based on the ideas of the Polish astronomer Nicolaus Copernicus. In the mid-16th century, Copernicus had proposed a heliocentric, or sun-centred system, in which the planets moved in circles around the sun, and Galileo agreed with this cosmology. However, Galileo believed that the planets moved in circles because this motion was the natural path of a body with no forces acting on it. Like the Greek philosophers, he saw no connection between the force behind planetary motion and gravitation on earth.
In the late 16th and early 17th centuries the heliocentric model of the universe gained support from observations by the Danish astronomer Tycho Brahe and his student, the German astronomer Johannes Kepler. These observations, made without telescopes, were accurate enough to determine that the planets did not move in circles, as Copernicus had suggested. Kepler calculated that the orbits had to be ellipses (slightly elongated circles). The invention of the telescope made even more precise observations possible, and Galileo was one of the first to use a telescope to study astronomy. In 1609 Galileo observed that moons orbited the planet Jupiter, a fact that could not reasonably fit into an earth-centered model of the heavens.
The new heliocentric theory changed scientists' views about the earth's place in the universe and opened the way for new ideas about the forces behind planetary motion. However, it was not until the late 17th century that Isaac Newton developed a theory of gravitation that encompassed both the attraction of objects on the earth and planetary motion.
Because the Moon has significantly less mass than Earth, the weight of an object on the Moon’s surface is only one-sixth the object’s weight on Earth’s surface. This graph shows how much an object that weighs w on Earth would weigh at different points between the Earth and Moon. Since the Earth and Moon pull in opposite directions, there is a point, about 346,000 km (215,000 mi) from Earth, where the opposite gravitational forces would cancel, and the object's weight would be zero.
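The ~346,000 km balance point can be checked by setting the two pulls equal: GMₑ/d² = Gₘm/(D−d)², which gives d = D/(1 + √(Mₘ/Mₑ)). This sketch assumes the standard mean Earth-Moon distance of 384,400 km and a Moon/Earth mass ratio of about 0.0123.

```python
import math

# Check of the ~346,000 km neutral point quoted above: the distance from
# Earth at which Earth's and the Moon's gravitational pulls balance.
# Assumed standard values: mean Earth-Moon distance 384,400 km and a
# Moon/Earth mass ratio of about 0.0123.
EARTH_MOON_KM = 384_400
MASS_RATIO = 0.0123  # Moon mass / Earth mass

# Setting G*Me/d^2 = G*Mm/(D-d)^2 and solving for d:
d = EARTH_MOON_KM / (1.0 + math.sqrt(MASS_RATIO))
print(round(d))  # roughly 346,000 km from Earth
```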
To develop his theory of gravitation, Newton first had to develop the science of forces and motion called mechanics. Newton proposed that the natural motion of an object is motion at a constant speed on a straight line, and that it takes a force to slow down, speed up, or change the path of an object. Newton also invented calculus, a new branch of mathematics that became an important tool in the calculations of his theory of gravitation.
Newton proposed his law of gravitation in 1687 and stated that every particle in the universe attracts every other particle in the universe with a force that depends on the product of the two particles' masses divided by the square of the distance between them. The gravitational force between two objects can be expressed by the following equation: F = GMm/d², where F is the gravitational force, G is a constant known as the universal constant of gravitation, M and m are the masses of each object, and d is the distance between them. Newton considered a particle to be an object with a mass that was concentrated in a small point. If the mass of one or both particles increases, then the attraction between the two particles increases. For instance, if the mass of one particle is doubled, the force of attraction between the two particles is doubled. If the distance between the particles increases, then the attraction decreases as the square of the distance between them. Doubling the distance between two particles, for instance, will make the force of attraction one quarter as great as it was.
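The scaling behaviour described in this paragraph follows directly from the formula. A minimal sketch, using the modern value of G and arbitrary illustrative masses:

```python
# Direct sketch of F = G*M*m/d² with the modern value of G; the masses
# and distance below are arbitrary illustrative numbers.
G = 6.674e-11  # universal constant of gravitation, N·m²/kg²

def gravitational_force(m1_kg: float, m2_kg: float, d_m: float) -> float:
    return G * m1_kg * m2_kg / d_m ** 2

f = gravitational_force(1000.0, 1000.0, 1.0)
# Doubling one mass doubles the force; doubling the distance quarters it.
print(gravitational_force(2000.0, 1000.0, 1.0) / f)  # 2.0
print(gravitational_force(1000.0, 1000.0, 2.0) / f)  # 0.25
```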
According to Newton, the force acts along a line between the two particles. In the case of two spheres, it acts along the line between their centers. The attraction between objects with irregular shapes is more complicated. Every bit of matter in the irregular object attracts every bit of matter in the other object. A simpler description is possible near the surface of the earth where the pull of gravity is approximately uniform in strength and direction. In this case there is a point in an object (even an irregular object) called the center of gravity, at which all the force of gravity can be considered to be acting.
Newton's law affects all objects in the universe, from raindrops in the sky to the planets in the solar system. It is therefore known as the universal law of gravitation. In order to know the strength of gravitational forces in general, however, it became necessary to find the value of G, the universal constant of gravitation. Scientists needed to perform an experiment, but gravitational forces are very weak between objects found in a common laboratory and thus hard to observe. In 1798 the English chemist and physicist Henry Cavendish finally measured G with a very sensitive experiment in which he nearly eliminated the effects of friction and other forces. The value he found was 6.754 × 10⁻¹¹ N·m²/kg², close to the currently accepted value of 6.670 × 10⁻¹¹ N·m²/kg² (a decimal point followed by 10 zeros and then the digits 6670). This value is so small that the force of gravitation between two objects with a mass of 1 metric ton each, 1 meter from each other, is about 67 millionths of a newton, or about 15 millionths of a pound.
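The two-tonne-mass figures at the end of this paragraph are easy to verify numerically:

```python
# Check of the figures above: two 1-metric-ton masses 1 m apart, using
# the value of G cited in the text.
G = 6.670e-11    # universal constant of gravitation, N·m²/kg²
N_PER_LB = 4.448 # newtons per pound-force

force_n = G * 1000.0 * 1000.0 / 1.0 ** 2
print(force_n)             # ~6.67e-5 N, about 67 millionths of a newton
print(force_n / N_PER_LB)  # ~1.5e-5 lb, about 15 millionths of a pound
```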
Gravitation may also be described in a completely different way. A massive object, such as the earth, may be thought of as producing a condition in space around it called a gravitational field. This field causes objects in space to experience a force. The gravitational field around the earth, for instance, produces a downward force on objects near the earth's surface. The field viewpoint is an alternative to the viewpoint that objects can affect each other across distance. This way of thinking about interactions has proved to be very important in the development of modern physics.
Newton's law of gravitation was the first theory to accurately describe the motion of objects on the earth as well as the planetary motion that astronomers had long observed. According to Newton's theory, the gravitational attraction between the planets and the sun holds the planets in elliptical orbits around the sun. The earth's moon and moons of other planets are held in orbit by the attraction between the moons and the planets. Newton's law led to many new discoveries, the most important of which was the discovery of the planet Neptune. Scientists had noted unexplainable variations in the motion of the planet Uranus for many years. Using Newton's law of gravitation, the French astronomer Urbain Leverrier and the British astronomer John Couch Adams each independently predicted the existence of a more distant planet that was perturbing the orbit of Uranus. Neptune was discovered in 1846, in an orbit close to its predicted position.
Scientists used Newton's theory of gravitation successfully for many years. Several problems began to arise, however, involving motion that did not follow the law of gravitation or Newtonian mechanics. One problem was the observed and unexplainable deviations in the orbit of Mercury (which could not be caused by the gravitational pull of another orbiting body).
Another problem with Newton's theory involved reference frames, that is, the conditions under which an observer measures the motion of an object. According to Newtonian mechanics, two observers making measurements of the speed of an object will measure different speeds if the observers are moving relative to each other. A person on the ground observing a ball that is on a train passing by will measure the speed of the ball as the same as the speed of the train. A person on the train observing the ball, however, will measure the ball's speed as zero. According to the traditional ideas about space and time, then, there could not be a constant, fundamental speed in the physical world because all speed is relative. However, in the second half of the 19th century the Scottish physicist James Clerk Maxwell proposed a complete theory of electric and magnetic forces that contained just such a constant, which he called c. This constant speed was 300,000 km/sec (186,000 mi/sec) and was the speed of electromagnetic waves, including light waves. This feature of Maxwell's theory caused a crisis in physics because it indicated that speed was not always relative.
Albert Einstein resolved this crisis in 1905 with his special theory of relativity. An important feature of Einstein's new theory was that no particle, and not even information, could travel faster than the fundamental speed c. In Newton's gravitation theory, however, information about gravitation moved at infinite speed. If a star exploded into two parts, for example, the change in gravitational pull would be felt immediately by a planet in a distant orbit around the exploded star. According to Einstein's theory, such instantaneous action at a distance was not possible.
Though Newton's theory contained several flaws, it is still very practical for use in everyday life. Even today, it is sufficiently accurate for dealing with earth-based gravitational effects such as in geology (the study of the formation of the earth and the processes acting on it), and for most scientific work in astronomy. Only when examining exotic phenomena such as black holes (points in space with a gravitational force so strong that not even light can escape them) or in explaining the big bang (the origin of the universe) is Newton's theory inaccurate or inapplicable.
In 1915 Einstein formulated a new theory of gravitation that reconciled the force of gravitation with the requirements of his theory of special relativity. He proposed that gravitational effects move at the speed of c. He called this theory general relativity to distinguish it from special relativity, which only holds when there is no force of gravitation. General relativity produces predictions very close to those of Newton's theory in most familiar situations, such as the moon orbiting the earth. Einstein's theory differed from Newton's theory, however, in that it described gravitation as a curvature of space and time.
In his general theory of relativity, Einstein proposed that space and time may be united into a single, four-dimensional geometry consisting of three space dimensions and one time dimension. In this geometry, called space-time, the motions of particles from point to point as time progresses are represented by curves called world lines. If no gravity is acting, the most natural lines in this geometry are straight lines, and they represent particles that are always moving in the same direction with the same speed—that is, particles that have no force acting on them. If a particle is acted on by a force, then its world line will not be straight. Einstein also proposed that the effect of gravitation should not be represented as the deviation of a world line from straightness, as it would be for an electrical force. If gravitation is present, it should not be considered a force. Rather, gravitation changes the most natural world lines and thereby curves the geometry of space-time. In a curved geometry, such as the two-dimensional surface of the earth, there are no straight lines. Instead, there are special curves called geodesics, examples of which are the great circles around the earth. These special curves are at each point as straight as possible, and they are the most natural lines in a curved geometry. The effect of gravitation is to influence the geodesics in space-time. Near sources of gravitation, space-time is strongly curved, and its geodesics behave less and less like those in flat, uncurved space-time. In the solar system, for example, the effect of the sun and the earth is to cause the moon to move on a geodesic that winds around the geodesic of the earth about 12 times a year.
Einstein's theory required verification, but the level of precision in measurement needed to distinguish between Einstein's theory and Newton's theory was difficult to achieve in the early 20th century and remains so today. There were two predictions, however, that could be tested. One involved deviations in the orbit of Mercury. Astronomers had observed that the ellipse of Mercury's orbit itself rotated—that is, the point nearest the sun, called the perihelion, slowly advanced around the sun. The rate of advancement predicted by Newton's theory differed slightly from what astronomers had measured, but Einstein's theory predicted the correct rate.
The second test involved measuring the bending of light as it passed around the sun. Both Newton's and Einstein's theories predicted that light would be deflected by gravitation. But the amount of deflection predicted by the two theories differed. The light to be measured in such a test originates in distant stars. However, under ordinary conditions the sun's brightness prevents scientists from observing the light from these stars. This problem disappears during an eclipse, when the moon blocks the sun's light. In 1919 a special British expedition took photographs during an eclipse. Scientists measured the deflection of starlight as it passed by the sun and arrived at numbers that agreed with Einstein's prediction. Subsequent eclipse observations also have confirmed Einstein's theory.
Other physicists have proposed relativistic theories of gravitation to compete with Einstein's. In these competing theories, almost all of which are geometrical like Einstein's, gravitational effects move at the speed c. They differ mostly in the mathematical details. Even using the technology of the late 20th century, scientists still find it very difficult to test these theories with experiments and observations. But Einstein's theory has passed all tests that have been made so far.
Einstein's general relativity theory predicts special gravitational conditions. The Big Bang theory, which describes the origin and early expansion of the universe, is one conclusion based on Einstein's theory that has been verified in several independent ways.
Another conclusion suggested by general relativity, as well as other relativistic theories of gravitation, is that gravitational effects move in waves. Astronomers have observed a loss of energy in a pair of neutron stars (stars composed of densely packed neutrons) that are orbiting each other. The astronomers theorize that energy-carrying gravitational waves are radiating from the pair, depleting the stars of their energy. Very violent astrophysical events, such as the explosion of stars or the collision of neutron stars, can produce gravitational waves strong enough that they may eventually be directly detectable with extremely precise instruments. Astrophysicists are designing such instruments with the hope that they will be able to detect gravitational waves by the beginning of the 21st century.
Another gravitational effect predicted by general relativity is the existence of black holes. The idea of a star with a gravitational force so strong that light cannot escape from its surface can be traced to Newtonian theory. Einstein modified this idea in his general theory of relativity. Because light cannot escape from a black hole, for any object - a particle, spacecraft, or wave - to escape, it would have to move past light. But light moves outward at the speed c. According to relativity, c is the highest attainable speed, so nothing can pass it. The black holes that Einstein envisioned, then, allow no escape whatsoever. An extension of this argument shows that when gravitation is this strong, nothing can even stay in the same place, but must move inward. Even the surface of a star must move inward, and must continue the collapse that created the strong gravitational force. What remains then is not a star, but a region of space from which emerges a tremendous gravitational force.
Einstein's theory of gravitation revolutionized 20th-century physics. Another important revolution that took place was quantum theory. Quantum theory states that physical interactions, or the exchange of energy, cannot be made arbitrarily small. There is a minimal interaction that comes in a packet called the quantum of an interaction. For electromagnetism the quantum is called the photon. Like the other interactions, gravitation also must be quantized. Physicists call a quantum of gravitational energy a graviton. In principle, gravitational waves arriving at the earth would consist of gravitons. In practice, gravitational waves would consist of apparently continuous streams of gravitons, and individual gravitons could not be detected.
Einstein's theory did not include quantum effects. For most of the 20th century, theoretical physicists have been unsuccessful in their attempts to formulate a theory that resembles Einstein's theory but also includes gravitons. Despite the lack of a complete quantum theory, it is possible to make some partial predictions about quantized gravitation. In the 1970s, British physicist Stephen Hawking showed that quantum mechanical processes in the strong gravitational pull just outside of black holes would create particles and quanta that move away from the black hole, thereby robbing it of energy.
An important trend in modern theoretical physics is to find a theory of everything (TOE), in which all four of the fundamental forces are seen to be really different aspects of the same single universal force. Physicists have already unified electromagnetism and the weak nuclear force and have made progress in joining these two forces with the strong nuclear force. Relativistic gravitation, however, with its geometric and mathematically complex character, poses the most difficult challenge. Einstein spent most of his later years searching for an all-encompassing theory by trying to make electromagnetism geometrical like gravitation. The modern approach is to try to make gravitation fit the pattern of the other fundamental forces. Much of this work involves looking for mathematical patterns. A TOE is difficult to test using experiments because the effects of a TOE would be important only in the rarest circumstances.
Geologists—scientists who study Earth—use the geologic time scale to measure spans of time in the 4.5-billion-year history of Earth. This time scale measures blocks of time and is important for understanding the biological and geologic history—and evolution—of Earth. The longest blocks of time, eons, are divided into shorter blocks called eras. Eras are divided into periods, which are made up of epochs.
Many organisms exhibit biological rhythms. These are periodic biological fluctuations—changes in sleep patterns or hibernation patterns, for example—that occur in response to periodic environmental changes such as the cycles of night and day, darkness and light, and winter and summer. Organisms use biological clocks—such as circadian, or daily, rhythms—to remain in harmony with the cycles of day and night and the seasons.
Philosophers have long argued about the nature of time. Some philosophers, notably German philosopher Immanuel Kant, have held that the experience of time is innate, so that even newborn babies experience its passage. Others have proposed that the human mind must learn to construct time. For example, French philosopher Henri Bergson thought of time as something entirely derived from experience. In Bergson's doctoral dissertation, Time and Free Will (1889; translated 1910), he proposed that time is a matter of subjective experience. According to Bergson, an infant would not experience time directly but rather would have to learn how to experience it.
Time is not a physical constant. Motion and gravity affect time, dilating (slowing) it and thereby stretching its duration. In 1905 Albert Einstein described the effect of motion on time in his special theory of relativity. In 1916 he described the effect of gravity on time in his general theory of relativity.
Time dilation effects due to motion were experimentally observed in the early 1970s. Researchers placed atomic clocks on commercial airliners and observed the expected changes in time as measured by those clocks relative to similar clocks on the ground. In particular, when the planes travelled east, in the direction of Earth’s rotation, the clocks on the airliners were 59 nanoseconds (59 billionths of a second) slow relative to the atomic clocks on the ground. When the aeroplanes travelled west, the clocks were 273 nanoseconds fast. This discrepancy is caused by the rotation of Earth, which causes an additional time dilation. If the effect of Earth's rotation is removed, the time dilation produced by the motion of the airliners confirms Einstein's theory of how time changes with motion, as the dilation is in agreement with predictions made by the theory.
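The order of magnitude of these airliner results can be checked with the low-speed approximation of special-relativistic time dilation, Δt ≈ t·v²/(2c²). The cruise speed and flight time below are illustrative assumptions, not the actual experimental parameters, and the sketch ignores the Earth-rotation and gravitational contributions that shape the measured 59 and 273 nanosecond figures:

```python
# Low-speed approximation of special-relativistic time dilation:
# a moving clock falls behind a resting one by about t * v**2 / (2 * c**2).
c = 2.998e8          # speed of light, m/s
v = 250.0            # assumed airliner cruise speed, m/s
t = 45 * 3600.0      # assumed total time aloft, s (45 hours)

lost = t * v**2 / (2 * c**2)   # seconds lost by the airborne clock
print(f"Clock offset: {lost * 1e9:.1f} nanoseconds")
```

The result is a few tens of nanoseconds, the same order of magnitude as the airliner measurements described above.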
Time dilation effects due to gravity have been experimentally verified in many ways. For example, time on the Sun's surface runs about two parts in a million slower than on Earth because of the Sun's much higher gravity. In 1968 American physicist Irwin Shapiro confirmed this effect when he showed that radar signals and their reflections from planets are delayed when the Sun is near the path of the signals.
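The "two parts in a million" figure for the Sun can be estimated from the weak-field formula for gravitational time dilation, in which the fractional slowing at the surface of a mass is approximately GM/(rc²). The solar values below are approximate textbook figures, assumed for illustration:

```python
# Weak-field gravitational time dilation at the surface of a mass:
# fractional slowing is approximately G * M / (r * c**2).
G = 6.674e-11       # gravitational constant, N m^2 / kg^2
M_sun = 1.989e30    # mass of the sun, kg
r_sun = 6.96e8      # radius of the sun, m
c = 2.998e8         # speed of light, m/s

fractional_slowing = G * M_sun / (r_sun * c**2)
print(f"Fractional slowing at the solar surface: {fractional_slowing:.2e}")
```

The computed value is close to 2 × 10⁻⁶, in agreement with the two-parts-in-a-million figure quoted above.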
Time can be considered the conscious experience of duration, the period during which an action or event occurs. Time is also a dimension representing a succession of such actions or events. Time is one of the fundamental quantities of the physical world, similar to length and mass in this respect. The concept that time is a fourth dimension—on a par with the three dimensions of space: length, width, and depth—is one of the foundations of modern physics. Time measurement involves the establishment of a time scale in order to refer to the occurrence of events. The precise determination of time rests on astronomical and atomic definitions that scientists have established with the utmost mathematical exactness.
The apparent motion of the Sun across the sky has long been used as a basis for measuring time. Under solar time, at any given locality it is noon—twelve o'clock in the daytime, or midday—when the Sun reaches the highest point in the sky. Noon at any place on the surface of Earth is when the Sun's direct rays pass over the meridian of that particular place, the meridian being an imaginary line that stretches from pole to pole on Earth's surface. A meridian is also known as a line of longitude. The interval between successive passages of the Sun across the same meridian is one day, and this day is by custom divided into 24 hours. The amount of daylight in a day varies throughout the year, based on the tilt of Earth's axis and its orientation to the Sun as the seasons change. For the same reasons, a day in solar time is not always exactly 24 hours long. The difference in the length of the day during different seasons of the year can amount to as much as 16 minutes. With the invention of accurate timepieces in the 17th century, this difference in the length of the day became significant. To overcome this problem scientists invented mean solar time, which is based on the motion of a hypothetical sun travelling at an even rate throughout the year.
Universal time is simply the mean solar time measured at the Greenwich meridian, which is designated 0° longitude and from which the longitude of all points on the surface of Earth is measured. The meridian passing through the original site of the Royal Greenwich Observatory in Greenwich, England, has been recognized by international agreement since 1884 as the prime meridian. Universal time was originally called Greenwich Mean Time (GMT); the name universal time replaced that designation in 1928. Universal time is used to denote solar time when an accuracy of about one second suffices.
Because the basis of mean solar time relates to the motion of a hypothetical sun, scientists established a base position from which the mean time is calculated. This base position is the vernal, or spring, equinox, an imaginary point in the sky that is, nevertheless, calculated with great accuracy by astronomers. Practically, scientists define the location of the vernal equinox by reference to the position of the ‘fixed’ stars.
Scientists use stars as reference points to measure the time it takes Earth to make one full rotation on its axis. When the sun is used as a reference, the rotation is called a mean solar day. When scientists use a fixed star other than the sun as a reference point, the rotation is called a sidereal day. A sidereal day is about 4 minutes shorter than the mean solar day.
Sidereal time is based on the apparent motion of the distant, ‘fixed’ stars across the sky. It has various astronomical purposes, such as predicting locations of objects in outer space. The primary unit of sidereal time is the sidereal day, which is subdivided into 24 sidereal hours. Each sidereal hour is subdivided into 60 minutes, and each minute into 60 seconds. Astronomers rely on sidereal clocks because any given star will cross the same meridian, or line of longitude, at the same sidereal time throughout the year.
According to convention, each sidereal day begins at the instant the vernal equinox crosses the prime meridian. The vernal equinox is the point on the celestial sphere at which the sun crosses the plane of the equator, moving from south to north. The celestial sphere is the apparent surface of the heavens, on which the stars appear to be fixed.
The US Naval Observatory in Washington, D.C., uses mathematical tables to calculate mean solar time from mean sidereal time. The sidereal day is almost four minutes shorter than the mean solar day, so a discrepancy exists between the total number of hours in a mean solar year and in a mean sidereal year. This discrepancy arises because Earth rotates on its axis at the same time that it revolves around the Sun. According to mean sidereal time, Earth returns to the vernal equinox every 365 days 6 hours 9 minutes 9.54 seconds. According to mean solar time, Earth returns to the vernal equinox every 365 days 5 hours 48 minutes 45.5 seconds. The difference between the two is 20 minutes 24.04 seconds.
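The quoted 20 minute 24.04 second discrepancy follows directly from the two year lengths given above; a quick arithmetic check:

```python
# Difference between the mean sidereal year and the mean solar year,
# using the figures quoted in the text above.
def to_seconds(days, hours, minutes, seconds):
    """Convert a day/hour/minute/second duration to total seconds."""
    return ((days * 24 + hours) * 60 + minutes) * 60 + seconds

sidereal = to_seconds(365, 6, 9, 9.54)    # 365 d 6 h 9 m 9.54 s
solar = to_seconds(365, 5, 48, 45.5)      # 365 d 5 h 48 m 45.5 s
diff = sidereal - solar                   # discrepancy in seconds

minutes, seconds = divmod(diff, 60)
print(f"Difference: {int(minutes)} min {seconds:.2f} s")  # 20 min 24.04 s
```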
The concept of standard time was introduced in 1883, when the railroads of the United States and Canada adopted a common system of time zones; an international conference the following year endorsed the scheme by establishing the Greenwich meridian as the prime meridian. Standard time was adopted to avoid the complications of adhering to railroad time schedules when each community used its own local solar time. The base position for standard time is the prime meridian. The distance east or west of Greenwich determines the standard time zone and, thus, the standard time of a particular location.
Astronomers use dynamical time for the precise study of the motion of celestial bodies. Dynamical time replaced ephemeris time in 1984, when the International Astronomical Union (IAU) updated the Astronomical Almanac. Scientists introduced ephemeris time in 1940 and selected the orbital position of Earth around the Sun as the standard by which to define the numerical measure of ephemeris time. In the 1950s the IAU decided that ephemeris time could be based on the orbital position of any planet or satellite. Time would be determined by comparing the orbital position of a particular planet or satellite (natural or artificial) at a particular time to an ephemeris. An ephemeris is a table of orbital positions of a planet or a satellite mapped over a period of time.
The annual revolution of Earth around the Sun is the basis for dynamical time, and the base position of measure (as in sidereal time) is the vernal equinox. When the greatest degree of accuracy is required in computing the positions of a planet or star, astronomers use dynamical time, because neither mean solar time nor mean sidereal time is sufficiently accurate, as the motion of Earth on its axis is not regular and even. Variations in the rate of Earth’s rotation amount to 1 or 2 seconds per year.
On December 29, 1999, the United States National Institute of Standards and Technology unveiled the NIST F-1, the most accurate clock in the world (a distinction it shares with a similar device located in Paris, France). NIST F-1, an atomic cesium fountain clock, replaces the NIST-7, which served as the primary United States time standard from 1993 to the end of 1999. The new atomic timekeeper is so accurate that it could run for nearly 20 million years without gaining or losing a single second. The clock is called a fountain clock because it measures the light emitted by super-cooled cesium atoms as they fall through a microwave cavity.
Atomic time is the time scale of physics. Scientists use atomic time when they require exceptionally precise measurements of time intervals relating to physical phenomena. Clocks became more accurate and precise through the centuries, and with the introduction of atomic clocks—specifically, the construction of a high-precision cesium atomic clock in 1955—extremely accurate measurement of time became possible. Early mechanical clocks varied by several minutes each day. In the 1920s, vibrating quartz crystals were accurate to a few ten-thousandths of a second per day. The cesium atom clocks used in the 1980s lost less than a second in 3,000 years. In the 1990s the National Institute of Standards and Technology (NIST) in the United States established an atomic clock—the NIST-7, also a cesium clock—that is accurate to a single second over 3 million years. The electronic components of atomic clocks are regulated by the frequency of radiation emitted or absorbed by a particular atom or molecule.
Until 1955 astronomers and scientists calculated the scientific standard of time, the second, based on Earth's period of rotation. They defined the second as 1/86,400 of a mean solar day. When scientists realized that Earth's rate of rotation is irregular, a redefinition of the second became necessary. In 1955 the IAU defined the second as 1/31,556,925.9747 of the solar year that was in progress at noon on December 31, 1899. The International Committee on Weights and Measures adopted this definition in 1956. Since 1967 the official length of a second in the International System of Units (SI) has been defined by atomic standards: a second is equal to 9,192,631,770 oscillations, or periods, of the radiation corresponding to the transition between two hyperfine (closely spaced) energy states of the cesium-133 atom.
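The SI definition ties the second to a fixed count of cesium oscillations, so converting an oscillation count to elapsed time is a single division; a minimal sketch:

```python
# SI second: 9,192,631,770 periods of the cesium-133 hyperfine transition.
CESIUM_HZ = 9_192_631_770

def elapsed_seconds(oscillations):
    """Convert a count of cesium oscillations to elapsed SI seconds."""
    return oscillations / CESIUM_HZ

print(elapsed_seconds(9_192_631_770))   # exactly 1.0 s by definition
print(f"One period: {1 / CESIUM_HZ:.3e} s")   # roughly 1.09e-10 s
```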
International time zones define the time of day in places around the world with respect to the standard time kept in Greenwich, England, a city that lies on the prime meridian. Each time zone spans about 15 degrees of longitude, but actual zone lines vary to account for political boundaries and economic considerations.
For the purposes of standard time, Earth is divided into 24 standard time zones. The time zones extend from the North Pole to the South Pole, and within each zone the time is the same throughout. Within each time zone, local noon corresponds approximately to the time at which the Sun crosses the central meridian, or longitude, of that zone.
The distance east or west of the Greenwich meridian determines different time zones. According to the scientific model of standard time, each standard time zone spans 15° of longitude. In fact, the borders of time zones are bent to conform to state and country boundaries, as well as to facilitate commercial activities. In 1966 the US Congress passed the Uniform Time Act, which established eight standard time zones for the United States and its outlying regions. In 1983 several time zone boundaries were altered so that most of Alaska, which formerly spanned four zones, could be unified under one time zone. The US standard time zones are the Atlantic, Eastern, Central, Mountain, Pacific, Alaska, Hawaii-Aleutian, and Samoa zones.
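The nominal 15°-per-zone rule can be written as a one-line calculation. As the text notes, real zone boundaries bend to follow political and commercial lines, so the function below is only the idealized model; the example longitudes are illustrative assumptions:

```python
# Idealized standard-time model: each zone spans 15 degrees of longitude,
# centered on a multiple of 15 degrees; the offset is hours from Greenwich.
def nominal_utc_offset(longitude_deg):
    """Whole-hour offset from Greenwich for a longitude (east positive)."""
    return round(longitude_deg / 15)

print(nominal_utc_offset(0))     # Greenwich: 0
print(nominal_utc_offset(-75))   # near the US East Coast: -5 (Eastern time)
print(nominal_utc_offset(139))   # near Tokyo: +9
```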
There are five standard time zones in Canada. From east to west these are the Atlantic, Eastern, Central, Mountain, and Pacific time zones. Newfoundland has its own time zone, which is not a standard time zone. Newfoundland time is 30 minutes ahead of Atlantic time.
The International Date Line is a time zone boundary. It is an imaginary line extending from the North Pole to the South Pole and separating one calendar day from the next. Along most of its length, the International Date Line corresponds to the 180th meridian of longitude. A traveller moving eastward across the line sets his or her calendar back one day, and one travelling westward sets the calendar a day ahead.
Several areas of science and the humanities, including physics, geology, biology, and philosophy, overlap with the scientific study of time. Time scales and the concept of time are integral to our understanding of the universe, Earth, and the organisms that live on Earth.
Einstein’s first major contribution to the study of time occurred in 1905, when he introduced his special theory of relativity and showed how time changes with motion. The word relativity derives from the fact that the appearance of the world depends on the observer’s state of motion and is relative to the observer. Today scientists do not see problems of time or motion as absolute, with single correct answers. Because time is relative to the speed at which an observer is travelling, there can never be a clock at the centre of the universe to which everyone can set his or her watch. Einstein’s special theory of relativity tells us that an object travelling at high speeds ages more slowly than an object that is not travelling as fast. This means that if a person from Earth were to travel in outer space at a speed close to the speed of light (about 300,000 km per sec or about 186,000 mi per sec), that person could return to Earth thousands of years into Earth’s future.
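The slower ageing of a fast traveller is quantified by the Lorentz factor, γ = 1/√(1 − v²/c²): the elapsed Earth time equals γ times the traveller's own elapsed time. The 99-percent-of-light speed below is an illustrative choice, not a figure from the text:

```python
import math

# Lorentz factor: elapsed Earth time = gamma * traveller's proper time.
def lorentz_gamma(v_fraction_of_c):
    """Time-dilation factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_fraction_of_c**2)

gamma = lorentz_gamma(0.99)
print(f"gamma at 0.99c: {gamma:.2f}")                     # about 7.09
print(f"10 traveller-years = {10 * gamma:.1f} Earth years")
```

At speeds even closer to c the factor grows without bound, which is why a near-light-speed journey can return a traveller thousands of years into Earth's future.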
Time is distorted in regions of large masses, such as stars and black holes. In Einstein’s general theory of relativity, which was introduced in 1916, the very existence of time depends on the presence of space. Einstein’s general theory explains how gravity warps and slows time and why time moves very slightly slower in regions of high gravity, such as near stars, compared to regions of lesser gravity, such as on planets. This time-slowing effect becomes pronounced in regions of extremely high gravity, such as near black holes.
Albert Einstein (1879-1955), German-born American physicist and Nobel laureate, best known as the creator of the special and general theories of relativity and for his bold hypothesis concerning the particle nature of light. He is perhaps the most well-known scientist of the 20th century.
Albert Einstein is considered one of the greatest and most popular scientists of all time. Three papers he published in 1905 were pivotal in the development of physics and, to a large degree, Western thought. These papers discussed the quantum nature of light, provided a description of molecular motion, and introduced the special theory of relativity. Einstein was famous for continually reexamining traditional scientific assumptions and coming to straightforward, elegant conclusions no one else had reached. He is less famous for his social involvement, although he was a staunch supporter of both pacifism and Zionism.
Einstein was born in Ulm on March 14, 1879, and spent his youth in Munich, where his family owned a small shop that manufactured electric machinery. He did not talk until the age of three, but even as a youth he showed a brilliant curiosity about nature and an ability to understand difficult mathematical concepts. At the age of 12 he taught himself Euclidean geometry.
Einstein hated the dull regimentation and unimaginative spirit of school in Munich. When repeated business failure led the family to leave Germany for Milan, Italy, Einstein, who was then 15 years old, used the opportunity to withdraw from the school. He spent a year with his parents in Milan, and when it became clear that he would have to make his own way in the world, he finished secondary school in Aarau, Switzerland, and entered the Swiss Federal Institute of Technology in Zürich. Einstein did not enjoy the methods of instruction there. He often cut classes and used the time to study physics on his own or to play his beloved violin. He passed his examinations and graduated in 1900 by studying the notes of a classmate. His professors did not think highly of him and would not recommend him for a university position.
For two years Einstein worked as a tutor and substitute teacher. In 1902 he secured a position as an examiner in the Swiss patent office in Bern. In 1903 he married Mileva Maric, who had been his classmate at the polytechnic. They had two sons but eventually divorced. Einstein later remarried.
German-born American physicist Albert Einstein’s elegant equation E = mc² predicted that energy could be converted to matter. Using a linear accelerator and high-energy laser light, physicists have done just that.
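The scale of this mass-energy equivalence is easy to illustrate with a quick calculation (a sketch, not from the original article; the speed of light is the standard SI value):

```python
# Rest energy of a mass via Einstein's relation E = m * c**2.
C = 2.998e8  # speed of light in metres per second

def mass_to_energy(mass_kg: float) -> float:
    """Return the rest energy (in joules) equivalent to the given mass."""
    return mass_kg * C**2

# One gram of matter corresponds to roughly 9 x 10^13 joules,
# which hints at why the equation mattered for nuclear energy.
energy = mass_to_energy(0.001)
print(f"{energy:.3e} J")
```

Because c² is such a large number, even a tiny mass corresponds to an enormous amount of energy.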
In 1905 Einstein received his doctorate from the University of Zürich for a theoretical dissertation on the dimensions of molecules, and he also published three theoretical papers of central importance to the development of 20th-century physics. In the first of these papers, on Brownian motion, he made significant predictions about the motion of particles that are randomly distributed in a fluid. These predictions were later confirmed by experiment.
The second paper, on the photoelectric effect, contained a revolutionary hypothesis concerning the nature of light. Einstein not only proposed that under certain circumstances light can be considered as consisting of particles, but he also hypothesized that the energy carried by any light particle, called a photon, is proportional to the frequency of the radiation. The formula for this is E = hν, where E is the energy of the radiation, h is a universal constant known as Planck’s constant, and ν is the frequency of the radiation. This proposal—that the energy contained within a light beam is transferred in individual units, or quanta—contradicted a hundred-year-old tradition of considering light energy a manifestation of continuous processes. Virtually no one accepted Einstein’s proposal. In fact, when the American physicist Robert Andrews Millikan experimentally confirmed the theory almost a decade later, he was surprised and somewhat disquieted by the outcome.
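Einstein's relation can be evaluated directly. As a sketch (the value of Planck's constant is the standard one, not given in the article):

```python
# Photon energy via Einstein's relation E = h * nu.
H = 6.626e-34  # Planck's constant in joule-seconds

def photon_energy(frequency_hz: float) -> float:
    """Return the energy in joules of a single photon of the given frequency."""
    return H * frequency_hz

# Green visible light has a frequency of about 5.5e14 Hz, so each
# photon carries only a few times 10^-19 J; the quantization is far
# too fine-grained to notice in everyday experience.
print(f"{photon_energy(5.5e14):.2e} J")
```

The minuscule size of each quantum helps explain why light had long seemed continuous.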
Einstein, whose prime concern was to understand the nature of electromagnetic radiation, subsequently urged the development of a theory that would be a fusion of the wave and particle models for light. Again, very few physicists understood or were sympathetic to these ideas.
Einstein’s third major paper in 1905, ‘On the Electrodynamics of Moving Bodies,’ contained what became known as the special theory of relativity. Since the time of the English mathematician and physicist Sir Isaac Newton, natural philosophers (as physicists and chemists were known) had been trying to understand the nature of matter and radiation, and how they interacted in some unified world picture. The position that mechanical laws are fundamental has become known as the mechanical world view, and the position that electrical laws are fundamental has become known as the electromagnetic world view. Neither approach, however, is capable of providing a consistent explanation for the way radiation (light, for example) and matter interact when viewed from different inertial frames of reference, that is, an interaction viewed simultaneously by an observer at rest and an observer moving at uniform speed.
In the spring of 1905, after considering these problems for ten years, Einstein realized that the crux of the problem lay not in a theory of matter but in a theory of measurement. At the heart of his special theory of relativity was the realization that all measurements of time and space depend on judgments as to whether two distant events occur simultaneously. This led him to develop a theory based on two postulates: the principle of relativity, that physical laws are the same in all inertial reference systems, and the principle of the invariance of the speed of light, that the speed of light in a vacuum is a universal constant. He was thus able to provide a consistent and correct description of physical events in different inertial frames of reference without making special assumptions about the nature of matter or radiation, or how they interact. Virtually no one understood Einstein’s argument.
The difficulty that others had with Einstein’s work was not because it was too mathematically complex or technically obscure; the problem resulted, rather, from Einstein’s beliefs about the nature of good theories and the relationship between experiment and theory. Although he maintained that the only source of knowledge is experience, he also believed that scientific theories are the free creations of a finely tuned physical intuition and that the premises on which theories are based cannot be connected logically to experiment. A good theory, therefore, is one in which a minimum number of postulates is required to account for the physical evidence. This sparseness of postulates, a feature of all Einstein’s work, was what made his work so difficult for colleagues to comprehend, let alone support.
Einstein did have important supporters, however. His chief early patron was the German physicist Max Planck. Einstein remained at the patent office for four years after his star began to rise within the physics community. He then moved rapidly upward in the German-speaking academic world; his first academic appointment was in 1909 at the University of Zürich. In 1911 he moved to the German-speaking university at Prague, and in 1912 he returned to the Swiss National Polytechnic in Zürich. Finally, in 1914, he was appointed director of the Kaiser Wilhelm Institute for Physics in Berlin.
Even before he left the patent office in 1907, Einstein began work on extending and generalizing the theory of relativity to all coordinate systems. He began by enunciating the principle of equivalence, a postulate that gravitational fields are equivalent to accelerations of the frame of reference. For example, people in a moving elevator cannot, in principle, decide whether the force that acts on them is caused by gravitation or by a constant acceleration of the elevator. The full general theory of relativity was not published until 1916. In this theory the interactions of bodies, which heretofore had been ascribed to gravitational forces, are explained as the influence of bodies on the geometry of space-time (four-dimensional space, a mathematical abstraction, having the three dimensions from Euclidean space and time as the fourth dimension).
On the basis of the general theory of relativity, Einstein accounted for the previously unexplained variations in the orbital motion of the planets and predicted the bending of starlight in the vicinity of a massive body such as the sun. The confirmation of this latter phenomenon during an eclipse of the sun in 1919 became a media event, and Einstein’s fame spread worldwide.
For the rest of his life Einstein devoted considerable time to generalizing his theory even more. His last effort, the unified field theory, which was not entirely successful, was an attempt to understand all physical interactions—including electromagnetic interactions and weak and strong interactions—in terms of the modification of the geometry of space-time between interacting entities.
Most of Einstein’s colleagues felt that these efforts were misguided. Between 1915 and 1930 the mainstream of physics was the development of a new conception of the fundamental character of matter, known as quantum theory. This theory contained the feature of wave-particle duality (light exhibits the properties of a particle, as well as of a wave) that Einstein had earlier urged as necessary, as well as the uncertainty principle, which states that precision in measuring processes is limited. Additionally, it contained a novel rejection, at a fundamental level, of the notion of strict causality. Einstein, however, would not accept such notions and remained a critic of these developments until the end of his life. ‘God,’ Einstein once said, ‘does not play dice with the world.’
German-born American physicist Albert Einstein is best known for his work on relativity, but he was also an outspoken political activist. After World War II, he became a strong advocate for disarmament. In this excerpt, he discusses the dangers faced by the world following the invention of the atomic bomb.
After 1919, Einstein became internationally renowned. He accrued honours and awards, including the Nobel Prize in physics in 1921, from various world scientific societies. His visit to any part of the world became a national event; photographers and reporters followed him everywhere. While regretting his loss of privacy, Einstein capitalized on his fame to further his own political and social views.
German-born physicist Albert Einstein became an avowed pacifist during World War I (1914-1918) and continued to speak out for antiwar efforts throughout his life, although he renounced pacifism in the 1930s in the face of the threat to humanity posed by Nazi Germany. In this message, written from Berlin, Germany, in 1931, Einstein stresses the importance of the upcoming World Disarmament Conference, held in 1932. The conference did not produce any substantive agreements, however, and Einstein left Germany in 1933 when Nazi leader Adolf Hitler came to power.
The two social movements that received his full support were pacifism and Zionism. During World War I he was one of a handful of German academics willing to publicly decry Germany’s involvement in the war. After the war his continued public support of pacifist and Zionist goals made him the target of vicious attacks by anti-Semitic and right-wing elements in Germany. Even his scientific theories were publicly ridiculed, especially the theory of relativity.
When Hitler came to power, Einstein immediately decided to leave Germany for the United States. He took a position at the Institute for Advanced Study at Princeton, New Jersey. While continuing his efforts on behalf of world Zionism, Einstein renounced his former pacifist stand in the face of the awesome threat to humankind posed by the Nazi regime in Germany.
Einstein was one of several concerned physicists who collaborated on this letter to President Roosevelt, informing him of the possibility of an unimaginably powerful and dangerous new weapon: the atomic bomb. In the first dark days of World War II, these physicists believed that the Germans were already at work on an atomic bomb, using the results of French and American research. Einstein's letter undoubtedly helped to convince President Roosevelt that the United States had to develop its own atomic weapons program quickly.
In 1939 Einstein collaborated with several other physicists in writing a letter to President Franklin D. Roosevelt, pointing out the possibility of making an atomic bomb and the likelihood that the German government was embarking on such a course. The letter, which bore only Einstein’s signature, helped lend urgency to efforts in the US to build the atomic bomb, but Einstein himself played no role in the work and knew nothing about it at the time.
After the war, Einstein was active in the cause of international disarmament and world government. He continued his active support of Zionism but declined the offer made by leaders of the state of Israel to become president of that country. In the US during the late 1940s and early ‘50s he spoke out on the need for the nation’s intellectuals to make any sacrifice necessary to preserve political freedom. Einstein died in Princeton on April 18, 1955.
Einstein’s efforts on behalf of social causes have sometimes been viewed as unrealistic. In fact, his proposals were always carefully thought out. Like his scientific theories, they were motivated by sound intuition based on a shrewd and careful assessment of evidence and observation. Although Einstein gave much of himself to political and social causes, science always came first, because, he often said, only the discovery of the nature of the universe would have lasting meaning. His writings include Relativity: The Special and General Theory (1916); About Zionism (1931); Builders of the Universe (1932); Why War? (1933), with Sigmund Freud; The World as I See It (1934); The Evolution of Physics (1938), with the Polish physicist Leopold Infeld; and Out of My Later Years (1950). Einstein’s collected papers are being published in a multi-volume work, beginning in 1987.
Astronomy is the study of the universe and the celestial bodies, gas, and dust within it. Astronomy includes observations and theories about the solar system, the stars, the galaxies, and the general structure of space. Astronomy also includes cosmology, the study of the universe and its past and future. People who study astronomy are called astronomers, and they use a wide variety of methods to perform their research. These methods usually involve ideas of physics, so most astronomers are also astrophysicists, and the terms astronomer and astrophysicist are essentially interchangeable. Some areas of astronomy also use techniques of chemistry, geology, and biology.
Astronomy is the oldest science, dating back thousands of years to when primitive people noticed objects in the sky overhead and watched the way the objects moved. In ancient Egypt, the first appearance of certain stars each year marked the onset of the seasonal flood, an important event for agriculture. In 17th-century England, astronomy provided methods of keeping track of time that were especially useful for accurate navigation. Astronomy has a long tradition of practical results, such as our current understanding of the stars, day and night, the seasons, and the phases of the Moon. Much of today's research in astronomy does not address immediate practical problems. Instead, it involves basic research to satisfy our curiosity about the universe and the objects in it. One day such knowledge may well be of practical use to humans.
Amateur astronomers can get a clear view of some astronomical objects even without a telescope. Binoculars can make features on the Moon visible and reveal some detail in more distant objects such as nebulas and some of the planets.
The comet Hyakutake was discovered by Japanese amateur astronomer Yuji Hyakutake in 1996. Hyakutake used a large pair of binoculars to scan the sky. Comet Hyakutake was the second comet that Hyakutake discovered in two months.
Astronomers use tools such as telescopes, cameras, spectrographs, and computers to analyze the light that astronomical objects emit. Amateur astronomers observe the sky as a hobby, while professional astronomers are paid for their research and usually work for large institutions such as colleges, universities, observatories, and government research institutes. Amateur astronomers make valuable observations, but are often limited by lack of access to the powerful and expensive equipment of professional astronomers.
A wide range of astronomical objects is accessible to amateur astronomers. Many solar system objects—such as planets, moons, and comets—are bright enough to be visible through binoculars and small telescopes. Small telescopes are also sufficient to reveal some of the beautiful detail in nebulas—clouds of gas and dust in our galaxy. Many amateur astronomers observe and photograph these objects. The increasing availability of sophisticated electronic instruments and computers over the past few decades has made powerful equipment more affordable and allowed amateur astronomers to expand their observations to much fainter objects. Amateur astronomers sometimes share their observations by posting their photographs on the World Wide Web, a network of information based on connections between computers.
Amateurs often undertake projects that require numerous observations over days, weeks, months, or even years. By searching the sky over a long period of time, amateur astronomers may observe things in the sky that represent sudden change, such as new comets or novas (stars that brighten suddenly). This type of consistent observation is also useful for studying objects that change slowly over time, such as variable stars and double stars. Amateur astronomers observe meteor showers, sunspots, and groupings of planets and the Moon in the sky. They also participate in expeditions to places in which special astronomical events—such as solar eclipses and meteor showers—are most visible. Several organizations, such as the Astronomical League and the American Association of Variable Star Observers, provide meetings and publications through which amateur astronomers can communicate and share their observations.
Professional astronomers usually have access to powerful telescopes, detectors, and computers. Most work in astronomy includes three parts, or phases. Astronomers first observe astronomical objects by guiding telescopes and instruments to collect the appropriate information. Astronomers then analyze the images and data. After the analysis, they compare their results with existing theories to determine whether their observations match with what theories predict, or whether the theories can be improved. Some astronomers work solely on observation and analysis, and some work solely on developing new theories.
Ever since the Italian astronomer Galileo first used a telescope in 1609 to study the heavens, astronomers have sought to build more powerful telescopes to probe the cosmos. Astronomers invented radio, X-ray, ultraviolet, and infrared telescopes to study heavenly objects at all wavelengths in the electromagnetic spectrum. This 1999 article from Scientific American describes seven extraordinary telescopes, from a massive radio array scattered across North America to a neutrino detector buried deep under ice in Antarctica.
Astronomy is such a broad topic that astronomers specialize in one or more parts of the field. For example, the study of the solar system is a different area of specialization than the study of stars. Astronomers who study our galaxy, the Milky Way, often use techniques different from those used by astronomers who study distant galaxies. Many planetary astronomers, such as scientists who study Mars, may have geology backgrounds and not consider themselves astronomers at all. Solar astronomers use different telescopes than nighttime astronomers use, because the Sun is so bright. Theoretical astronomers may never use telescopes at all. Instead, these astronomers use existing data or sometimes only previous theoretical results to develop and test theories. A growing field of astronomy is computational astronomy, in which astronomers use computers to simulate astronomical events. Examples of events for which simulations are useful include the formation of the earliest galaxies of the universe or the explosion of a star to make a supernova.
Astronomers learn about astronomical objects by observing the energy they emit. These objects emit energy in the form of electromagnetic radiation. This radiation travels throughout the universe in the form of waves and can range from gamma rays, which have extremely short wavelengths, to visible light, to radio waves, which are very long. The entire range of these different wavelengths makes up the electromagnetic spectrum.
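The span of the spectrum can be made concrete using the standard relation between wavelength and frequency, c = λν (a relation assumed here, not stated in the article):

```python
# Frequency of electromagnetic radiation from its wavelength, via c = lambda * nu.
C = 2.998e8  # speed of light in metres per second

def frequency(wavelength_m: float) -> float:
    """Return the frequency in hertz of radiation with the given wavelength."""
    return C / wavelength_m

# A gamma ray (wavelength ~1e-12 m) versus a radio wave (~1 m):
# the two ends of the spectrum differ by about twelve orders of magnitude.
print(f"gamma ray: {frequency(1e-12):.2e} Hz")
print(f"radio:     {frequency(1.0):.2e} Hz")
```

Shorter wavelengths correspond to higher frequencies, which is why gamma rays and X rays are the most energetic parts of the spectrum.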
Astronomers gather different wavelengths of electromagnetic radiation depending on the objects that are being studied. The techniques of astronomy are often very different for studying different wavelengths. Conventional telescopes work only for visible light and the parts of the spectrum near visible light, such as the shortest infrared wavelengths and the longest ultraviolet wavelengths. Earth’s atmosphere complicates studies by absorbing many wavelengths of the electromagnetic spectrum. Gamma-ray astronomy, X-ray astronomy, infrared astronomy, ultraviolet astronomy, radio astronomy, visible-light astronomy, cosmic-ray astronomy, gravitational-wave astronomy, and neutrino astronomy all use different instruments and techniques.
Observational astronomers use telescopes or other instruments to observe the heavens. The astronomers who do the most observing, however, probably spend more time using computers than they do using telescopes. A few nights of observing with a telescope often provide enough data to keep astronomers busy for months analyzing the data.
The simplest refracting telescope has two convex lenses, which are thicker in the middle than at the edges. The lens closest to the object is called the objective lens. This lens collects light from a distant source and brings it to a focus as an upside-down image within the telescope tube. The eyepiece lens forms an image that remains inverted. More complex refracting telescopes contain an additional lens to flip the image right-side up.
Until the 20th century, all observational astronomers studied the visible light that astronomical objects emit. Such astronomers are called optical astronomers, because they observe the same part of the electromagnetic spectrum that the human eye sees. Optical astronomers use telescopes and imaging equipment to study light from objects. Professional astronomers today hardly ever actually look through telescopes. Instead, a telescope sends an object’s light to a photographic plate or to an electronic light-sensitive computer chip called a charge-coupled device, or CCD. CCDs are about 50 times more sensitive than film, so today's astronomers can record in a minute an image that would have taken about an hour to record on film.
Telescopes may use either lenses or mirrors to gather visible light, permitting direct observation or photographic recording of distant objects. Those that use lenses are called refracting telescopes, since they use the property of refraction, or bending, of light. The largest refracting telescope is the 40-in (1-m) telescope at the Yerkes Observatory in Williams Bay, Wisconsin, founded in the late 19th century. Lenses bend different colours of light by different amounts, so different colours focus slightly differently. Images produced by large lenses can be tinged with colour, often limiting the observations to those made through filters. Filters limit the image to one colour of light, so the lens bends all of the light in the image the same amount and makes the image more accurate than an image that includes all colours of light. Also, because light must pass through lenses, lenses can only be supported at the very edges. Large lenses are so thick and heavy that all the large telescopes in current use are made with other techniques.
A reflecting telescope uses a curved mirror to focus light. Light from distant objects, such as stars and galaxies, enters the telescope tube in parallel rays. These rays are reflected from the concave objective mirror to a diagonal flat mirror. The diagonal mirror reflects the light through a hole in the side of the telescope tube to a lens in the eyepiece.
Reflecting telescopes, which use mirrors, are easier to make than refracting telescopes and reflect all colours of light equally. All the largest telescopes today are reflecting telescopes. The largest single telescopes are the Keck telescopes at Mauna Kea Observatory in Hawaii. The Keck telescope mirrors are 394 in (10.0 m) in diameter. Mauna Kea Observatory, at an altitude of 4,205 m (13,796 ft), is especially high. The air at the observatory is very clear, so many major telescope projects are located there.
The Hubble Space Telescope, free of the distorting effects of Earth’s atmosphere, has an unprecedented view of distant galaxies. The telescope is capable of recording information in various wavelengths, but its optical telescope has produced some of the most spectacular results. It has revealed some of the most distant and oldest galaxies in the universe and helped astronomers get a clearer picture of our solar system.
The Hubble Space Telescope (HST), a reflecting telescope that orbits Earth, has returned the clearest images of any optical telescope. The main mirror of the HST is only 94 in (2.4 m) across, far smaller than that of the largest ground-based reflecting telescopes. Turbulence in the atmosphere makes it impossible for ground-based telescopes to observe objects as clearly as the HST can. HST images of visible light are about five times finer than any produced by ground-based telescopes. Giant telescopes on Earth, however, collect much more light than the HST can. Examples of such giant telescopes include the twin 32-ft (10-m) Keck telescopes in Hawaii and the four 26-ft (8-m) telescopes in the Very Large Telescope array in the Atacama Desert in northern Chile (the nearest city is Antofagasta, Chile). Often astronomers use space- and ground-based telescopes in conjunction.
Astronomers usually share telescopes. Many institutions with large telescopes accept applications from any astronomer who wishes to use the instruments, though others have limited sets of eligible applicants. The institution then divides the available time among successful applicants and assigns each astronomer an observing period. Astronomers can collect data from telescopes remotely. Data from Earth-based telescopes can be sent electronically over computer networks. Data from space-based telescopes reach Earth through radio waves collected by antennas on the ground.
A gamma-ray telescope detects radiation that has a shorter wavelength than visible light. Gamma rays enter the telescope through the charged-particle detector and pass into layers of material that transform the gamma rays into electrons and positrons. The electrons and positrons have electric charges, which cause sparks as the particles pass through the spark chambers in the lower part of the telescope. Light detectors at the bottom of the telescope record the sparks.
In order to observe celestial X-ray sources, astronomers use a special kind of telescope launched into orbit, because the earth’s atmosphere absorbs X rays from space. The wavelengths of X rays are so short that lenses do not refract, or bend, them as they do ordinary light. However, X rays can be reflected if they make grazing contact with a metal surface. An X-ray telescope uses sets of nested, slightly tapering cylinders to focus X rays onto a detector.
Gamma rays have the shortest wavelengths. Special telescopes in orbit around Earth, such as the National Aeronautics and Space Administration’s (NASA’s) Compton Gamma-Ray Observatory, gather gamma rays before Earth’s atmosphere absorbs them. X rays, which have the next shortest wavelengths, must also be observed from space. NASA’s Chandra X-Ray Observatory (CXO) is a school-bus-sized spacecraft scheduled to begin studying X rays from orbit in 1999. It is designed to make high-resolution images.
Ultraviolet light has wavelengths longer than X rays, but shorter than visible light. Ultraviolet telescopes are similar to visible-light telescopes in the way they gather light, but the atmosphere blocks most ultraviolet radiation. Most ultraviolet observations, therefore, must also take place in space. Most of the instruments on the Hubble Space Telescope (HST) are sensitive to ultraviolet radiation. Humans cannot see ultraviolet radiation, but astronomers can create visual images from ultraviolet light by assigning particular colours or shades to different intensities of radiation.
Infrared telescopes detect radiation that has wavelengths longer than the light that humans can see. Infrared radiation enters the telescope and reflects off of a large mirror on the bottom of the telescope, then off of a smaller mirror. Detectors and instruments beneath the mirrors record the radiation. Infrared telescopes must be kept at very low temperatures to prevent their own heat from producing infrared radiation that could interfere with observations.
Infrared astronomers study parts of the infrared spectrum, which consists of electromagnetic waves with wavelengths ranging from just longer than visible light to 1,000 times longer than visible light. Earth’s atmosphere absorbs much infrared radiation, so astronomers must collect infrared radiation from places where the atmosphere is very thin, or from above the atmosphere. Observatories for some of these wavelengths are located on certain high mountaintops, but most infrared wavelengths can be observed only from space. Every warm object emits some infrared radiation. Infrared astronomy is useful because objects that are not hot enough to emit visible or ultraviolet radiation may still emit infrared radiation. Infrared radiation also passes through interstellar and intergalactic gas and dust more easily than radiation with shorter wavelengths. Further, the brightest part of the spectrum from the farthest galaxies in the universe is shifted into the infrared. The Next Generation Space Telescope, which NASA plans to launch in 2006, will operate especially in the infrared.
The Very Large Array is a collection of parabolic dish antennas, located near Socorro, New Mexico. The 27 antennas are attached to a system of Y-shaped tracks; each track is 21 km (13 mi) in length. The individual signals from each telescope are combined into one high-resolution image, making the array the world's largest radio telescope.
Radio waves have the longest wavelengths. Radio astronomers use giant dish antennas to collect and focus signals in the radio part of the spectrum. These celestial radio signals, often from hot bodies in space or from objects with strong magnetic fields, come through Earth's atmosphere to the ground. Radio waves penetrate dust clouds, allowing astronomers to see into the centre of our galaxy and into the cocoons of dust that surround forming stars.
Sometimes astronomers study emissions from space that are not electromagnetic radiation. Some of the particles of interest to astronomers are neutrinos, cosmic rays, and gravitational waves. Neutrinos are tiny particles with no electric charge and very little or no mass. The Sun and supernovas emit neutrinos. Most neutrino telescopes consist of huge underground tanks of liquid. These tanks capture a few of the many neutrinos that strike them, while the vast majority of neutrinos pass right through the tanks.
Cosmic rays are electrically charged particles that come to Earth from outer space at almost the speed of light. They are made up of negatively charged particles called electrons and positively charged nuclei of atoms. Astronomers do not know where most cosmic rays come from, but they use cosmic-ray detectors to study the particles. Cosmic-ray detectors are usually grids of wires that produce an electrical signal when a cosmic ray passes close to them.
Gravitational waves are a predicted consequence of the general theory of relativity developed by German-born American physicist Albert Einstein. Since the 1960s astronomers have been building detectors for gravitational waves. Older gravitational-wave detectors were huge instruments that surrounded a carefully measured and positioned massive object suspended from the top of the instrument. Lasers trained on the object were designed to measure the object’s movement, which theoretically would occur when a gravitational wave hit the object. At the end of the 20th century, these instruments had picked up no gravitational waves. Gravitational waves should be very weak, and the instruments were probably not yet sensitive enough to register them. In the 1970s and 1980s American physicists Joseph Taylor and Russell Hulse observed indirect evidence of gravitational waves by studying systems of double pulsars. A new generation of gravitational-wave detectors, developed in the 1990s, uses interferometers to measure distortions of space that would be caused by passing gravitational waves.
Some objects emit radiation more strongly in one wavelength than in another, but a set of data across the entire spectrum of electromagnetic radiation is much more useful than observations in any one wavelength. For example, the supernova remnant known as the Crab Nebula has been observed in every part of the spectrum, and astronomers have used all the discoveries together to make a complete picture of how the Crab Nebula is evolving.
Whether astronomers take data from a ground-based telescope or have data radioed to them from space, they must then analyze the data. Usually the data are handled with the aid of a computer, which can carry out various manipulations the astronomer requests. For example, some of the individual picture elements, or pixels, of a CCD may be slightly more sensitive than others. Consequently, astronomers sometimes take images of blank sky to measure which pixels appear brighter. They can then take these variations into account when interpreting the actual celestial images. Astronomers may write their own computer programs to analyze data or, as is increasingly the case, use certain standard computer programs developed at national observatories or elsewhere.
Often an astronomer uses observations to test a specific theory. Sometimes, a new experimental capability allows astronomers to study a new part of the electromagnetic spectrum or to see objects in greater detail or through special filters. If the observations do not verify the predictions of a theory, the theory must be discarded or, if possible, modified.
On a clear night, far from city lights, up to about 3,000 stars are visible at a time from Earth with the unaided eye. A view at night may also show several planets and perhaps a comet or a meteor shower. Increasingly, human-made light pollution is making the sky less dark, limiting the number of visible astronomical objects. During the daytime the Sun shines brightly. The Moon and bright planets are sometimes visible early or late in the day but are rarely seen at midday.
Earth moves in two basic ways: It turns in place, and it revolves around the Sun. Earth turns around its axis, an imaginary line that runs down its centre through its North and South poles. The Moon also revolves around Earth. All of these motions produce day and night, the seasons, the phases of the Moon, and solar and lunar eclipses.
Earth is about 12,000 km (about 7,000 mi) in diameter. As it revolves, or moves in a circle, around the Sun, Earth spins on its axis. This spinning movement is called rotation. Earth’s axis is tilted 23.5° with respect to the plane of its orbit. Each time Earth rotates on its axis, it goes through one day, a cycle of light and dark. Humans artificially divide the day into 24 hours and then divide the hours into 60 minutes and the minutes into 60 seconds.
Earth revolves around the Sun once every year, or 365.25 days (most people use a 365-day calendar and take care of the extra 0.25 day by adding a day to the calendar every four years, creating a leap year). The orbit of Earth is almost, but not quite, a circle, so Earth is sometimes a little closer to the Sun than at other times. If Earth were upright as it revolved around the Sun, each point on Earth would have exactly 12 hours of light and 12 hours of dark each day. Because Earth is tilted, however, the northern hemisphere sometimes points toward the Sun and sometimes points away from the Sun. This tilt is responsible for the seasons. When the northern hemisphere points toward the Sun, the northernmost regions of Earth see the Sun 24 hours a day. During this period, which lasts for half of the year, the whole northern hemisphere gets more sunlight, and gets it at a more direct angle, than the southern hemisphere does. This half of the year is the northern hemisphere's spring and summer, which corresponds to fall and winter in the southern hemisphere. During the other half of the year, the southern hemisphere points more directly toward the Sun, so it is spring and summer in the southern hemisphere and fall and winter in the northern hemisphere.
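The leap-year bookkeeping described above amounts to simple arithmetic. The short Python sketch below is an illustration of the every-fourth-year rule from this passage (the modern Gregorian calendar refines the rule for century years):

```python
# Illustrative sketch: why calendars need leap days.
# The year is about 365.25 days, but calendars count whole days.
YEAR_DAYS = 365.25
CALENDAR_DAYS = 365

# Accumulated error after 4 calendar years: one full day.
drift = (YEAR_DAYS - CALENDAR_DAYS) * 4
print(drift)  # 1.0 day, absorbed by adding February 29 every fourth year

def is_leap(year):
    """Simple every-fourth-year rule described in the text.
    (The Gregorian calendar refines this for century years.)"""
    return year % 4 == 0

print([y for y in range(2000, 2010) if is_leap(y)])  # [2000, 2004, 2008]
```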
One revolution of the Moon around Earth takes a little over 27 days 7 hours. The Moon rotates on its axis in this same period of time, so the same face of the Moon is always presented to Earth. Over a period a little longer than 29 days 12 hours, the Moon goes through a series of phases, in which the amount of the lighted half of the Moon we see from Earth changes. These phases are caused by the changing angle of sunlight hitting the Moon. (The period of phases is longer than the period of revolution of the Moon, because the motion of Earth around the Sun changes the angle at which the Sun’s light hits the Moon from night to night.)
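The gap between the Moon's 27-day revolution and its 29.5-day cycle of phases follows from Earth's own motion around the Sun, and can be checked with the standard relation 1/synodic = 1/sidereal - 1/year. A rough Python sketch, with approximate values:

```python
# Sketch of the relation between the Moon's orbital (sidereal) period
# and its cycle of phases (synodic period). Values are approximate.
SIDEREAL_MONTH = 27.32   # days: one revolution relative to the stars
YEAR = 365.25            # days: Earth's trip around the Sun

# Because Earth moves around the Sun, the Moon needs a little over two
# extra days to return to the same Sun-Moon-Earth angle:
#   1/synodic = 1/sidereal - 1/year
synodic = 1 / (1 / SIDEREAL_MONTH - 1 / YEAR)
print(round(synodic, 2))  # about 29.53 days, a little over 29 days 12 hours
```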
The Moon’s orbit around Earth is tilted 5° from the plane of Earth’s orbit. Because of this tilt, when the Moon is at the point in its orbit when it is between Earth and the Sun, the Moon is usually a little above or below the Sun. At that time, the Sun lights the side of the Moon facing away from Earth, and the side of the Moon facing toward Earth is dark. This point in the Moon’s orbit corresponds to a phase of the Moon called the new moon. A quarter moon occurs when the Moon is at right angles to the line formed by the Sun and Earth. The Sun lights the side of the Moon closest to it, and half of that side is visible from Earth, forming a bright half-circle. When the Moon is on the opposite side of Earth from the Sun, the face of the Moon visible from Earth is lit, showing the full moon in the sky.
Because of the tilt of the Moon's orbit, the Moon usually passes above or below the Sun at new moon and above or below Earth's shadow at full moon. Sometimes, though, the full moon or new moon crosses the plane of Earth's orbit. By a coincidence of nature, even though the Moon is about 400 times smaller than the Sun, it is also about 400 times closer to Earth than the Sun is, so the Moon and Sun look almost exactly the same size from Earth. If the Moon lines up with the Sun and Earth at new moon (when the Moon is between Earth and the Sun), it blocks the Sun’s light from Earth, creating a solar eclipse. If the Moon lines up with Earth and the Sun at the full moon (when Earth is between the Moon and the Sun), Earth’s shadow covers the Moon, making a lunar eclipse.
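The 400-to-1 coincidence can be verified with the small-angle formula: an object's angular size in radians is roughly its diameter divided by its distance. A sketch using approximate mean values:

```python
import math

# Rough check of the size-distance coincidence, using approximate
# mean values for diameters and distances.
MOON_DIAMETER = 3_475       # km
MOON_DISTANCE = 384_400     # km
SUN_DIAMETER = 1_392_000    # km
SUN_DISTANCE = 149_600_000  # km

def angular_size_deg(diameter, distance):
    """Angle subtended by a disk, in degrees (small-angle approximation)."""
    return math.degrees(diameter / distance)

moon = angular_size_deg(MOON_DIAMETER, MOON_DISTANCE)
sun = angular_size_deg(SUN_DIAMETER, SUN_DISTANCE)
print(round(moon, 2), round(sun, 2))  # both about half a degree
```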
During a solar eclipse, the Moon moves between the Sun and Earth. The light from the outer part of the Sun’s atmosphere, called the corona, became visible during a total solar eclipse on July 11, 1991, in La Paz, Baja California, Mexico. The Moon’s shadow on Earth appeared only as a thin band not more than 269 km (167 mi) wide.
A total solar eclipse is visible from only a small region of Earth. During a solar eclipse, the complete shadow of the Moon that falls on Earth is only about 160 km (about 100 mi) wide. As Earth, the Sun, and the Moon move, however, the Moon’s shadow sweeps out a path up to 16,000 km (10,000 mi) long. The total eclipse can only be seen from within this path. A total solar eclipse occurs about every 18 months. Off to the sides of the path of a total eclipse, a partial eclipse, in which the Sun is only partly covered, is visible. Partial eclipses are much less dramatic than total eclipses. The Moon’s orbit around Earth is slightly elliptical, or oval. The distance between Earth and the Moon varies slightly as the Moon orbits Earth. When the Moon is farther from Earth than usual, it appears smaller and may not cover the entire Sun during an eclipse. A ring, or annulus, of sunlight remains visible, making an annular eclipse. An annular solar eclipse also occurs about every 18 months. Additional partial solar eclipses are also visible from Earth in between.
During a lunar eclipse, the Moon passes through Earth's shadow. When the Moon is completely in the shadow, the total lunar eclipse is visible from everywhere on the half of Earth from which the Moon is visible at that time. As a result, more people see total lunar eclipses than see total solar eclipses.
In an open place on a clear dark night, streaks of light may appear in a random part of the sky about once every 10 minutes. These streaks are meteors—bits of rock—burning up in Earth's atmosphere. The bits of rock are called meteoroids, and when these bits survive Earth’s atmosphere intact and land on Earth, they are known as meteorites.
Every month or so, Earth passes through the orbit of a comet. Dust from the comet remains in the comet's orbit. When Earth passes through the band of dust, the dust and bits of rock burn up in the atmosphere, creating a meteor shower. Many more meteors are visible during a meteor shower than on an ordinary night. The most observed meteor shower is the Perseid shower, which occurs each year on August 11th or 12th.
Humans have picked out landmarks in the sky and mapped the heavens for thousands of years. Maps of the sky helped people navigate, measure time, and track celestial events. Now astronomers methodically map the sky to produce a universal format for the addresses of stars, galaxies, and other objects of interest.
Some of the stars in the sky are brighter and more noticeable than others are, and some of these bright stars appear to the eye to be grouped together. Ancient civilizations imagined that groups of stars represented figures in the sky. The oldest known representations of these groups of stars, called constellations, are from ancient Sumer (now Iraq) from about 4000 BC. The constellations recorded by ancient Greeks and Chinese resemble the Sumerian constellations. The northern hemisphere constellations that astronomers recognize today are based on the Greek constellations. Explorers and astronomers developed and recorded the official constellations of the southern hemisphere in the 16th and 17th centuries. The International Astronomical Union (IAU) officially recognizes 88 constellations. The IAU defined the boundaries of each constellation, so the 88 constellations divide the sky without overlapping.
Ancient astronomers noted that the Sun makes a yearly journey across the celestial sphere, part of which is represented in this picture by the blue band. The ancient astronomers associated dates with the constellations in this narrow belt (which is known as the zodiac), assigning to each constellation of stars the dates when the Sun was in the same region of the celestial sphere as the constellation. The twelve zodiacal signs for these constellations were named by the 2nd-century astronomer Ptolemy, as follows: Aries (ram), Taurus (bull), Gemini (twins), Cancer (crab), Leo (lion), Virgo (virgin), Libra (balance), Scorpio (scorpion), Sagittarius (archer), Capricorn (goat), Aquarius (water-bearer), and Pisces (fishes).
Astronomers use coordinate systems to label the positions of objects in the sky, just as geographers use longitude and latitude to label the positions of objects on Earth. Astronomers use several different coordinate systems. The two most widely used are the altazimuth system and the equatorial system. The altazimuth system gives an object’s coordinates with respect to the sky visible above the observer. The equatorial coordinate system designates an object’s location with respect to Earth’s entire night sky, or the celestial sphere.
One of the ways astronomers give the position of a celestial object is by specifying its altitude and its azimuth. This coordinate system is called the altazimuth system. The altitude of an object is equal to its angle, in degrees, above the horizon. An object at the horizon would have an altitude of 0°, and an object directly overhead would have an altitude of 90°. The azimuth of an object is equal to its angle in the horizontal direction, with north at 0°, east at 90°, south at 180°, and west at 270°. For example, if an astronomer were looking for an object at 23° altitude and 87° azimuth, the astronomer would know to look fairly low in the sky and almost directly east.
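As a rough illustration of how an azimuth reads as a compass direction (the helper function and its eight-point rounding below are our own, for illustration only):

```python
# A small helper illustrating how altazimuth coordinates read in
# practice. Azimuth runs clockwise from north (0) through east (90),
# south (180), and west (270).
def compass_point(azimuth):
    """Map an azimuth in degrees to the nearest of 8 compass points."""
    points = ["north", "northeast", "east", "southeast",
              "south", "southwest", "west", "northwest"]
    return points[round(azimuth / 45) % 8]

# The example from the text: altitude 23, azimuth 87 -> low in the sky,
# almost due east.
print(compass_point(87))   # east
print(compass_point(180))  # south
```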
As Earth rotates, astronomical objects appear to rise and set, so their altitudes and azimuths are constantly changing. An object’s altitude and azimuth also vary according to an observer’s location on Earth. Therefore, astronomers almost never use altazimuth coordinates to record an object’s position. Instead, astronomers with altazimuth telescopes translate coordinates from equatorial coordinates to find an object. Telescopes that use an altazimuth mounting system may be simple to set up, but they require many calculated movements to keep them pointed at an object as it moves across the sky. These telescopes fell out of use with the development of the equatorial coordinate and mounting system in the early 1800s. Computers, however, have made altazimuth systems popular again. Altazimuth mounting systems are simple and inexpensive, and—with computers to do the required calculations and control the motor that moves the telescope—they are practical.
The celestial sphere is an imaginary globe surrounding Earth. Astronomers give stars coordinates from the globe to locate them just as geographers give latitude and longitude coordinates to places on Earth. Right ascension is the celestial equivalent of longitude, and declination is the celestial equivalent of latitude.
The equatorial coordinate system is a coordinate system fixed on the sky. In this system, a star keeps the same coordinates no matter what the time is or where the observer is located. The equatorial coordinate system is based on the celestial sphere. The celestial sphere is a giant imaginary globe surrounding Earth. This sphere has north and south celestial poles directly above Earth’s North and South poles. It has a celestial equator, directly above Earth’s equator. Another important part of the celestial sphere is the line that marks the movement of the Sun with respect to the stars throughout the year. This path is called the ecliptic. Because Earth is tilted with respect to its orbit around the Sun, the ecliptic is not the same as the celestial equator. The ecliptic is tilted 23.5° to the celestial equator and crosses the celestial equator at two points on opposite sides of the celestial sphere. The crossing points are called the vernal (or spring) equinox and the autumnal equinox. The vernal equinox and autumnal equinox mark the beginning of spring and fall, respectively. The points at which the ecliptic and celestial equator are farthest apart are called the summer solstice and the winter solstice, which mark the beginning of summer and winter, respectively.
As Earth rotates on its axis each day, the stars and other distant astronomical objects appear to rise in the eastern part of the sky and set in the west. They seem to travel in circles around Earth’s North or South poles. In the equatorial coordinate system, the celestial sphere turns with the stars (but this movement is really caused by the rotation of Earth). The celestial sphere makes one complete rotation every 23 hours 56 minutes, which is four minutes shorter than a day measured by the movement of the Sun. A complete rotation of the celestial sphere is called a sidereal day. Because the sidereal day is slightly shorter than a solar day, the stars that an observer sees from any location on Earth change slightly from night to night. The difference between a sidereal day and a solar day occurs because of Earth’s motion around the Sun.
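The four-minute difference can be derived from the fact that Earth makes one extra rotation relative to the stars each year. A back-of-envelope Python check:

```python
# Why the sidereal day is about 4 minutes shorter than the solar day:
# over one year, 365.25 solar days correspond to 366.25 rotations of
# Earth relative to the stars (one extra spin per orbit of the Sun).
solar_days_per_year = 365.25
rotations_per_year = solar_days_per_year + 1  # one extra spin per orbit

sidereal_day_hours = 24 * solar_days_per_year / rotations_per_year
hours = int(sidereal_day_hours)
minutes = (sidereal_day_hours - hours) * 60
print(hours, round(minutes))  # 23 56 -- about 23 hours 56 minutes
```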
The equivalent of longitude on the celestial sphere is called right ascension, and the equivalent of latitude is declination. Specifying the right ascension of a star is like specifying the east-west position of a place on Earth by its distance from the prime meridian, the line that runs through Greenwich, England. Right ascension starts at the vernal equinox. Longitude on Earth is given in degrees, but right ascension is given in units of time—hours, minutes, and seconds. This is because the celestial equator is divided into 24 equal parts, each spanning 15° and called an hour of right ascension. Each hour is made up of 60 minutes, and each minute of 60 seconds. Measuring right ascension in units of time makes it easier for astronomers to determine the best time for observing an object. A particular line of right ascension will be at its highest point in the sky above a particular place on Earth four minutes earlier each day, so keeping track of the movement of the celestial sphere with an ordinary clock would be complicated. Astronomers instead have special clocks that keep sidereal time (24 sidereal hours are equal to 23 hours 56 minutes of familiar solar time). Astronomers compare the current sidereal time to the right ascension of the object they wish to view. The object will be highest in the sky when the sidereal time equals the right ascension of the object.
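The conversion between hours of right ascension and degrees is straightforward, since 24 hours span the full 360° circle, making 1 hour equal to 15°. A small sketch (the helper name is our own):

```python
# Right ascension is clocked in hours: the 360-degree celestial equator
# is divided into 24 hours, so 1 hour of RA = 15 degrees.
def ra_to_degrees(hours, minutes=0, seconds=0):
    """Convert right ascension in h/m/s to degrees along the celestial equator."""
    return (hours + minutes / 60 + seconds / 3600) * 15

print(ra_to_degrees(24))     # 360.0 -- a full circle
print(ra_to_degrees(6, 45))  # 101.25 -- Sirius's right ascension of 6h 45m
```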
The direction perpendicular to right ascension—and the equivalent of latitude on Earth—is declination. Declination is measured in degrees. These degrees are divided into arcminutes and arcseconds. One arcminute is equal to 1/60 of a degree, and one arcsecond is equal to 1/60 of an arcminute, or 1/3600 of a degree. The celestial equator is at declination 0°, the north celestial pole is at declination 90°, and the south celestial pole has a declination of –90°. Each star has a right ascension and a declination that mark its position in the sky. The brightest star, Sirius, for example, has right ascension 6 hours 45 minutes (abbreviated as 6h 45m) and declination –16 degrees 43 arcminutes (written –16° 43').
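Converting a declination given in degrees and arcminutes to decimal degrees is just a matter of the 1/60 and 1/3600 factors; a short sketch using Sirius's coordinates (the helper name is our own):

```python
# Declination in degrees/arcminutes/arcseconds to decimal degrees
# (1 arcminute = 1/60 degree, 1 arcsecond = 1/3600 degree).
def dec_to_degrees(degrees, arcmin=0, arcsec=0):
    """Combine a sexagesimal declination into decimal degrees,
    keeping the sign of the leading degrees figure."""
    sign = -1 if degrees < 0 else 1
    return sign * (abs(degrees) + arcmin / 60 + arcsec / 3600)

# Sirius: declination -16 degrees 43 arcminutes
print(round(dec_to_degrees(-16, 43), 2))  # -16.72
```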
Stars are so far away from Earth that the main star motion we see results from Earth’s rotation. Stars do move in space, however, and these proper motions slightly change the coordinates of the nearest stars over time. The effects of the Sun and the Moon on Earth also cause slight changes in Earth’s axis of rotation. These changes, called precession, cause a slow drift in right ascension and declination. To account for precession, astronomers redefine the celestial coordinates every 50 years or so.
Solar systems, both our own and those located around other stars, are a major area of research for astronomers. A solar system consists of a central star orbited by planets or smaller rocky bodies. The gravitational force of the star holds the system together. In our solar system, the central star is the Sun. It holds all the planets, including Earth, in their orbits and provides light and energy necessary for life. Our solar system is just one of many. Astronomers are just beginning to be able to study other solar systems.
Our solar system contains the Sun, nine planets (of which Earth is third from the Sun), and the planets’ satellites. It also contains asteroids, comets, and interplanetary dust and gas.
Nine known planets revolve around the Sun in our solar system. The planets, shown here in order of their distance from the Sun, vary greatly in size, rotation, colour, and composition. For instance, Mercury, a small, hot planet, is, on average, 58 million km (36 million mi) from the Sun, while icy Pluto is 5.9 billion km (3.67 billion mi) away. Venus rotates very slowly on its axis, taking about 243 Earth days to complete one rotation. Jupiter is the largest planet in the system, with a volume 1,400 times greater than that of Earth. Saturn has a broad set of rings and features more than twenty satellites. Mars is characterized by orange colouration and distinct polar ice caps, while methane in the atmospheres of Uranus and Neptune makes these planets a bright blue-green. In addition to being the farthest planet from the Sun, Pluto has the longest period of revolution: 247.7 years.
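The "1,400 times" figure for Jupiter's volume can be roughly checked from the two planets' equatorial radii, since volume scales as the cube of the radius (the radii below are approximate):

```python
# Consistency check of Jupiter's volume relative to Earth's,
# using approximate equatorial radii.
EARTH_RADIUS = 6_378     # km
JUPITER_RADIUS = 71_492  # km

# Volume of a sphere scales as radius cubed, so the ratio of radii
# cubed gives the ratio of volumes.
volume_ratio = (JUPITER_RADIUS / EARTH_RADIUS) ** 3
print(round(volume_ratio))  # ~1408 -- close to the 1,400 figure
```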
Until the end of the 18th century, humans knew of five planets—Mercury, Venus, Mars, Jupiter, and Saturn—in addition to Earth. When viewed without a telescope, planets appear to be dots of light in the sky. They shine steadily, while stars seem to twinkle. Twinkling results from turbulence in Earth's atmosphere. Stars are so far away that they appear as tiny points of light. A moment of turbulence can change that light for a fraction of a second. Even though they look the same size as stars to unaided human eyes, planets are close enough that they take up more space in the sky than stars do. The disks of planets are big enough to average out variations in light caused by turbulence and therefore do not twinkle.
Between 1781 and 1930, astronomers found three more planets - Uranus, Neptune, and Pluto. This brought the total number of planets in our solar system to nine. In order of increasing distance from the Sun, the planets in our solar system are Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto.
Astronomers call the inner planets—Mercury, Venus, Earth, and Mars—the terrestrial planets. Terrestrial (from the Latin word terra, meaning ‘Earth’) planets are Earthlike in that they have solid, rocky surfaces. The next group of planets—Jupiter, Saturn, Uranus, and Neptune—is called the Jovian planets, or the giant planets. The word Jovian has the same Latin root as the word Jupiter. Astronomers call these planets the Jovian planets because they resemble Jupiter in that they are giant, massive planets made almost entirely of gas. The mass of Jupiter, for example, is 318 times the mass of Earth. The Jovian planets have no solid surfaces, although they probably have rocky cores several times more massive than Earth. Rings of chunks of ice and rock surround each of the Jovian planets. The rings around Saturn are the most familiar.
Pluto, the outermost planet, is tiny, with a mass about one five-hundredth the mass of Earth. Pluto seems out of place, with its tiny, solid body out beyond the giant planets. Many astronomers believe that Pluto is really just the largest, or one of the largest, of a group of icy objects in the outer solar system. These objects orbit in a part of the solar system called the Kuiper Belt. Even if astronomers decide that Pluto belongs to the Kuiper Belt objects, it will probably still be called a planet for historical reasons.
Most of the planets have moons, or satellites. Earth's Moon has a diameter about one-fourth the diameter of Earth. Mars has two tiny chunks of rock, Phobos and Deimos, each only about 10 km (about 6 mi) across. Jupiter has more than 60 satellites. The largest four, known as the Galilean satellites, are Io, Europa, Ganymede, and Callisto. Ganymede is even larger than the planet Mercury. Saturn has more than 30 satellites. Saturn’s largest moon, Titan, is also larger than the planet Mercury and is enshrouded by a thick, opaque, smoggy atmosphere. Uranus has nearly 30 known moons, and Neptune has at least 11 moons. Pluto has one moon, called Charon. Charon is more than half as big as Pluto.
A comet is classified by its period, the length of time it takes to travel once around the Sun. A short-period comet has an orbit approximately as large as Jupiter’s; such a comet has a period of 3.3 to 9 years. A comet with a longer period follows a path about the size of Neptune’s orbit; Halley’s Comet, which returns about every 76 years, is an example. A very long-period comet may take thousands of years to orbit the Sun, or it may pass by the Sun once and then never return.
Comets and asteroids are rocky and icy bodies that are smaller than planets. The distinction between comets, asteroids, and other small bodies in the solar system is a little fuzzy, but generally a comet is icier than an asteroid and has a more elongated orbit. The orbit of a comet takes it close to the Sun, then back into the outer solar system. When comets near the Sun, some of their ice turns from solid material into gas, releasing some of their dust. Comets have long tails of glowing gas and dust when they are near the Sun. Asteroids are rockier bodies and usually have orbits that keep them always at about the same distance from the Sun.
Asteroids and comets have long been the subject of scientific research. Only recently, as a result of the release of several motion pictures, has public concern been raised about a possible asteroid/comet collision with Earth. But while there are numerous instruments in place to warn of an ‘extraterrestrial bombardment,’ how effective would such a warning be? Could catastrophe be averted?
Both comets and asteroids have their origins in the early solar system. While the solar system was forming, many small, rocky objects called planetesimals condensed from the gas and dust of the early solar system. Millions of planetesimals remain in orbit around the Sun. A large spherical cloud of such objects out beyond Pluto forms the Oort cloud. The objects in the Oort cloud are considered comets. When our solar system passes close to another star or drifts closer than usual to the centre of our galaxy, the change in gravitational pull may disturb the orbit of one of the icy comets in the Oort cloud. As this comet falls toward the Sun, the ice turns into vapour, freeing dust from the object. The gas and dust form the tail or tails of the comet. The gravitational pull of large planets such as Jupiter or Saturn may swerve the comet into an orbit closer to the Sun. The time needed for a comet to make a complete orbit around the Sun is called the comet’s period. Astronomers believe that comets with periods longer than about 200 years come from the Oort Cloud. Short-period comets, those with periods less than about 200 years, probably come from the Kuiper Belt, a ring of planetesimals beyond Neptune. The material in comets is probably from the very early solar system, so astronomers study comets to find out more about our solar system’s formation.
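Comet periods follow Kepler's third law: for orbits around the Sun, the square of the period in years equals the cube of the orbit's semimajor axis in astronomical units. A sketch with rough orbit sizes (the specific semimajor axes below are approximations of our own):

```python
# Kepler's third law for orbits around the Sun: P^2 = a^3, with the
# period P in years and the semimajor axis a in astronomical units.
def period_years(semimajor_axis_au):
    """Orbital period in years from the semimajor axis in AU."""
    return semimajor_axis_au ** 1.5

print(round(period_years(5.2)))     # ~12 years: an orbit the size of Jupiter's
print(round(period_years(17.8)))    # ~75 years: roughly Halley's Comet
print(round(period_years(10_000)))  # ~1,000,000 years: an Oort cloud comet
```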
When the solar system was forming, some of the planetesimals came together more toward the centre of the solar system. Gravitational forces from the giant planet Jupiter prevented these planetesimals from forming full-fledged planets. Instead, the planetesimals broke up to create thousands of minor planets, or asteroids, that orbit the Sun. Most of them are in the asteroid belt, between the orbits of Mars and Jupiter, but thousands are in orbits that come closer to Earth or even cross Earth's orbit. Scientists are increasingly aware of potential catastrophes if any of the largest of these asteroids hits Earth. Perhaps 2,000 asteroids larger than 1 km (0.6 mi) in diameter are potential hazards.
The chromosphere is a layer of the Sun’s atmosphere. Astronomers cannot see it in ordinary visible light, so they use instruments that detect other wavelengths of light, then transform the data into pictures that they can see. Astronomers using the European Solar and Heliospheric Observatory (SOHO) used such a process to obtain images of the chromosphere.
The Sun is the nearest star to Earth and is the centre of the solar system. It is only 8 light-minutes away from Earth, meaning light takes only eight minutes to travel from the Sun to Earth. The next nearest star is 4 light-years away, so light from this star, Proxima Centauri (part of the triple star Alpha Centauri), takes four years to reach Earth. The Sun's closeness means that the light and other energy we get from the Sun dominate Earth’s environment and life. The Sun also provides a way for astronomers to study stars. They can see details and layers of the Sun that are impossible to see on more distant stars. In addition, the Sun provides a laboratory for studying hot gases held in place by magnetic fields. Scientists would like to create similar conditions (hot gases contained by magnetic fields) on Earth. Creating such environments could be useful for studying basic physics.
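The "8 light-minutes" figure follows directly from the Earth-Sun distance and the speed of light; a quick check with approximate values:

```python
# Light-travel time from the Sun to Earth, with approximate values.
SPEED_OF_LIGHT = 299_792  # km/s
EARTH_SUN = 149_600_000   # km: one astronomical unit

minutes = EARTH_SUN / SPEED_OF_LIGHT / 60
print(round(minutes, 1))  # about 8.3 minutes
```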
Regions of the Sun include the core, radiation zone, convection zone, and photosphere. Gases in the core are about 150 times as dense as water and reach temperatures as high as 16 million degrees C (29 million degrees F). The Sun’s energy is produced in the core through nuclear fusion of hydrogen atoms into helium. In the radiation zone, heat flows outward through gases that are about as dense as water. The radiation zone is cooler than the core, about 2.5 million degrees C (4.5 million degrees F). In the convection zone, churning motions of the gases carry the Sun’s energy further outward. The convection zone is slightly cooler, about 2 million degrees C (3.6 million degrees F), and less dense, about one-tenth as dense as water. The photosphere is much cooler, about 5500° C (10,000° F) and much less dense, about one-millionth that of water. The turbulence of this region is visible from Earth in the form of sunspots, solar flares, and small patches of gas called granules.
The Sun produces its energy by fusing hydrogen into helium in a process called nuclear fusion. In nuclear fusion, two atoms merge to form a heavier atom and release energy. The Sun and stars of similar mass start off with enough hydrogen to shine for about 10 billion years. The Sun is less than halfway through its lifetime.
The Mars Pathfinder spacecraft, launched by the United States in 1997, was made up of a lander containing weather equipment and cameras, and a small rover, which explored the surface of Mars around the lander. The lander folded up around the equipment and the rover for the journey to Mars, then unfolded when it reached the planet's surface.
Although most telescopes are used mainly to collect the light of faint objects so that they can be studied, telescopes for planetary and other solar system studies are also used to magnify images. Astronomers use some of the observing time of several important telescopes for planetary studies. In general, planetary astronomers must apply and compete for observing time on telescopes with astronomers seeking to study other objects. Some planetary objects can be studied as they pass in front of, or occult, distant stars. The atmosphere of Neptune's moon Triton and the shapes of asteroids can be investigated in this way, for example. The fields of radio and infrared astronomy are useful for measuring the temperatures of planets and satellites. Ultraviolet astronomy can help astronomers study the magnetic fields of planets.
Nuclear reactions within the Sun produce extremely hot gases that emit X rays. The Sun’s magnetic field captures some of these gases and holds them in the Sun’s corona, or outer atmosphere. In this X-ray photograph of the Sun, regions in which the Sun’s magnetic field is strong and can hold more X-ray producing gas are bright, while less active regions are dark.
During the space age, scientists have developed telescopes and other devices, such as instruments to measure magnetic fields or space dust, that can leave Earth's surface and travel close to other objects in the solar system. Robotic spacecraft have visited all of the planets in the solar system except Pluto. Some missions have targeted specific planets and spent much time studying a single planet, and some spacecraft have flown past a number of planets.
Astronomers use different telescopes to study the Sun than they use for nighttime studies because of the extreme brightness of the Sun. Telescopes in space, such as the Solar and Heliospheric Observatory (SOHO) and the Transition Region and Coronal Explorer (TRACE), are able to study the Sun in regions of the spectrum other than visible light. X rays, ultraviolet, and radio waves from the Sun are especially interesting to astronomers. Studies in various parts of the spectrum give insight into giant flows of gas in the Sun, into how the Sun's energy leaves the Sun to travel to Earth, and into what the interior of the Sun is like. Astronomers also study solar-terrestrial relations—the relation of activity on the Sun with magnetic storms and other effects on Earth. Some of these storms and effects can affect radio reception, cause electrical blackouts, or damage satellites in orbit.
Our solar system began forming about 5 billion years ago, when a cloud of gas and dust between the stars in our Milky Way Galaxy began contracting. A nearby supernova—an exploding star—may have started the contraction, but most astronomers believe a random change in density in the cloud caused the contraction. Once the cloud—known as the solar nebula—began to contract, the contraction occurred faster and faster. The gravitational energy caused by this contraction heated the solar nebula. As the cloud became smaller, it began to spin faster, much as a spinning skater will spin faster by pulling in his or her arms. This spin kept the nebula from forming a sphere; instead, it settled into a disk of gas and dust.
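The skater analogy is conservation of angular momentum: L = Iω stays fixed, and for a simple ring of material the moment of inertia I is proportional to r², so shrinking the radius speeds up the spin. A toy illustration (the numbers are arbitrary):

```python
# Conservation of angular momentum for a contracting ring of material:
# L = m * r^2 * omega is constant, so omega grows as 1 / r^2.
def spinup_factor(r_initial, r_final):
    """Factor by which spin rate grows when a ring contracts."""
    return (r_initial / r_final) ** 2

print(spinup_factor(1.0, 0.5))  # 4.0 -- halving the radius quadruples the spin
print(spinup_factor(1.0, 0.1))  # 100.0 -- a contracting nebula spins up fast
```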
In this disk, small regions of gas and dust began to draw closer and stick together. The objects that resulted, which were usually less than 500 km (300 mi) across, are called planetesimals. Eventually, some planetesimals stuck together and grew to form the planets. Scientists have made computer models of how they believe the early solar system behaved. The models show that it is not unusual for a solar system to produce one or two huge planets like Jupiter along with several other, much smaller planets.
The largest region of gas and dust wound up in the centre of the nebula and formed the protosun (proto is Greek for ‘before’ and is used to distinguish between an object and its forerunner). The increasing temperature and pressure in the middle of the protosun vaporized the dust and eventually allowed nuclear fusion to begin, marking the formation of the Sun. The young Sun gave off a strong solar wind that drove off most of the lighter elements, such as hydrogen and helium, from the inner planets. The inner planets then solidified and formed rocky surfaces. The solar wind eventually lost strength. Jupiter’s gravitational pull, meanwhile, was strong enough to keep its shroud of hydrogen and helium gas, and Saturn, Uranus, and Neptune also kept their layers of light gases.
The theory of solar system formation described above accounts for the appearance of the solar system as we know it. Examples of this appearance include the fact that the planets all orbit the Sun in the same direction and that almost all the planets rotate on their axes in the same direction. The recent discoveries of distant solar systems with different properties could lead to modifications in the theory, however.
Studies in the visible, the infrared, and the shortest radio wavelengths have revealed disks around several young stars in our galaxy. One such object, Beta Pictoris (about 62 light-years from Earth), has revealed a warp in the disk that could be a sign of planets in orbit. Astronomers are hopeful that, in the cases of these young stars, they are studying the early stages of solar system formation.
Although astronomers have long assumed that many other stars have planets, they have been unable to detect these other solar systems until recently. Planets orbiting around stars other than the Sun are called extrasolar planets. Planets are small and dim compared to stars, so they are lost in the glare of their parent stars and are invisible to direct observation with telescopes.
Astronomers have tried to detect other solar systems by searching for the way a planet affects the movement of its parent star. The gravitational attraction between a planet and its star pulls the star slightly toward the planet, so the star wobbles slightly as the planet orbits it. Throughout the mid- and late 1900s, several observatories tried to detect wobbles in the nearest stars by watching the stars’ movement across the sky. Wobbles were reported in several stars, but later observations showed that the results were false.
In the early 1990s, studies of a pulsar revealed at least two planets orbiting it. Pulsars are compact stars that give off pulses of radio waves at very regular intervals. The pulsar, designated PSR 1257+12, is about 1,000 light-years from Earth. This pulsar's pulses sometimes came a little early and sometimes a little late in a periodic pattern, revealing that an unseen object was pulling the pulsar toward and away from Earth. The environment of a pulsar, which emits X rays and other strong radiation that would be harmful to life on Earth, is so extreme that these objects would have little resemblance to planets in our solar system.
The wobbling of a star changes the star’s light that reaches Earth. When the star moves away from Earth, even slightly, each wave of light must travel farther to Earth than the wave before it. This increases the distance between waves (called the wavelength) as the waves reach Earth. When a star’s planet pulls the star closer to Earth, each successive wavefront has less distance to travel to reach Earth. This shortens the wavelength of the light that reaches Earth. This effect is called the Doppler effect. No star moves fast enough for the change in wavelength to result in a noticeable change in colour, which depends on wavelength, but the changes in wavelength can be measured with precise instruments. Because the planet’s effect on the star is very small, astronomers must analyze the starlight carefully to detect a shift in wavelength. They do this by first using a technique called spectroscopy to separate the white starlight into its component colours, as water vapour does to sunlight in a rainbow. Stars emit light in a continuous range. The range of wavelengths a star emits is called the star’s spectrum. This spectrum has dark lines, called absorption lines, at wavelengths at which atoms in the outermost layers of the star absorb light.
Astronomers know the exact wavelength of each absorption line for a star that is not moving. By seeing how far the movement of a star shifts the absorption lines in its spectrum, astronomers can calculate how fast the star is moving. If the motion fits the model of the effect of a planet, astronomers can calculate the mass of the planet and how close it is to the star. These calculations can only provide a lower limit to the planet’s mass, because it is impossible for astronomers to tell at what angle the planet orbits the star, and that angle is needed to calculate the planet’s mass accurately. Because of this uncertainty, some of the giant extrasolar planets may actually be failed stars called brown dwarfs rather than true planets. Most astronomers, however, believe that many of the suspected planets are true planets.
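The shift in an absorption line can be turned into a line-of-sight velocity with the non-relativistic Doppler formula v = c · Δλ/λ. The sketch below illustrates the arithmetic; the line wavelengths are illustrative values, not real measurements, and a real spectrograph fits many lines at once.

```python
# Radial velocity from the Doppler shift of a single absorption line.
# Non-relativistic approximation: v = c * (lambda_obs - lambda_rest) / lambda_rest.
# The wavelengths below are illustrative, not actual observations.

C = 299_792_458.0  # speed of light in m/s

def radial_velocity(lambda_rest_nm, lambda_obs_nm):
    """Line-of-sight velocity in m/s (positive means the star is receding)."""
    return C * (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

# A shift of just 0.0001 nm on a 656.28 nm line corresponds to
# roughly 46 m/s -- comparable to the wobble a Jupiter-like planet
# induces in a Sun-like star (about 13 m/s).
print(round(radial_velocity(656.2800, 656.2801), 1))
```

This is why very precise spectroscopy is required: the wobble changes the wavelength by only about one part in ten million.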
Between 1995 and 1999 astronomers discovered more than a dozen extrasolar planets. Astronomers now know of far more planets outside our solar system than inside it. Most of these planets, surprisingly, are more massive than Jupiter and orbit so close to their parent stars that some of them have ‘years’ (the time it takes to orbit the parent star once) as short as a few Earth days. These solar systems are so different from our solar system that astronomers are still trying to reconcile them with the current theory of solar system formation. Some astronomers suggest that the giant extrasolar planets formed much farther away from their stars and were later thrown into the inner solar systems by some gravitational interaction.
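Kepler's third law connects such short ‘years’ to very tight orbits: for a star of about one solar mass, the period in years is the 3/2 power of the orbital radius in astronomical units. A minimal sketch, with an illustrative hot-Jupiter orbit of 0.05 AU (an assumed value, not a specific discovered planet):

```python
import math

# Kepler's third law for a star of roughly one solar mass:
# P^2 = a^3 / M, with P in years, a in AU, and M in solar masses.

def orbital_period_days(a_au, stellar_mass_solar=1.0):
    """Orbital period in Earth days for a circular orbit of radius a_au (AU)."""
    period_years = math.sqrt(a_au ** 3 / stellar_mass_solar)
    return period_years * 365.25

print(round(orbital_period_days(0.05), 1))  # a planet at 0.05 AU: ~4.1 days
print(round(orbital_period_days(1.0)))      # Earth's orbit: about 365 days
```

A planet orbiting twenty times closer to its star than Earth does thus completes its ‘year’ in about four days, matching the periods reported for many of the first extrasolar planets.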
Stars are an important topic of astronomical research. Stars are balls of gas that shine or used to shine because of nuclear fusion in their cores. The most familiar star is the Sun. The nuclear fusion in stars produces a force that pushes the material in a star outward. However, the gravitational attraction of the star’s material for itself pulls the material inward. A star can remain stable as long as the outward pressure and gravitational force balance. The properties of a star depend on its mass, its temperature, and its stage in evolution.
Astronomers study stars by measuring their brightness or, with more difficulty, their distances from Earth. They measure the ‘colour’ of a star—the differences in the star’s brightness from one part of the spectrum to another—to determine its temperature. They also study the spectrum of a star’s light to determine not only the temperature, but also the chemical makeup of the star’s outer layers.
Stars begin life as diffuse clouds of dust and gas. These clouds condense to form stars, after which the stars can develop into a variety of objects, depending on how much matter they contain. Stars that contain more matter experience the effects of gravity more strongly and evolve into dense bodies, such as neutron stars or even black holes.
A star begins life as a large, relatively cool mass of gas in a nebula, such as the Orion Nebula. As gravity causes the gas to contract, the nebula’s temperature rises, eventually becoming hot enough to trigger nuclear reactions in its atoms and form a star. A main sequence star shines because of the massive, fairly steady output of energy from the fusion of hydrogen nuclei to form helium. The main sequence phase of a medium-sized star is believed to last as long as 10 billion years. The Sun is just over halfway through this phase. Stars eventually use up their energy supply, ending their lives as white dwarfs, which are extremely small, dense globes, or in the case of larger stars, as spectacular explosions called supernovas.
Many different types of stars exist. Some types of stars are really just different stages of a star’s evolution. Some types are different because the stars formed with much more or much less mass than other stars, or because they formed close to other stars. The Sun is a type of star known as a main-sequence star. Eventually, main-sequence stars such as the Sun swell into giant stars and then evolve into tiny, dense, white dwarf stars. Main-sequence stars and giants have a role in the behaviour of most variable stars and novas. A star much more massive than the Sun will become a supergiant star, then explode as a supernova. A supernova may leave behind a neutron star or a black hole.
In about 1910 Danish astronomer Ejnar Hertzsprung and American astronomer Henry Norris Russell independently worked out a way to graph basic properties of stars. On the horizontal axis of their graphs, they plotted the temperatures of stars. On the vertical axis, they plotted the brightness of stars in a way that allowed the stars to be compared. (One plotted the absolute brightness, or absolute magnitude, of a star, a measurement of brightness that takes into account the distance of the star from Earth. The other plotted stars in a nearby galaxy, all about the same distance from Earth.) The resulting Hertzsprung-Russell diagram, also called an H-R diagram or a colour-magnitude diagram (where colour relates to temperature), is a basic tool of astronomers.
A few stars fall in the lower left portion of the H-R diagram, below the main sequence. Just as giant stars are larger and brighter than main-sequence stars, these stars are smaller and dimmer. These smaller, dimmer stars are hot enough to be white or blue-white in colour and are known as white dwarfs.
White dwarf stars are only about the size of Earth. They represent stars with about the mass of the Sun that have burned as much hydrogen as they can. The gravitational force of a white dwarf’s mass is pulling the star inward, but electrons in the star resist being pushed together. The gravitational force is able to pull the star into a much denser form than it was in when the star was burning hydrogen. The final stage of life for all stars like the Sun is the white dwarf stage.
Many stars vary in brightness over time. These variable stars come in a variety of types. One important type is called a Cepheid variable, named after the star delta Cephei, which is a prime example of a Cepheid variable. These stars vary in brightness as they swell and contract over a period of weeks or months. Their average brightness depends on how long the period of variation takes. Thus astronomers can determine how bright the star is merely by measuring the length of the period. By comparing how intrinsically bright these variable stars are with how bright they look from Earth, astronomers can calculate how far away these stars are from Earth. Since they are giant stars and are very bright, Cepheid variables in other galaxies are visible from Earth. Studies of Cepheid variables tell astronomers how far away these galaxies are and are very useful for determining the distance scale of the universe. The Hubble Space Telescope (HST) can determine the periods of Cepheid stars in galaxies farther away than ground-based telescopes can see. Astronomers are developing a more accurate idea of the distance scale of the universe with HST data.
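The distance logic described above can be sketched numerically: a period-luminosity relation gives the Cepheid's absolute magnitude, and comparing that with its apparent magnitude yields the distance through the distance modulus m − M = 5 log10(d / 10 pc). The coefficients below are illustrative round numbers, not a calibrated published relation, which depends on the observing band.

```python
import math

# Sketch of the Cepheid distance method. The period-luminosity
# coefficients are illustrative placeholders, not a real calibration.

def absolute_magnitude(period_days, a=-2.8, b=-1.4):
    """Illustrative period-luminosity relation: M = a * log10(P) + b."""
    return a * math.log10(period_days) + b

def distance_parsecs(apparent_mag, absolute_mag):
    """Invert the distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

M = absolute_magnitude(10.0)   # a 10-day Cepheid: M = -4.2 with these coefficients
d = distance_parsecs(12.0, M)  # its apparent magnitude then fixes the distance
print(round(M, 1), round(d))
```

The key point is that the period, which is easy to measure, stands in for the star's intrinsic brightness; everything else is arithmetic on magnitudes.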
Cepheid variables are only one type of variable star. Stars called long-period variables vary in brightness as they contract and expand, but these stars are not as regular as Cepheid variables. Mira, a star in the constellation Cetus (the whale), is a prime example of a long-period variable star. Variable stars called eclipsing binary stars are really pairs of stars. Their brightness varies because one member of the pair appears to pass in front of the other, as seen from Earth. Variable stars called R Coronae Borealis stars vary because they occasionally give off clouds of carbon dust that dim them.
Sometimes stars brighten drastically, becoming as much as 100 times brighter than they were. These stars are called novas (Latin for ‘new stars’). They are not really new, just much brighter than they were earlier. A nova is a binary, or double, star in which one member is a white dwarf and the other is a giant or supergiant. Matter from the large star falls onto the small star. After a thick layer of the large star’s atmosphere has collected on the white dwarf, the layer burns off in a nuclear fusion reaction. The fusion produces a huge amount of energy, which, from Earth, appears as the brightening of the nova. The nova gradually returns to its original state, and material from the large star again begins to collect on the white dwarf.
Sometimes stars brighten many times more drastically than novas do. A star that had been too dim to see can become one of the brightest stars in the sky. These stars are called supernovas. Sometimes supernovas that occur in other galaxies are so bright that, from Earth, they appear as bright as their host galaxy.
There are two types of supernova. One type is an extreme case of a nova, in which matter falls from a giant or supergiant companion onto a white dwarf. In the case of a supernova, the white dwarf gains so much fuel from its companion that the star increases in mass until strong gravitational forces cause it to become unstable. The star collapses and the core explodes, vaporizing much of the white dwarf and producing an immense amount of light. Only bits of the white dwarf remain after this type of supernova occurs.
The other type of supernova occurs when a supergiant star uses up all its nuclear fuel in nuclear fusion reactions. The star uses up its hydrogen fuel, but the core is hot enough that it provides the initial energy necessary for the star to begin ‘burning’ helium, then carbon, and then heavier elements through nuclear fusion. The process stops when the core is mostly iron, which is too heavy for the star to ‘burn’ in a way that gives off energy. With no such fuel left, the inward gravitational attraction of the star’s material for itself has no outward balancing force, and the core collapses. As it collapses, the core releases a shock wave that tears apart the star’s atmosphere. The core continues collapsing until it forms either a neutron star or a black hole, depending on its mass.
Only a handful of supernovas are known in our galaxy. The last Milky Way supernova seen from Earth was observed in 1604. In 1987 astronomers observed a supernova in the Large Magellanic Cloud, one of the Milky Way’s satellite galaxies and one of the closest galaxies to Earth. This supernova became bright enough to be visible to the unaided eye and is still under careful study from telescopes on Earth and from the Hubble Space Telescope. A supernova in the process of exploding emits X-ray, ultraviolet, and radio radiation; studies in these parts of the spectrum are especially useful for astronomers studying supernova remnants.
Neutron stars are the collapsed cores sometimes left behind by supernova explosions. Pulsars are a special type of neutron star. Pulsars and neutron stars form when the remnant of a star left after a supernova explosion collapses until it is about 10 km (about 6 mi) in radius. At that point, the neutrons—electrically neutral atomic particles—of the star resist being pressed together further. When the force produced by the neutrons balances the gravitational force, the core stops collapsing.
During the 20th century mathematics made rapid advances on all fronts. The foundations of mathematics became more solidly grounded in logic, while at the same time mathematics advanced the development of symbolic logic. Philosophy was not the only field to progress with the help of mathematics. Physics, too, benefited from the contributions of mathematicians to relativity theory and quantum theory. Indeed, mathematics achieved broader applications than ever before, as new fields developed within mathematics (computational mathematics, game theory, and chaos theory) and other branches of knowledge, including economics and physics, achieved firmer grounding through the application of mathematics. Even the most abstract mathematics seemed to find application, and the boundaries between pure mathematics and applied mathematics grew ever fuzzier.
Mathematicians searched for unifying principles and general statements that applied to large categories of numbers and objects. In algebra, the study of structure continued with a focus on structural units called rings, fields, and groups, and at mid-century it extended to the relationships between these categories. Algebra became an important part of other areas of mathematics, including analysis, number theory, and topology, as the search for unifying theories moved ahead. Topology—the study of the properties of objects that remain constant during transformation, or stretching—became a fertile research field, bringing together geometry, algebra, and analysis. Because of the abstract and complex nature of most 20th-century mathematics, most of the remaining sections of this article will discuss practical developments in mathematics with applications in more familiar fields.