Archaeology, also spelled archeology, the scientific study of the material remains of past human life and activities. These include human artifacts from the very earliest stone tools to the man-made objects that are buried or thrown away in the present day: everything made by human beings—from simple tools to complex machines, from the earliest houses and temples and tombs to palaces, cathedrals, and pyramids. Archaeological investigations are a principal source of knowledge of prehistoric, ancient, and extinct cultures. The word comes from the Greek archaia (“ancient things”) and logos (“theory” or “science”).
The archaeologist is first a descriptive worker: he has to describe, classify, and analyze the artifacts he studies. An adequate and objective taxonomy is the basis of all archaeology, and many good archaeologists spend their lives in this activity of description and classification. But the main aim of the archaeologist is to place the material remains in historical contexts, to supplement what may be known from written sources, and, thus, to increase understanding of the past. Ultimately, then, the archaeologist is a historian: his aim is the interpretive description of the past of man.
Increasingly, many scientific techniques are used by the archaeologist, and he uses the scientific expertise of many persons who are not archaeologists in his work. The artifacts he studies must often be studied in their environmental contexts, and botanists, zoologists, soil scientists, and geologists may be brought in to identify and describe plants, animals, soils, and rocks. Radioactive carbon dating, which has revolutionized much of archaeological chronology, is a by-product of research in atomic physics. But although archaeology uses extensively the methods, techniques, and results of the physical and biological sciences, it is not a natural science; some consider it a discipline that is half science and half humanity. Perhaps it is more accurate to say that the archaeologist is first a craftsman, practicing many specialized crafts (of which excavation is the most familiar to the general public), and then a historian.
The justification for this work is the justification of all historical scholarship: to enrich the present by knowledge of the experiences and achievements of our predecessors. Because it concerns things people have made, the most direct findings of archaeology bear on the history of art and technology; but by inference it also yields information about the society, religion, and economy of the people who created the artifacts. Also, it may bring to light and interpret previously unknown written documents, providing even more certain evidence about the past. But no one archaeologist can cover the whole range of man’s history, and there are many branches of archaeology divided by geographical areas (such as classical archaeology, the archaeology of ancient Greece and Rome; or Egyptology, the archaeology of ancient Egypt) or by periods (such as medieval archaeology and industrial archaeology). Writing began 5,000 years ago in Mesopotamia and Egypt; its beginnings were somewhat later in India and China, and later still in Europe. The aspect of archaeology that deals with the past of man before he learned to write has, since the middle of the 19th century, been referred to as prehistoric archaeology, or prehistory. In prehistory the archaeologist is paramount, for here the only sources are material and environmental.
The scope of this article is to describe briefly how archaeology came into existence as a learned discipline; how the archaeologist works in the field, museum, laboratory, and study; and how he assesses and interprets his evidence and transmutes it into history.
No doubt there have always been people who were interested in the material remains of the past, but archaeology as a discipline has its earliest origins in 15th- and 16th-century Europe, when the Renaissance Humanists looked back upon the glories of Greece and Rome. Popes, cardinals, and noblemen in Italy in the 16th century began to collect antiquities and to sponsor excavations to find more works of ancient art. These collectors were imitated by others in northern Europe who were similarly interested in antique culture. All this activity, however, was still not archaeology in the strict sense. It was more like what would be called art collecting today.
Archaeology proper began with an interest in the Greeks and Romans and first developed in 18th-century Italy with the excavations of the Roman cities of Pompeii and Herculaneum. Classical archaeology was established on a more scientific basis by the work of Heinrich Schliemann, who investigated the origins of Greek civilization at Troy and Mycenae in the 1870s; of M.A. Biliotti at Rhodes in this same period; of the German Archaeological Institute under Ernst Curtius at Olympia from 1875 to 1881; and of Alexander Conze at Samothrace in 1873 and 1875. Conze was the first person to include photographs in the publication of his report. Schliemann had intended to dig in Crete but did not do so, and it was left to Arthur Evans to begin work at Knossos in 1900 and to discover the Minoan civilization, ancestor of classical Greece.
Egyptian archaeology began with Napoleon’s invasion of Egypt in 1798. He brought with him scholars who set to work recording the archaeological remains of the country. The results of their work were published in the Description de l’Égypte (1808–25). As a result of discoveries made by this expedition, Jean-François Champollion was able to decipher ancient Egyptian writing for the first time in 1822. This decipherment, which enabled scholars to read the numerous writings left by the Egyptians, was the first great step forward in Egyptian archaeology. The demand for Egyptian antiquities led to organized tomb robbing by men such as Giovanni Battista Belzoni. A new era in systematic and controlled archaeological research began with the Frenchman Auguste Mariette, who also founded the Egyptian Museum at Cairo. The British archaeologist Flinders Petrie, who began work in Egypt in 1880, made great discoveries there and in Palestine during his long lifetime. Petrie developed a systematic method of excavation, the principles of which he summarized in Methods and Aims in Archaeology (1904). It was left to Howard Carter and Lord Carnarvon to make the most spectacular discovery in Egyptian archaeology, that of the tomb of Tutankhamen in 1922.
Mesopotamian archaeology also began with hectic digging into mounds in the hopes of finding treasure and works of art, but gradually these gave way in the 1840s to planned digs such as those of the Frenchman Paul-Émile Botta at Nineveh and Khorsabad, and the Englishman Austen Henry Layard at Nimrud, Kuyunjik, Nabī Yūnus, and other sites. Layard’s popular account of his excavations, Nineveh and Its Remains (1849), became the earliest and one of the most successful archaeological best-sellers. In 1846 Henry Creswicke Rawlinson became the first man to decipher the Mesopotamian cuneiform writing. Toward the end of the 19th century, systematic excavation revealed a previously unknown people, the Sumerians, who had lived in Mesopotamia before the Babylonians and Assyrians. The most impressive Sumerian excavation was that of the Royal Tombs at Ur by Leonard Woolley in 1926.
The development of scientific archaeology in 19th-century Europe from the antiquarianism and treasure collecting of the previous three centuries was due to three things: a geological revolution, an antiquarian revolution, and the propagation of the doctrine of evolution. Geology was revolutionized in the early 19th century with the discovery and demonstration of the principles of uniformitarian stratigraphy (which determines the age of fossil remains by the stratum they occupy below the earth) by men like William Smith, Georges Cuvier, and Charles Lyell. Lyell, in his Principles of Geology (1830–33), popularized this new system and paved the way for the acceptance of the great antiquity of man. Charles Darwin regarded Lyell’s Principles as one of the two germinal works in the formation of his own ideas on evolution. Early stone tools had been identified in Europe since the mid-16th century. That they were, however, older than 4004 BCE, the date of man’s origin according to biblical chronology, was not recognized until late in the 18th century, when John Frere suggested a great age for artifacts found in Suffolk, England, based on their location in certain strata. The discoveries of Jacques Boucher de Perthes in the Somme Valley in France, and of William Pengelly in the caves of South Devon in England, were used to demonstrate the antiquity of man in 1859, the same year that saw the publication of Darwin’s revolutionary Origin of Species. Approximate dates for the Paleolithic Period (Old Stone Age) of the prehistoric past were thus established, although the expression “Palaeolithic” was not used until John Lubbock coined it in his book Pre-historic Times (1865).
Half a century before this, Scandinavian archaeologists had created a revolution in antiquarian thought by postulating, on archaeological grounds, successive technological stages in man’s past. C.J. Thomsen classified the material in the Copenhagen Museum, opened to the public in 1819, on the basis of three successive ages of Stone, Bronze, and Iron. His pupil and successor, J.J.A. Worsaae, showed the correctness of this museum arrangement by observed stratigraphy in the Danish peat bogs and barrows (funerary mounds). Low lake levels in Switzerland in the mid-1850s permitted the excavation of the prehistoric Swiss lake dwellings, and here again the theory of a succession of technological stages was confirmed.
Darwin’s Origin of Species implied a long past for man, and the acceptance of the idea of human evolution in the last four decades of the 19th century created a climate of thought in which archaeology flourished and that led to great advances in the unfolding of the full story of man’s development.
In his Pre-historic Times, Lubbock expanded the three-age system of Thomsen and Worsaae to a four-age system, dividing the Stone Age into Old and New periods (Paleolithic and Neolithic). In the last quarter of the 19th century remarkable Paleolithic discoveries were made in France and Spain; these included the discovery and authentication of actual works of sculpture and cave paintings from the Upper (later) Paleolithic Period (c. 30,000–c. 10,000 BCE). When Marcelino de Sautuola discovered the cave paintings at Altamira, Spain (1875–80), most experts refused to believe they were Paleolithic; but after similar discoveries at Les Eyzies in France around 1900, they were accepted as such and were recognized as one of the most surprising and exciting archaeological discoveries. A succession of similar finds has continued in the 20th century. The most famous of these was at Lascaux, France, in 1940.
During the last quarter of the 19th century, Gen. A.H. Pitt-Rivers’ excavations of prehistoric and Roman sites at Cranborne Chase, Dorset, laid the foundations of modern scientific archaeological field technique, which was later developed and improved in England and Wales by men such as Sir Mortimer Wheeler and Sir Cyril Fox.
The 20th century saw the extension of archaeology outside the areas of the Near East, the Mediterranean, and Europe, to other parts of the world. In the early ’20s, excavations at Mohenjo-Daro and Harappā, in present Pakistan, revealed the existence of the prehistoric Indus civilization. In the late ’20s, excavations at An-yang in eastern China established the existence of a prehistoric Chinese culture that could be identified with the Shang dynasty of early Chinese records.
The Stone Age has been described and studied throughout the world; among the most sensational discoveries are those of L.S.B. Leakey, who found stone tools and skeletal remains of early man dating back 2,000,000 years in the Olduvai Gorge in Tanzania. Intensive work of great importance has brought to light early Neolithic sites at Jericho in Palestine; Hassuna, Iraq; Çatalhüyük, Turkey; and elsewhere in the Near East, establishing the origins of agriculture in that region.
Serious archaeological work began later in America than Europe, but as early as 1784 Thomas Jefferson had excavated mounds in Virginia and made careful stratigraphical observations. The 20th century saw a great increase in archaeological knowledge about prehistoric America: two startling advances were the discovery of the origin of domesticated crops (including maize) in Central America and of the Olmec civilization of Mexico (1000–300 BCE)—the oldest of the New World civilizations and probably the parent of all the others.
The enormous growth of archaeological work has meant the establishment of archaeology as an academic discipline; few important universities anywhere in the world are now without professors and departments of archaeology. There are now a very large number of scholarly journals in the field, as well as a considerable body of popularized books and journals that attempt to bridge the gap between professional and layman.
Some archaeologists call everything they do out-of-doors fieldwork, but others distinguish between fieldwork, in a narrower sense, and excavation. Fieldwork, in the narrow sense, consists of the discovery and recording of archaeological sites and their examination by methods other than the use of the spade and the trowel. Sites hitherto unknown are discovered by walking or motoring over the countryside: deliberate reconnaissance is an essential part of archaeological fieldwork.
In Europe, a study of old records and place-names may lead to the discovery of long-forgotten sites. The mapping of new and old sites is an essential part of archaeological survey. This process has been brought to a very high standard of perfection, both in the marking of archaeological sites on ordinary topographical maps and in the production of special period maps. The distribution map of artifacts, especially when studied against the background of the natural environment, is a key method of archaeological study.
The formerly earthbound archaeologist has been greatly helped by the development of aerial photography. The application of aerial photography to archaeological investigation began in a small way during World War I, as a side effect of military reconnaissance, and was given further impetus by World War II; the photographic intelligence departments of all the combatant nations were extensively staffed by archaeologists, who then carried their expertise and enthusiasm into the postwar years. The University of Cambridge now has its own department of air photography under J.K.S. St Joseph: using its own pilot and aircraft, it flies photographic missions over Ireland, Great Britain, Denmark, and The Netherlands. The number of new sites discovered each year by aerial photography is very large. Some of these are surface sites, especially partly destroyed sites that show up well in special conditions of light, as in early morning or late evening. But many are sites that could not be found on the ground and that show up in aerial photographs as variations in soil colour or in the density of crops.
Archaeological reconnaissance may be advanced from ordinary surface or aerial methods in a wide variety of ways. A very simple method is tapping the ground to sound for substructures and inequalities in the subsoil. Deep probes have made it possible to trace walls and ditches. The Lerici Foundation of Milan and Rome has had great success with this method since its development of the Nistri periscope, first used in 1957 in an Etruscan tomb in the cemetery of Monte Abbatone. The periscope is inserted into the burial chamber and can photograph the walls and contents of the whole tomb.
Other modern techniques that have been applied to archaeological prospecting employ electricity and magnetic fields (geophysical prospecting). A method of electrical prospecting had been developed in large-scale oil prospecting: this technique, based on the degree of electrical conductivity present in the soil, began to be used by archaeologists in the late 1940s and has since proved very useful. Magnetic methods of prospecting detect buried features by locating the magnetic disturbances they cause: these were introduced in 1957–58 and use such machines as the proton magnetometer, the proton gradiometer, and the fluxgate gradiometer. An American expedition discovered the site of Sybaris in southern Italy by magnetic prospecting. Electromagnetic methods have been in use only since 1962; they employ developments of the concepts used in mine detectors. Instruments such as the pulsed-induction meter and the soil-conductivity meter detect magnetic soil anomalies, but only if the features are fairly shallow.
Excavation is the surgical aspect of archaeology: it is surgery of the buried landscape and is carried out with all the skilled craftsmanship that has been built up in the last hundred years since Schliemann and Flinders Petrie. Excavations can be classified, from the point of view of their purpose, as planned, rescue, or accidental. Most important excavations are the result of a prepared plan—that is to say, their purpose is to locate buried evidence about an archaeological site. Many are project oriented: as, for example, when a scholar studying the life of the pre-Roman, Celtic-speaking Gauls of France may deliberately select a group of hill forts and excavate them, as Sir Mortimer Wheeler did in northwestern France in the years before the outbreak of World War II. But many excavations, particularly in the heavily populated areas of central and northern Europe, are done not from choice but from necessity. Gravel digging, clearing the ground for airports, quarrying, road widening and building, the construction of houses, factories, and public buildings frequently threaten the destruction of sites known to contain archaeological remains. Emergency excavations then have to be mounted to rescue whatever knowledge of the past can be obtained before these remains are obliterated forever. Partial destruction of cities in western Europe by bombing during World War II allowed rescue excavations to take place before rebuilding. A temple of Mithras in the City of London, Viking settlements in Dublin and at Århus, Denmark, and the original 6th-century-BCE Greek settlement of Massalia (Marseille) were discovered in this way. An extension of the runways at London Airport led to the discovery of a pre-Roman Celtic temple there.
The role of chance in the discovery of archaeological sites and portable finds is considerable. Farmers have often unearthed archaeological finds while plowing their fields. The famous painted and engraved Upper Paleolithic cave of Lascaux in southern France was discovered by chance in 1940 when four French schoolboys decided to investigate a hole left by an uprooted tree. They widened a smaller shaft at the base of the hole and jumped through to find themselves in the middle of this remarkable pagan sanctuary. Similarly, the first cache of the Dead Sea Scrolls was discovered in 1947 by a Bedouin looking for a stray animal. These accidental finds often lead to important excavations. At Barnénès, in north Brittany, a contractor building a road got his stone from a neighbouring prehistoric cairn (burial mound) and, in so doing, discovered and partially destroyed a number of prehistoric burial chambers. The French archaeologist P.-R. Giot was able to halt these depredations and carry out scientific excavations that revealed Barnénès to be one of the most remarkable and interesting prehistoric burial mounds in western Europe.
All forms of archaeological excavation require great skill and careful preparation. Years of training in the field, first as an ordinary digger, then as a site supervisor, with spells of work as recorder, surveyor, and photographer, are required before anyone can organize and direct an excavation himself. Most museums, universities, and government archaeological departments organize training excavations. The very words dig and digging may give the impression to many that excavation is merely a matter of shifting away the soil and subsoil with a spade or shovel; the titles of such admirable and widely read books as Leonard Woolley’s Spadework (1953) and Digging Up the Past (1930) and Geoffrey Bibby’s Testimony of the Spade (1956) might appear to give credence to that view. Actually, much of the work of excavation is careful work with trowel, penknife, and brush. It is often the recovery of features that are almost indistinguishable from nonarchaeological aspects of the buried landscape: one example of this is the recovery of mud-brick walls in Mesopotamia; another is the tracing of collapsed walls of dry stone slabs in a cairn in stony country in the southwest Midlands of England. Sometimes it is the recovery of features of which only ghost traces remain, like the burnt-out bodies from the buried city of Pompeii, or the strings of a harp that were found among the furnishings of Mesopotamian tombs at Ur.
Because of the damage he may cause by inexperience and haste, the untrained amateur archaeologist often hinders the work of the professional. Amateur archaeology is forbidden in many countries by stringent antiquity laws. At the same time, it is certainly true that nonprofessionals have made important contributions in many areas of archaeology. Occasionally, an amateur does make an important discovery the further excavation of which can then be taken over by trained professionals. Such was the case at Sutton Hoo in Suffolk in 1939, when work begun by a competent amateur was taken over by a team of experts who were able to uncover a great Anglo-Saxon burial boat and its treasure, without doubt the most remarkable archaeological find ever made in Britain.
There are, of course, many different types of archaeological sites, and there is no one set of precepts and rules that will apply to excavation as a whole. Some sites, such as temples, forts, roads, villages, ancient cities, palaces, and industrial remains, are easily visible on the surface of the ground. Among the most obvious archaeological sites that have yielded spectacular results by excavation are the huge man-made mounds (tells) in the Near East, called in Arabic tilāl, and in Turkish tepes or hüyüks. They result from the accumulation of remains caused by centuries of human habitation on one spot. The sites of the ancient cities of Troy and Ur are examples. Another type consists of closed sites such as pyramids, chambered tombs, barrows (burial mounds), sealed caves, and rock shelters. In other cases there are no surface traces, and the outline of suspected structures is revealed only by aerial or geophysical reconnaissance as described above. Finally, there are sites in cliffs and gravel beds, where many Paleolithic finds have been made.
The techniques employed by the archaeologist vary widely in their application to different kinds of sites. The opening of the tomb chamber in an Egyptian pyramid is, for example, a very different operation from the excavation of a tell in Mesopotamia or a barrow grave in western Europe. Some sites are explored provisionally by sampling cuts known as sondages. Large sites are not usually dug out entirely, although a moderate-sized round barrow may be completely moved by excavation. Whatever the site and the extent of the excavation, one element of the technique is common to all digs, namely, the use of the greatest care in the actual surgery and in the recording of what is found by word, diagram, survey, and photography. To a certain extent all excavation is destruction, and the total excavation of a site subsequently engulfed by a housing estate or gravel digging is total destruction. This is why the archaeologist’s field notes and his published report become primary archaeological documents. They are not themselves, strictly speaking, archaeological facts: they are the excavator’s interpretation of what he saw, or thought he saw, but this is the nearest the discipline can ever get to archaeological facts as established by excavation. The really great excavators leave such a fine record of their digs that subsequent archaeologists can re-create and reinterpret what they saw and found. Failure to publish the results of an excavation within a reasonable time is a serious fault from the point of view of archaeological method. An excavation is not complete until the printed report is available to the world. Often the publication of the report takes as long as, or much longer than, the actual work in the field.
When a site like the Palace of Minos at Knossos or the city of Harappā in Pakistan has been excavated, and the excavations are over, the excavator and the antiquities service of the country concerned have to face the problem of what to do with the excavated structures. Should they be covered in again, or should they be preserved for posterity, and if preserved, what degree of conservation and restoration is permissible? This is the same kind of problem that arises in connection with the removal of antiquities from their homeland to foreign museums, and there is no generally accepted answer to it. These problems remain to beset archaeology: should Sir Arthur Evans have reconstructed the Palace of Minos at Knossos? Should the art treasures of ancient Greece and Egypt, now in western European museums, be returned? There is no simple, straightforward, overall answer to these difficult questions.
Underwater archaeology is a branch of reconnaissance and excavation that has been developed only during the 20th century. It involves the same techniques of observation, discovery, and recording that are the basis of archaeology on land, but adapted to the special conditions of working underwater. It is obvious that no archaeologist working on submarine sites can get far unless he is trained as a diver. Helmeted sponge divers have made most of the important archaeological discoveries in the Mediterranean. The French scientist Jacques-Yves Cousteau developed the self-contained breathing apparatus known as the scuba, of which the most commonly used type is the aqualung. Cousteau’s work at Le Grand Congloué near Marseille was a pioneer underwater excavation, as was the work of the Americans Peter Throckmorton and George Bass off the coast of southern Turkey. In 1958 Throckmorton found a graveyard of ancient ships at Yassı Ada and then discovered the oldest shipwreck ever recorded, at Cape Gelidonya—a Bronze Age shipwreck of the 14th century BCE. George Bass of the University of Pennsylvania worked on a Byzantine wreck at Yassı Ada from 1961 onward, developing the mapping of wrecks photogrammetrically with stereophotographs and using a two-man submarine, the “Asherah,” launched in 1964. The “Asherah” was the first submarine ever built for archaeological investigation.
Excavation often seems to the general public the main and certainly the most glamorous aspect of archaeology; but fieldwork and excavation represent only a part of the archaeologist’s work. The other part is the interpretation in cultural and historical contexts of the facts established—by chance, by fieldwork, and by digging—about the material remains of man’s past. This task of interpretation has five main aspects.
The first concern is the accurate and exact description of all the artifacts concerned. Classification and description are essential to all archaeological work, and, as in botany and zoology, the first requirement is a good and objective taxonomy. Second, there is a need for interpretive analysis of the material from which artifacts were made. This is something that the archaeologist himself is rarely equipped to do; he has to rely on colleagues specializing in geology, petrology (analysis of rocks), and metallurgy. In the early 1920s, H.H. Thomas of the Geological Survey of Great Britain was able to show that stones used in the construction of Stonehenge (a prehistoric construction on Salisbury Plain in southern England) had come from the Prescelly Mountains of north Pembrokeshire; and he established as a fact of prehistory that over 4,000 years ago these large stones had been transported 200 miles from west Wales to Salisbury Plain. Detailed petrological analysis of the material of Neolithic polished stone axes has enabled archaeologists to establish the location of prehistoric ax factories and trade routes. It is also now possible, entirely on a petrological basis, to study the prehistoric distribution of obsidian (a volcanic glass used to make primitive tools).
In the third place, the archaeologist, having dealt with the material of his artifacts by classification and taxonomy, and with its physical nature by petrology and metallurgy, turns to the remaining information he can get from his colleagues in the natural sciences. These tell him the environmental conditions in which the people he is studying lived; he now sees his material remains not as isolated artifacts but in the context of their original environments.
Having analyzed his discoveries according to their form, material, and biological association, the archaeologist then comes to the all-important problem of dating. Many material remains of man’s past have no dating problem: they may be, like coins, or most coins, self-dating, or they may be dated by man-made dates in written records. But the great and difficult part of the archaeologist’s work is dating material remains that are not themselves dated. This can be done in one of three ways. Sometimes an object from another culture, the date of which is known (e.g., in the case of pottery, by its style), is found at a previously undated site. Then, using the relative dating principle (see below) the archaeologist reasons that the material found with the imported object is contemporary with it. Conversely, an object from an undated culture may be found at a site whose date is known. Thus nonliterate communities can be dated by their contact with literate ones. This technique is known as cross dating; it was first developed by Sir Flinders Petrie when he dated Palestinian and early Greek (Aegean) sites by reference to Egyptian ones. Much of the prehistoric chronology of Europe in the Neolithic, Bronze, and Early Iron ages is based on cross dating with the ancient Near East.
Aside from cross dating, the archaeologist faced with material in a site having no literate chronological evidence of its own has two other ways of dating his material. The first is relative, the second absolute. Relative dating merely means the relation of the date of anything found to the date of other things found in its immediate neighbourhood. As has already been described, this method also plays a part in cross dating. Stratigraphy is the essence of relative dating. The archaeologist observes the accumulation of deposits in a gravel pit, a peat bog, in the construction of a barrow, or in accumulated settlements in a tell, and, like the geologists who introduced the principles of stratigraphy in the late 18th and early 19th centuries, he can see the succession of layers in the site and can then establish the chronology of different levels of layers relative to each other. In the excavation of a great tell like Ur or Troy the relative chronology of the various levels of occupation is the first thing to be established. Some archaeologists, even until quite recent times, have mistakenly supposed that depth below ground level is itself an indication of antiquity.
But even in properly observed and recorded stratigraphic levels there is often doubt, and the question arises: are all the artifacts and human remains found in the same level contemporary? Is it possible that there could have been later intrusions that have been difficult to distinguish in the field? The analysis of the fluorine content of bones has been very helpful here. Recognized as a valuable technique by French scientists in the 19th century, it was developed in England by K.P. Oakley in the 1950s. If bones in apparently the same geological or archaeological level have markedly different fluorine content, then it is clear that there must have been interference—for example, by a later burial, or by deliberate planting of faked remains, as happened in the case of the Piltdown “Man” hoax in England.
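The logic of the fluorine test is simple enough to sketch in code: bones that have lain in the same deposit for the same length of time should have absorbed similar amounts of fluorine, so a value far from the rest of the group marks a likely intrusion. The figures and threshold below are purely illustrative, not measurements from any real assemblage.

```python
from statistics import median

def flag_intrusions(fluorine_pct, rel_tolerance=0.5):
    """Return indices of bones whose fluorine percentage differs
    markedly from the median of the group, suggesting they did not
    lie in the deposit as long as the rest (a later burial or a
    planted specimen). The tolerance is an illustrative value."""
    med = median(fluorine_pct)
    return [i for i, f in enumerate(fluorine_pct)
            if abs(f - med) > rel_tolerance * med]

# Four bones from apparently the same level; the last has far too
# little fluorine to be contemporary with the others.
print(flag_intrusions([1.9, 2.1, 2.0, 0.1]))  # [3]
```

It was exactly this kind of discrepancy, though established by laboratory assay rather than computation, that exposed the Piltdown remains.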
Absolute man-made chronology based on king lists and records in Egypt and Mesopotamia goes back only 5,000 years. For a long time archaeologists searched for an absolute chronology that went beyond this and could turn their relative chronologies into absolute dates. Clay-varve counting seemed to provide the first answer to this need for a nonhuman absolute chronology. Called geochronology by Baron Gerard De Geer, its Swedish inventor, this method was based on counting the thin layers of clay left behind by the melting glaciers when the European Ice Age came to an end. This gave a chronology of about 18,000 years—three times as long as the man-made chronology based on Egyptian and Mesopotamian king lists. Thus, absolute dates could be established for artifacts from the Late Paleolithic Period, the whole of the Mesolithic Period, or Middle Stone Age, and much of the Early Neolithic Period.
Dendrochronology, the dating of trees by counting their growth rings, was first developed for archaeological purposes by A.E. Douglass in the United States. The application of this method to archaeology depends, obviously, on the use in antiquity of old datable trees in the construction of houses and buildings. It has been possible by dendrochronology to date prehistoric American sites as far back as the 3rd and 4th centuries BCE.
The greatest revolution in prehistoric archaeology occurred in 1948, when Willard F. Libby, at the University of Chicago, developed the process of radioactive carbon dating. In this method, the activity of radioactive carbon (carbon-14) present in bones, wood, or ash found in archaeological sites is measured. Because the rate at which this activity decreases in time is known, the approximate age of the material can be determined by comparing it to carbon-14 activity in presently living organic matter. There have been problems and uncertainties about the application of the radioactive carbon method, but, although it is less than perfect, it has given archaeology a new and absolute chronology that goes back 40,000 years.
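The arithmetic behind the method can be sketched in a few lines of Python. This is an illustrative sketch, not Libby’s procedure: the 5,730-year half-life value and the function name are assumptions introduced here for the example, and real laboratory work involves calibration steps the sketch omits.

```python
import math

# Conventional carbon-14 half-life in years (an assumed constant for this sketch).
HALF_LIFE = 5730.0

def radiocarbon_age(activity_ratio):
    """Estimate a sample's age in years from the ratio of its measured
    carbon-14 activity to that of presently living organic matter.

    N(t)/N0 = (1/2) ** (t / HALF_LIFE), so t = HALF_LIFE * log2(N0 / N).
    """
    return HALF_LIFE * math.log2(1.0 / activity_ratio)

# A sample retaining half the modern activity is one half-life old:
print(radiocarbon_age(0.5))   # 5730.0 years
# A quarter of the modern activity means two half-lives:
print(radiocarbon_age(0.25))  # 11460.0 years
```

The exponential decay law is why the usable range of the method is bounded: after roughly seven half-lives the remaining activity is too faint to measure reliably, which is consistent with the 40,000-year horizon mentioned above.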
Following the revolutionary discovery of radioactive carbon dating, other physical techniques of absolute dating were developed, among them potassium–argon dating and dating by thermoluminescence. Potassium–argon dating has made it possible to establish that the earliest remains of man and his artifacts in East Africa go back at least 2,000,000 years, and probably further.
The last and most important task of the archaeologist is to transmute his interpretation of the material remains he studies into historical judgments. When he is dealing with medieval and modern history he is often doing no more than adding to knowledge already available from documentary sources; but even so, his contribution is often of great importance—for example, in relation to the growth and development of towns and the study of deserted medieval villages. When he is dealing with ancient history and prehistory, he is making a contribution of the greatest importance and often one that is more important than that of purely literary and epigraphical sources. For the prehistoric period, which now appears to stretch from 2,000,000 years ago to about 3000 BCE, archaeological evidence is the only source of knowledge about human activities. But prehistoric remains have always been the most difficult to interpret, precisely because there are no written records to aid in the task. Now, with exact dating techniques at his disposal, the prehistorian is becoming more like the historical archaeologist and is concerned with the periodization and the historical contexts of his finds.
Heraldry, the use of inherited coats of arms and other symbols to show personal identity and family lineage, began on the battlefield in the mid-12th century CE as an easy means to identify medieval royalty and princes who were otherwise unrecognisable beneath their armour. The practice soon spread to nobles and knights, who began to take pride in bearing the colours and arms of their family predecessors. Shields and tunics were particularly good places to display such symbols as lions, eagles, crosses, and geometric forms. As more and more knights employed coats of arms, the designs had to become more sophisticated to differentiate them, and the use of heraldry even spread to institutions such as universities, guilds, and towns. The practice still continues today, with many countries having official colleges of arms which grant new arms to individuals and institutions; although the medieval knight has long since disappeared, the symbolism of heraldry remains a common sight, from company logos to sports teams' badges.
In the Middle Ages, heraldry was known as armoury (in Old French armoirie), and it was distinct from other, more ancient symbols worn by warriors on the battlefield because heraldic arms were both personal and hereditary. The name heraldry derives from the heralds, those officials responsible for listing and proclaiming armorial bearings, especially at medieval tournaments. In the tournaments, a large number of knights either fought in mock cavalry battles or jousted against each other, and it was the heralds’ job to advertise the coming of a tournament, announce the rules under which it would be held, and pass on challenges issued by one knight to another.
It was, above all, the heralds’ task to keep track of all the coats of arms and be able to identify which arms belonged to which name, perhaps listing them in a ‘roll of arms’. By the 14th century CE, as rulers grasped that heralds with their extensive knowledge of who’s who could be very useful sources of information on exactly who they were fighting against in battles, the status of heralds steadily grew. The heralds wore a short tunic (tabard) which was embroidered with the arms of their master. Heralds also acted as messengers and were given safe passage during times of war. Eventually, heralds were organising such important events as weddings and funerals for royalty and the nobility. The specialised study of family arms known as heraldry was now fully established, and it had become a social science with its own vocabulary, history, rules, and social grades.
From the 15th century CE, heralds and apprentice heralds (pursuivants) were employed in colleges of arms, which settled disputes over conflicting arms and examined people’s claims to bear arms in the first place. A whole series of specific rules and conventions of heraldry arose, and it was these colleges of heralds that replaced the monarch as the power which granted arms or removed them (for cowardice or serious crimes). In England, for example, the function was and still is performed by the Royal College of Arms in, appropriately enough, Queen Victoria Street, London. Such offices helped to sort out the confusion which had arisen from anyone, even peasants, creating their own coat of arms, and they accumulated detailed records of all the arms that had ever been created in their jurisdiction. The oldest known English roll of arms dates to c. 1244 CE. Currently housed in the British Library, it is a single sheet, painted on both sides by Matthew Paris, showing 75 coats of arms beginning with the king’s.
Medieval heraldry originated, then, sometime in the 12th century CE as individual warriors - first kings and then knights, too - sought to show their opponents exactly who was hidden behind the armour they faced. The idea was that when the enemy saw the three lions motif of Richard I or the black shield of the Black Prince, they would tremble with fear in the knowledge that they were not about to fight just any old knight. The retainers of a certain knight, and those knights who fought for a particular baron or other nobleman, might also wear their master’s arms and colours in special liveries.
The next step was for the children of celebrated warriors to reuse the arms of their father, and so the idea of a hereditary symbol developed, with even daughters having the right to bear the arms of their parents. The first recognised instance of a coat of arms being passed from one generation to another is that of Geoffrey, Count of Anjou (d. 1151 CE), and his grandson William Longespée (‘Longsword’, d. 1226 CE), both of whom have six lions rampant on the carved shields on their tombs.
The first symbols of identification did not have to be very complicated; indeed, simplicity and boldness made them all the more visible on the battlefield. The most obvious and striking place to carry identification was the shield, which might bear a single specific colour or two colours separated by a horizontal, vertical, or diagonal line. Then, as more and more knights took up the trend, arms had to become more varied if they were to keep their purpose of identification. As a result, not just colours but also symbols were adopted: lions, eagles, tools, flowers, crosses, and stars were all popular choices. Symbols were sometimes stylised because they had to be recognisable from a distance and fit into the peculiar shape of a shield. In addition, certain colours were not mixed, as that made the shield difficult to identify (e.g. black on purple and vice versa).
The next step was to create a unique combination of these designs with certain colours. An additional source of variety arose when two families married and their coats of arms were combined (compounding) - anything from a simple half-and-half arrangement to a miniature version of one coat set in a corner of the other. Symbols could also be added to a coat of arms to indicate the offspring of a holder of arms; for example, a white line across the shield indicated a first son who otherwise bore the same arms as his father. Similarly, a coat of arms might carry an extra symbol to denote that the holder was an illegitimate son of the original bearer of the arms.
Coats of arms could be repeated on other paraphernalia of warfare: on the front and back of surcoats (long sleeveless gowns tied at the waist and worn over armour), on pennons (triangular lance flags), on horse coverings and banners, and hung below the trumpets of heralds. Although it was rare because of the expense, some knights even had their arms engraved on their armour. Coats of arms were not only useful in warfare, though. They were a good way to identify competitors in medieval tournaments, and knights often had to hang their coat of arms outside the inn in which they were staying during the event. From this practice the idea of a permanent inn sign took hold, a fact which explains why many of the oldest pubs in England have such names as the Red Lion, Rose and Crown, Black Swan, and White Horse, all classic heraldic symbols.
Coats of arms might appear in official records, where they were often used as seals instead of signatures, and they were painted on residence walls, appeared in the stained glass windows of churches, were sculpted in stone on building exteriors, painted on tableware, and, of course, represented on the tomb of the person who had borne the right to carry the arms while alive. The shield shape was always maintained and even developed as real shield designs changed over the centuries. When the shield became redundant in the 15th century CE thanks to all-encompassing plate armour, the designs of coats of arms became ever more fanciful and the shield more elaborate. However, the classic kite-shaped shield, although a little squatter, remains the favourite of heralds even today. The notable exceptions are the arms of women, who, from the 14th century CE, began to bear their own coats of arms, typically in a lozenge shape.
As heraldry evolved and it became more important to show off family lineage than to identify oneself on a battlefield, coats of arms became ever more impressive and complex. Such a device is known in heraldic terms as an achievement. No longer merely a shield, it has supporters on either side holding the shield (lions, unicorns, knights, etc.), and the shield might be topped with a crested helmet, and even a crown in royal cases. Scrollwork such as complicated leaf arrangements surrounds the shield, and a motto may be added below which encapsulates a family saying or commemorates a memorable event in the family’s history.
Heraldry employs an extensive range of specific vocabulary so that coats of arms may be precisely described in words (a blazon). The shield, known as the field or ground, is divided into specific areas such as the top (chief), middle (fesse) and bottom (base). The right side of the shield is the dexter and the left side the sinister, with right and left taken from the viewpoint of the person holding the shield from behind, as in battle. The colours used in a shield are known as tinctures and have their own particular heraldic names. The colours used in medieval times were generally limited to gold or yellow (or), silver or white (argent), red (gules), blue (azure), black (sable), green (vert), and purple (purpure).
Green and purple were less commonly used than the others, while in the 15th century CE mulberry (murrey) and orange (tenné) were added to the list. An alternative background to colour was fur, that is, designs which resemble the furs commonly used in medieval aristocratic clothing. The two most popular were ermine (from the stoat), white with many small black tail tips, and vair (from the squirrel), represented by various white and blue patterns.
To increase the combinations, the shield was divided (parted) into different zones of colour by a single vertical (per pale), horizontal (per fess) or diagonal line (per bend or per bend sinister). Alternatively, the shield was divided into four blocks (quarterly), had a chevron, or was divided into four (per saltire) or eight triangles (gyronny). These standard eight variations eventually evolved into a much larger number of divisions and designs. The dividing line between areas of colour could also be altered to provide even more variety, becoming, for example, wavy, crenellated, or zig-zag. Yet another variation was to give the shield a border (a sub-ordinary) or to impose thick lines of colour (ordinaries) such as stripes, chevrons, crosses, and Y-shapes.
Another popular form of identity on shields was to use animate charges (birds and animals) or inanimate charges (everyday objects like spurs, hammers, axes etc.). Monsters from mythology generally only appeared on arms after the medieval period.
The description of a coat of arms had to be precise so that artists could reproduce it without a more expensive visual source. For this reason, a convention of description evolved in which the elements making up a coat of arms were always described in a fixed order, beginning with the field, then the principal ordinary or charge, and then any lesser charges, with the exact position of each noted.
Heraldry still thrives today, of course, and has spread from the individual to the group with clubs, sports teams, and even businesses all creating their own badges of identity. Colleges of arms continue to issue new coats of arms for families, although the process can be both lengthy and expensive so that, even in the more socially mobile societies of the modern world, there is still some distinction and cachet in having the right to them. Coats of arms can still be seen in all manner of places where they send clear visual messages such as those which proclaim state authority on military uniforms and banknotes, those which promote quality and history as on fine porcelain and foodstuffs, and those which promote civic pride such as on fountains and war memorials.