Origin of radiometric dating
Some of the methods have internal checks, so that the data themselves provide good evidence of reliability or lack thereof.
Commonly, a radiometric age is checked by other evidence, such as the relative order of rock units as observed in the field, age measurements based on other decay schemes, or ages on several samples from the same rock unit.
The question of the ages of the Earth and its rock formations and features has fascinated philosophers, theologians, and scientists for centuries, primarily because the answers put our lives in temporal perspective.
Until the 18th century, this question was principally in the hands of theologians, who based their calculations on biblical chronology.
By the mid- to late 1800s, geologists, physicists, and chemists were searching for ways to quantify the age of the Earth. Lord Kelvin and Clarence King calculated the length of time required for the Earth to cool from a white-hot liquid state; they eventually settled on 24 million years. John Joly calculated that the Earth’s age was 89 million years, on the basis of the time required for salt to accumulate in the oceans.
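Joly’s salt clock is a simple residence-time calculation: divide the ocean’s total dissolved sodium by the annual sodium input from rivers. The sketch below uses illustrative round figures chosen only to land near Joly’s ~89-million-year answer; they are not his published values.

```python
# Residence-time estimate in the spirit of Joly's salt-clock argument.
# The two figures below are illustrative round numbers, not Joly's data.

OCEAN_SODIUM_TONNES = 1.4e16          # assumed total sodium dissolved in the oceans
RIVER_INPUT_TONNES_PER_YEAR = 1.58e8  # assumed annual sodium delivered by rivers

def residence_time_years(total, annual_input):
    """Years needed to accumulate `total` at a constant `annual_input`."""
    return total / annual_input

age = residence_time_years(OCEAN_SODIUM_TONNES, RIVER_INPUT_TONNES_PER_YEAR)
print(f"{age / 1e6:.0f} million years")  # on the order of Joly's ~89 million years
```

The method greatly underestimates the Earth’s age because sodium is also removed from seawater, so the quotient measures a steady-state residence time rather than an accumulation time.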
By the early 1960s, most of the major radiometric dating techniques now in use had been tested and their general limitations were known.
No technique, of course, is ever completely perfected and refinement continues to this day, but for more than two decades radiometric dating methods have been used to reliably measure the ages of rocks, the Earth, meteorites, and, since 1969, the Moon.
Second, the rock or mineral must not lose or gain either potassium or argon from the time of its formation to the time of analysis.
Through many experiments over the past three decades, geologists have learned which types of rocks and minerals meet these requirements and which do not.
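For the potassium-argon scheme described above, the closed-system requirement is what makes the standard age equation applicable. A minimal sketch follows; the decay constants are the conventional values for 40K (total λ ≈ 5.543×10⁻¹⁰/yr, electron-capture branch λ ≈ 0.581×10⁻¹⁰/yr), while the function names and inputs are our own illustration, not a published API.

```python
import math

# Conventional 40K decay constants (per year).
LAMBDA_TOTAL = 5.543e-10  # total decay of 40K (to 40Ca and 40Ar)
LAMBDA_EC = 0.581e-10     # branch decaying to 40Ar

def k_ar_age(ar40_per_k40):
    """K-Ar age in years from the measured radiogenic 40Ar/40K ratio.

    Valid only if the sample neither gained nor lost potassium or argon
    between formation and analysis (the closed-system requirement).
    """
    return (1.0 / LAMBDA_TOTAL) * math.log(
        1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_per_k40
    )

def expected_ratio(age_years):
    """Inverse: the 40Ar/40K ratio a closed system of this age would show."""
    return (LAMBDA_EC / LAMBDA_TOTAL) * (math.exp(LAMBDA_TOTAL * age_years) - 1.0)

# Round trip: a closed system one billion years old.
ratio = expected_ratio(1.0e9)
print(f"ratio = {ratio:.4f}, recovered age = {k_ar_age(ratio):.3e} yr")
```

If argon leaks out after formation, the measured ratio and hence the computed age are too low; if excess argon is incorporated, the age is too high, which is why the rock types that remain closed had to be identified experimentally.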