Richter magnitude scale

The so-called Richter scale[1] – also Richter magnitude or Richter magnitude scale, more accurately but informally Richter's magnitude scale – for measuring the strength ("size") of earthquakes refers to the original "magnitude scale" developed by Charles F. Richter and presented in his landmark 1935 paper.[2] This was later revised and renamed the local magnitude scale, denoted ML. Because of various shortcomings of the ML scale, most seismological authorities now use other scales, such as the moment magnitude scale (Mw), to report earthquake magnitudes, but much of the news media still refers to these as "Richter" magnitudes. All magnitude scales retain the logarithmic character of the original and are scaled to have roughly comparable numeric values.


Prior to the development of the magnitude scale, the only measure of an earthquake's strength or "size" was a subjective assessment of the intensity of shaking observed near the epicenter of the earthquake, categorized by various seismic intensity scales such as the Rossi–Forel scale. In 1883 John Milne surmised that the shaking of large earthquakes might generate waves detectable around the globe, and in 1889 Ernst von Rebeur-Paschwitz observed in Germany seismic waves attributable to an earthquake in Tokyo.[3] In the 1920s Harry O. Wood and John A. Anderson developed the Wood–Anderson seismograph, one of the first practical instruments for recording seismic waves.[4] Wood then built, under the auspices of the California Institute of Technology and the Carnegie Institution, a network of seismographs stretching across Southern California.[5] He also recruited the young and unknown Charles Richter to measure the seismograms and locate the earthquakes generating the seismic waves.[6]

In 1931 Kiyoo Wadati showed how he had measured, for several strong earthquakes in Japan, the amplitude of the shaking observed at various distances from the epicenter. He then plotted the logarithm of the amplitude against the distance and found a series of curves that showed a rough correlation with the estimated magnitudes of the earthquakes.[7] Richter resolved some difficulties with this method[8] and then, using data collected by his colleague Beno Gutenberg, he produced similar curves, confirming that they could be used to compare the relative magnitudes of different earthquakes.[9]

To produce a practical method of assigning an absolute measure of magnitude required additional developments. First, to span the wide range of possible values, Richter adopted Gutenberg's suggestion of a logarithmic scale, where each whole-number step represents a tenfold increase in measured amplitude, similar to the magnitude scale used by astronomers for star brightness.[10] Second, he wanted a magnitude of zero to be around the limit of human perceptibility.[11] Third, he specified the Wood–Anderson seismograph as the standard instrument for producing seismograms. Magnitude was then defined as "the logarithm of the maximum trace amplitude, expressed in microns", measured at a distance of 100 km (62 mi). The scale was calibrated by defining a magnitude 0 shock as one that produces (at a distance of 100 km (62 mi)) a maximum amplitude of 1 micron (1 µm, or 0.001 millimeters) on a seismogram recorded by a Wood–Anderson torsion seismograph.[12] Finally, Richter calculated a table of distance corrections,[13] since for distances less than 200 kilometers[14] the attenuation is strongly affected by the structure and properties of the regional geology.[15]
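The definition above can be sketched as a short calculation: the local magnitude is the base-10 logarithm of the maximum trace amplitude plus a distance-dependent correction term, conventionally written −log10(A0). In the sketch below, only the 100 km correction follows directly from the calibration described above (1 µm, i.e. 0.001 mm, at 100 km corresponds to magnitude 0); the other correction values and the function name are illustrative assumptions, not Richter's actual published table.

```python
import math

# Hypothetical excerpt of a distance-correction table: distance (km)
# mapped to the correction term -log10(A0). Only the 100 km entry is
# fixed by the calibration described in the text; the rest are
# illustrative placeholders.
LOG_A0_CORRECTION = {
    25: 1.7,
    50: 2.1,
    100: 3.0,  # calibration: 0.001 mm at 100 km gives magnitude 0
    200: 3.5,
}

def local_magnitude(amplitude_mm, distance_km):
    """ML = log10(A) - log10(A0), where A is the maximum trace
    amplitude in millimeters on a Wood-Anderson seismogram."""
    return math.log10(amplitude_mm) + LOG_A0_CORRECTION[distance_km]

# Calibration check: a 1 micron (0.001 mm) trace at 100 km is magnitude 0,
# and each tenfold increase in amplitude adds one magnitude unit.
print(local_magnitude(0.001, 100))  # 0.0
print(local_magnitude(0.01, 100))   # 1.0
```

The logarithm is what lets a single scale span traces differing by many orders of magnitude, which was the point of adopting Gutenberg's suggestion.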

When Richter presented the resulting scale in 1935, he called it (at the suggestion of Harry Wood) simply a "magnitude" scale.[16] "Richter magnitude" appears to have originated when Perry Byerly told the press that the scale was Richter's and "should be referred to as such."[17] In 1956, Gutenberg and Richter, while still referring to "magnitude scale", labelled it "local magnitude", with the symbol ML, to distinguish it from two other scales they had developed, the surface wave magnitude (MS) and body wave magnitude (MB) scales.[18]