PROCEEDINGS OF THE EIGHTH INTERNATIONAL CONFERENCE ON CIVIL AND STRUCTURAL ENGINEERING COMPUTING
Edited by: B.H.V. Topping
A Review of Procedures used for the Correction of Seismic data
N.A. Alexander, A.A. Chanerley and N. Goorvadoo
School of Engineering, University of East London, Dagenham, United Kingdom
N.A. Alexander, A.A. Chanerley, N. Goorvadoo, "A Review of Procedures used for the Correction of Seismic data", in B.H.V. Topping, (Editor), "Proceedings of the Eighth International Conference on Civil and Structural Engineering Computing", Civil-Comp Press, Stirlingshire, UK, Paper 39, 2001. doi:10.4203/ccp.73.39
Keywords: correction, filter, seismic, phase, decimate, adaptive.
Seismic data is sampled over the duration of an earthquake in the form of accelerograms. However, the accelerometer records the response of the instrument to the ground motion being measured, and the data is inevitably smeared with background noise at both the low- and high-frequency ends of the spectrum. Appropriate signal processing techniques are therefore necessary to extract acceleration data that mimics the actual ground motion.
The first attempt to devise a procedure for correcting recorded accelerograms was made in the 1970s by Trifunac et al. In this procedure, the raw data is first low-pass filtered to remove high-frequency noise. The data is then instrument-corrected, followed by high-pass filtering to remove baseline error. This process makes use of an Ormsby filter. How to choose the cut-off and roll-off frequencies for the Ormsby filter remains unclear. Furthermore, the Ormsby filter also corrupts the phase of the signal, a characteristic of an IIR filter.

In 1991, a computer program (BAP) was developed by Converse at the U.S. Geological Survey (USGS) to process and plot digitised strong-motion earthquake records (Converse). This procedure starts by interpolating the unevenly sampled data to 600 Hz, then removes the baseline error prior to instrument correction. Before decimation, a low-pass filter is applied to suppress aliasing, and the time series is then decimated from 600 Hz to 200 Hz. Finally, a high-pass filter removes the effects of background noise. Although BAP uses an IIR filter in the form of a Butterworth filter, it applies it bi-directionally so as to cancel out the phase distortion. In this sense BAP is the less invasive procedure. The two methods diverge on baseline correction: BAP adopts a least-squares regression on the acceleration data, whereas Trifunac's method uses high-pass filtering. As with the original procedure, the method used by BAP relies on the careful selection of cut-off frequencies. To reduce computation time and memory, both procedures perform the correction on segmented data rather than on the whole available length of the record.
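As a much-simplified illustration of the BAP-style pipeline described above (not the published BAP code — the function name, filter order and 0.1 Hz cut-off below are assumptions for illustration), the steps can be sketched in Python with SciPy:

```python
import numpy as np
from scipy import signal

def bap_style_correct(acc, t, fs_interp=600, fs_out=200, hp_cutoff=0.1):
    """Sketch of a BAP-style pipeline: interpolate, detrend, decimate, high-pass.

    All parameter values are illustrative, not BAP's actual settings.
    """
    # 1. Interpolate the unevenly sampled record onto a uniform 600 Hz grid.
    t_u = np.arange(t[0], t[-1], 1.0 / fs_interp)
    acc_u = np.interp(t_u, t, acc)
    # 2. Baseline correction: subtract a least-squares straight-line fit.
    acc_u = acc_u - np.polyval(np.polyfit(t_u, acc_u, 1), t_u)
    # 3. Anti-alias low-pass filter and decimate 600 Hz -> 200 Hz (factor 3).
    acc_d = signal.decimate(acc_u, fs_interp // fs_out, ftype='fir',
                            zero_phase=True)
    # 4. Bi-directional (zero-phase) Butterworth high-pass to remove
    #    long-period background noise without corrupting the phase.
    sos = signal.butter(4, hp_cutoff, btype='highpass', fs=fs_out, output='sos')
    return signal.sosfiltfilt(sos, acc_d)
```

Filtering forwards and then backwards (`sosfiltfilt`) doubles the effective filter order but cancels the IIR phase distortion, which is the property the abstract attributes to BAP's bi-directional Butterworth stage.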
These two procedures are still in use today for data from US events and from some other parts of the world. In the last decade, researchers have used modified versions of the two procedures to suit their needs; however, the core has remained unchanged. As more emphasis is placed on understanding earthquakes in terms of occurrence, damage limitation and aseismic design, there is a need for better information about corrected earthquake data and for scrutiny of the procedures used to produce it. The corruption of phase information during correction has been a particular cause for concern, as phase plays an important role in the torsional behaviour of asymmetric buildings, Alexander et al. The problem of low- and high-pass filtering has been reviewed by Akkar et al., whose study proposed a procedure that uses segmented polynomials instead of band-pass filters to minimise the filtering out of valuable long-period information. Although earthquake records from US events are widely available, freely available records from Europe have been lacking. With this aim in mind, the European Council Environment and Climate Research Programme has disseminated a database of corrected European records, Ambraseys et al. This programme adopts a correction procedure that includes adaptive filtering, possibly in place of instrument correction; however, the published details are too scant to be of much use.
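The segmented-polynomial idea can be conveyed with a deliberately simplified sketch. The actual procedure of Akkar et al. fits constrained piecewise polynomials; the function below is only a hypothetical illustration of the principle of removing baseline drift segment by segment rather than with a high-pass filter:

```python
import numpy as np

def segmented_baseline(acc, n_segments=3, order=2):
    """Subtract an independent low-order polynomial fit from each segment.

    Hypothetical sketch only: the published method uses constrained
    piecewise fits, not independent per-segment fits as done here.
    """
    out = np.asarray(acc, dtype=float).copy()
    edges = np.linspace(0, out.size, n_segments + 1, dtype=int)
    for lo, hi in zip(edges[:-1], edges[1:]):
        x = np.arange(lo, hi)
        # Least-squares polynomial fit to this segment's baseline drift.
        coeffs = np.polyfit(x, out[lo:hi], order)
        out[lo:hi] -= np.polyval(coeffs, x)
    return out
```

Because no band-pass filter is applied, long-period signal content that a high-pass stage would attenuate is left largely untouched, which is the motivation the abstract attributes to the segmented-polynomial approach.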
As discussed above, corrected seismic data received from various sources arrives without information about the particular correction techniques used. This adds to the difficulty of assessing and selecting suitable data for broader seismic studies. This paper traces the development of the correction procedures, describes their rationale and methodology, and reviews some of the existing procedures used in the correction of seismic data, with particular attention to phase. In the light of this review, modifications to the existing procedures are proposed, and a listing of the software used in this study is provided in the interests of good practice in disseminating information on seismic correction. The study also proposes several metrics for judging the reliability of corrected data, based on power spectral densities, phase spectra, coherence estimates, acceleration response spectra and the short-time Fourier transform, and draws conclusions on the reliability of some of the correction procedures in use.
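Several of the proposed metrics can be computed with standard signal-processing tools. The sketch below (illustrative function name; SciPy assumed) estimates Welch power spectral densities, a magnitude-squared coherence spectrum and a phase spectrum for a raw/corrected record pair:

```python
import numpy as np
from scipy import signal

def reliability_metrics(raw, corrected, fs, nperseg=256):
    """Illustrative spectral metrics for comparing a raw and a corrected record."""
    # Welch power spectral density of each record.
    f, psd_raw = signal.welch(raw, fs=fs, nperseg=nperseg)
    _, psd_cor = signal.welch(corrected, fs=fs, nperseg=nperseg)
    # Magnitude-squared coherence: near 1 where the correction has
    # preserved the underlying signal, lower where it has altered it.
    _, coh = signal.coherence(raw, corrected, fs=fs, nperseg=nperseg)
    # Unwrapped phase spectrum of the corrected record, for checking
    # that the correction has not distorted the phase.
    phase = np.unwrap(np.angle(np.fft.rfft(corrected)))
    return f, psd_raw, psd_cor, coh, phase
```

Comparing the two PSDs shows what energy the correction removed, while the coherence spectrum localises, frequency by frequency, where the corrected record still tracks the raw one.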