Similar Documents
20 similar documents found.
1.
In one program of Project VELA UNIFORM, the Air Force Cambridge Research Laboratories and Texas Instruments developed the ocean-bottom seismograph as a research tool for studying acoustic wave energy, generated either by earthquakes or by explosive sources, at the solid earth-water interface. The seismograph described in this article is a third-generation unit that has progressed through more than four years of development, design, and field experiments. The present configuration is spherical, 40 inches in diameter, and has been tested successfully in various locations to a depth of 24 000 feet. In addition to the three-component seismometer system, the primary components contained within the seismograph are a digital clock with a 40-day capacity and 0.1-second accuracy, seismic amplifiers capable of detecting noise levels of 0.1 microvolt or less, a magnetic tape transport with a 30-day recording capacity for signals below 10 c/s, and an internal battery power supply for all components. In operation, the seismograph descends in free fall to the desired depth, remains unattended and untethered, and is retrieved on sonar code command or at a preset time by decoupling the anchor base through a cocked spring mechanism triggered by the fusing of a small steel wire. Results to date have proved the feasibility of such an ocean-bottom data system, including the important capability to recall the package by sonar code command.

2.
Measurement methods and analysis of the electromagnetic environment at radio astronomy stations   (Total citations: 2, self-citations: 0, citations by others: 2)
Radio astronomy is the science of receiving and studying radio signals from distant celestial objects. Because these signals are extremely weak, roughly a million times fainter than communication-system signals, radio telescopes place extremely strict demands on the surrounding electromagnetic environment. A radio astronomy site must be located in an area with a benign electromagnetic environment and must be protected against interference during subsequent operation. It is therefore very important to establish a set of equipment and methods for testing the electromagnetic environment at radio astronomy sites, to carry out actual measurements, and to analyze and evaluate the measured data scientifically and soundly, as a basis for later radio astronomy observation and for research on protecting the electromagnetic environment.

3.
陈双远, 张芳, 齐琳琳, 韩成鸣, 曾丽, 许方宇 《红外与激光工程》2019, 48(12): 1203010-1203010(9)
The level of atmospheric background radiation determines the limiting sensitivity of an infrared telescope system and directly affects system design; background radiation is also an important indicator of the observing performance of an astronomical site. The atmospheric infrared background radiation at several representative Chinese sites, including the Ali Observatory, the Delingha Station, and the Huairou Station, was measured, and in particular first-hand data on the atmospheric infrared background at the Ali Observatory were obtained. The measurements show that the Ali Observatory has both the weakest atmospheric infrared background and the smallest diurnal variation in mean radiance among the sites: its maximum mean radiance is 1.30×10⁻⁶ W·cm⁻²·sr⁻¹ and the maximum diurnal variation in mean radiance is only 18%, giving it the best infrared background limit, followed by the Delingha Station. A comparison of the measured sky-scanning radiances with MODTRAN simulations shows that for high-altitude Tibetan Plateau sites in China such as Ali, both the standard and the actual atmospheric models produce results that differ considerably from the measurements.

4.
Measurements of atmospheric transmittance in the 7–10 cm⁻¹ window were made over a period of one month in the dry season (about 300 h of observation time) from the F. Duarte Observatory (3,600 m above sea level) and from the experimental station at Pico Espejo (4,765 m above sea level). Correlations with the vertical water vapor density profile have been made.

5.
Multichannel seismic deconvolution   (Total citations: 1, self-citations: 0, citations by others: 1)
Deals with Bayesian estimation of 2D stratified structures from echosounding signals. This problem is of interest in seismic exploration, but also in nondestructive testing and medical imaging. The proposed approach is a multichannel Bayesian deconvolution method for the 2D reflectivity, based on a theoretically sound prior stochastic model. The Markov-Bernoulli random field representation introduced by Idier et al. (1993) is used to model the geometric properties of the reflectivity, and emphasis is placed on the representation of the amplitudes and on the deconvolution algorithms. It is shown that the algorithmic structure and computational complexity of the proposed multichannel methods are similar to those of single-channel Bernoulli-Gaussian (B-G) deconvolution procedures, but that explicit modeling of the stratified structure yields significantly better performance. Simulation results and examples of real-data processing illustrate the performance and practicality of the multichannel approach.

6.
樊浩 《电子测试》2016,(8):153-155
从"地震沉积学"概念提出到现在,地震沉积学已经经历了十多年的发展,其理论体系及方法技术正在不断地完善之中.地震沉积学结合地质规律,尤其是沉积环境及沉积相模式的指导,利用三维地震信息和现代地球物理技术对沉积岩的沉积体系、沉积相平面展布以及沉积发育史进行宏观研究.地震沉积学的技术手段包括相位转换、地层切片及分频解释等.其目前研究的热点问题有地层切片的建立,地震资料相位转换及分频解释等.  相似文献   

7.
Migration of seismic data   (Total citations: 2, self-citations: 0, citations by others: 2)
Reflection seismology seeks to determine the structure of the earth from seismic records obtained at the surface. The processing of these data by digital computers is aimed at rendering them more comprehensible geologically. Seismic migration is one of these processes. Its purpose is to "migrate" the recorded events to their correct spatial positions by backward projection or depropagation based on wave-theoretical considerations. During the last 15 years several methods have appeared on the scene. The purpose of this paper is to provide an overview of the major advances in this field. Migration methods examined here fall into three major categories: 1) integral solutions, 2) depth extrapolation methods, and 3) time extrapolation methods. Within these categories, the pertinent equations and numerical techniques are discussed in some detail. The topic of migration before stacking is treated separately, with an outline of two different approaches to this important problem.
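To make the depth-extrapolation category concrete, here is a minimal sketch of constant-velocity phase-shift migration of a zero-offset section under the exploding-reflector assumption. It is an illustrative toy, not the paper's algorithm: the function name and parameters are invented, and the sign of the phase shift depends on the FFT convention used.

```python
import numpy as np

def phase_shift_migration(section, dt, dx, v, dz, nz):
    """Constant-velocity phase-shift migration of a zero-offset section.
    section: 2D array (nt, nx). Returns a depth image (nz, nx)."""
    nt, nx = section.shape
    P = np.fft.fft2(section)                          # (omega, kx) domain
    w = 2 * np.pi * np.fft.fftfreq(nt, dt)            # angular frequency
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)           # horizontal wavenumber
    W, KX = np.meshgrid(w, kx, indexing='ij')
    v_half = v / 2.0                                  # exploding-reflector velocity
    kz2 = (W / v_half) ** 2 - KX ** 2
    propagating = kz2 > 0                             # discard evanescent energy
    kz = np.sqrt(np.where(propagating, kz2, 0.0))
    # one-step downward-continuation operator (sign depends on FFT convention)
    step = np.exp(1j * kz * dz) * propagating
    image = np.zeros((nz, nx))
    for iz in range(nz):
        # imaging condition: evaluate the continued wavefield at t = 0
        image[iz] = np.real(np.fft.ifft(P.sum(axis=0))) / nt
        P = P * step
    return image
```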

8.
Multidimensional filtering has been applied during the recording and processing of seismic reflection data since the earliest days of analog recording on paper records. As the state of the art has evolved to digital recording and processing, and acquisition has expanded to include dense spatial sampling over a large number of channels, more sophisticated multichannel filters have been developed. These include simple "mixes" (spatial convolution with small operators), two-dimensional Fourier transforms with appropriate limits in spatial and temporal frequencies, and Radon transform techniques that are more meaningful both geometrically and geophysically. All multidimensional filtering limits the data in some fashion, be it temporal frequency bandwidth, spatial frequency bandwidth, limits in apparent horizontal phase velocity across a recording array (antenna), or limits in apparent wave-propagation velocity. These limits are generally defined to pass regions of high signal level and reject regions of high noise level. As more recent techniques have emerged, such as tau-p transforms (special cases of the Radon transform), filter limits may be described in terms of geophysical knowledge as well as signal characteristics. Thus additional information, derived from regional geophysical knowledge, may be added to the data-processing sequence. Many new considerations and potential problems have arisen as new multidimensional filtering techniques have been developed, including spatial sampling, aliasing with different transforms, maintenance of dynamic range, and the effects of multidimensional filtering at different points in the processing sequence.
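As one concrete instance of the two-dimensional Fourier-transform filtering described above, the sketch below rejects events whose apparent horizontal velocity falls below a cutoff (a simple fan/velocity filter). It is a minimal illustration; the cutoff value and the absence of a taper are simplifications, not recommendations from the paper.

```python
import numpy as np

def fk_velocity_filter(section, dt, dx, v_min):
    """Zero all f-k components whose apparent horizontal velocity |f/kx|
    is below v_min (e.g. to suppress ground roll). section: (nt, nx)."""
    spec = np.fft.fft2(section)
    f = np.fft.fftfreq(section.shape[0], dt)       # temporal frequency, Hz
    kx = np.fft.fftfreq(section.shape[1], dx)      # spatial frequency, cycles/m
    F, KX = np.meshgrid(f, kx, indexing='ij')
    with np.errstate(divide='ignore', invalid='ignore'):
        v_app = np.abs(F / KX)                      # apparent horizontal velocity
    keep = (KX == 0) | (v_app >= v_min)             # vertically travelling energy kept
    return np.real(np.fft.ifft2(spec * keep))
```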

9.
A seismic event detection and source location (SEDSL) scheme that uses single-station (three-component) seismic data to analyze seismic events is presented. Each station monitors ground motion along three orthogonal directions: vertical, north, and east. To detect events, SEDSL combines the signals on the three components in a manner analogous to beam steering. Once an event is detected, SEDSL estimates the bearing of the source with respect to the receiver by estimating the polarization direction of the initial compressional phase. The range estimate is then obtained from the relative arrival times of the different phases.
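A minimal sketch of the bearing step: estimate the polarization direction of the initial P arrival from the principal eigenvector of the three-component covariance matrix in a short window after the detected onset, then convert it to a back-azimuth. This is generic polarization analysis, not necessarily the exact SEDSL estimator; the window length and the sign convention are illustrative.

```python
import numpy as np

def p_wave_bearing(z, n, e, onset, win):
    """Estimate an azimuth (degrees from north) from the polarization of the
    initial P phase on a three-component record (vertical, north, east)."""
    seg = np.vstack([z[onset:onset + win],
                     n[onset:onset + win],
                     e[onset:onset + win]])
    seg = seg - seg.mean(axis=1, keepdims=True)
    cov = seg @ seg.T / win                     # 3x3 covariance of the window
    eigval, eigvec = np.linalg.eigh(cov)
    p = eigvec[:, -1]                           # principal polarization direction
    # resolve the 180-degree ambiguity using the sign of the vertical motion
    # (one common convention; the paper may use another)
    if p[0] < 0:
        p = -p
    az = np.degrees(np.arctan2(p[2], p[1]))     # east over north
    return az % 360.0
```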

10.
Deconvolution is one of the most important aspects of seismic signal processing. The objective of the deconvolution procedure is to remove the obscuring effect of the wavelet replicas that make up the seismic trace and thereby obtain an estimate of the reflection-coefficient sequence. This paper introduces a new deconvolution algorithm. Optimal distributed estimators and smoothers are utilized in the proposed solution. The new distributed methodology, well suited to a multisensor environment such as seismic signal processing, is compared with the centralized approach with respect to computational complexity and architectural efficiency. It is shown that the distributed approach greatly outperforms the currently used centralized methodology, offering flexibility in the design of the data-fusion network.

11.
A new efficient technique for classifying signals, in the form of earthquake-induced ground-acceleration time histories, according to the damage that they cause in buildings is presented for the first time. A training set of real seismic accelerograms with well-known damage effects is utilised, and fuzzy representations of prototype signals are extracted. These prototypes are selected with respect to the architectural and structural damage caused by the seismic-acceleration time histories. The classification of unknown accelerograms takes place through a fuzzy comparison with the prototypes, and each is assigned to the most similar prototype. Real seismic time-acceleration records were used for testing the algorithm, and the high percentage of correctly recognised signals proves its effectiveness. Correct classification rates of up to 84% are achieved.
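A rough sketch of the prototype-matching idea, assuming hypothetical accelerogram features (peak ground acceleration, strong-motion duration, dominant frequency) and Gaussian membership functions aggregated by a fuzzy AND; the paper's actual feature set, prototypes, and membership shapes may differ.

```python
import numpy as np

def fuzzy_classify(features, prototypes, spreads):
    """Assign a feature vector to the most similar damage prototype.
    features:   1D array of signal features (e.g. PGA, duration, dominant frequency).
    prototypes: dict mapping class label -> prototype feature vector.
    spreads:    1D array of membership widths, one per feature."""
    best_label, best_score = None, -1.0
    for label, proto in prototypes.items():
        # Gaussian membership per feature, aggregated by the minimum (fuzzy AND)
        memberships = np.exp(-0.5 * ((features - proto) / spreads) ** 2)
        score = memberships.min()
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# illustrative prototypes for three damage levels (all values are hypothetical)
prototypes = {
    "light":    np.array([0.05, 5.0, 4.0]),    # PGA [g], duration [s], dominant freq [Hz]
    "moderate": np.array([0.20, 12.0, 2.5]),
    "severe":   np.array([0.45, 25.0, 1.5]),
}
spreads = np.array([0.10, 8.0, 1.5])
print(fuzzy_classify(np.array([0.30, 18.0, 2.0]), prototypes, spreads))
```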

12.
Transform methods for seismic data compression   (Total citations: 7, self-citations: 0, citations by others: 7)
The authors consider the development and evaluation of transform coding algorithms for the storage of seismic signals. Transform coding algorithms are developed using the discrete Fourier transform (DFT), the discrete cosine transform (DCT), the Walsh-Hadamard transform (WHT), and the Karhunen-Loeve transform (KLT). These are evaluated and compared to a linear predictive coding algorithm for data rates ranging from 150 to 550 bit/s. The results reveal that sinusoidal transforms are well-suited for robust, low-rate seismic signal representation. In particular, it is shown that a DCT coding scheme faithfully reproduces the seismic waveform at approximately one-third of the original rate.
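A minimal sketch of the DCT branch of such a coder: block the trace, take an orthonormal DCT, quantize uniformly, and measure the reconstruction SNR. The block length and step size are illustrative, and the entropy-coding stage that would produce the final bit rate is omitted.

```python
import numpy as np
from scipy.fft import dct, idct

def dct_code_blocks(trace, block=128, step=0.05):
    """Block DCT with uniform quantization: returns the quantized coefficient
    indices (what an entropy coder would see), the reconstruction, and the SNR."""
    n = len(trace) - len(trace) % block           # drop the ragged tail for simplicity
    blocks = trace[:n].reshape(-1, block)
    coeffs = dct(blocks, type=2, norm='ortho', axis=1)
    q = np.round(coeffs / step).astype(int)       # uniform quantizer indices
    recon = idct(q * step, type=2, norm='ortho', axis=1).reshape(-1)
    snr = 10 * np.log10(np.sum(trace[:n]**2) / np.sum((trace[:n] - recon)**2))
    return q, recon, snr
```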

13.
Seismic energy creates acoustic waves that can be recorded and measured. The signal-to-noise ratio of these recordings strongly influences what exploration seismology can resolve. Through the use of computers, seismic interpretation can be performed more effectively and efficiently.

14.
Analysis of a polarized seismic wave model   (Total citations: 2, self-citations: 0, citations by others: 2)
We present a model for polarized seismic waves where the data are collected by three-component geophone receivers. The model is based on two parameters describing the polarization properties of the waveforms: the ellipticity and the orientation angle of the polarization ellipse. The model describes longitudinal waveforms (P-waves) as well as elliptically polarized waves. For the latter, the direction of propagation lies in the plane spanned by the ellipse's major and minor axes; Rayleigh waves are treated as a special case. We analyze the identifiability of the models and derive the Cramer-Rao and mean-square-angular-error (MSAE) bounds for one or two three-component geophones.
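For intuition about the two model parameters, a simple (non-optimal) estimate of orientation and ellipticity can be obtained from the 2x2 sample covariance of the two components spanning the polarization plane; this is a generic estimator, not the Cramer-Rao-efficient one analyzed in the paper.

```python
import numpy as np

def ellipse_parameters(u, v):
    """Estimate orientation (radians) and ellipticity of the polarization ellipse
    from two orthogonal components u, v spanning the polarization plane."""
    x = np.vstack([u - u.mean(), v - v.mean()])
    cov = x @ x.T / x.shape[1]
    eigval, eigvec = np.linalg.eigh(cov)                  # eigenvalues ascending
    major, minor = eigval[1], eigval[0]
    orientation = np.arctan2(eigvec[1, 1], eigvec[0, 1])  # angle of the major axis
    ellipticity = np.sqrt(minor / major)                  # 0: linear (P-wave), 1: circular
    return orientation, ellipticity
```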

15.
For some classes of signals, particularly those dominated by low-frequency components such as seismic data, first and higher-order differences between adjacent samples are generally smaller than the samples themselves. This paper evaluates the differencing approach for losslessly compressing several classes of seismic signals. Three different approaches employing derivatives are developed and applied. The performance of the presented techniques and of the adaptive linear predictor is evaluated and compared for the lossless compression of different seismic signal classes. The proposed differentiator approach yields residual energy comparable to that obtained with the linear-predictor technique. The two main advantages of the differentiation method are: (1) the coefficients are fixed integers that do not have to be encoded; and (2) greatly reduced computational complexity relative to existing algorithms. These advantages are particularly attractive for real-time processing, and they have been confirmed experimentally by compressing different seismic signals. Sample results are also given, including the compression ratio, i.e., the ratio of the number of bits per sample without compression to that with compression using arithmetically encoded residues.
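A minimal sketch of the differencing idea on integer samples: first and second differences shrink the empirical entropy (bits/sample) of a low-frequency-dominated signal, and the operation is exactly invertible given the leading samples. The synthetic signal and the entropy estimate below stand in for real data and for the arithmetic coder used in the paper.

```python
import numpy as np

def entropy_bits(x):
    """Empirical zeroth-order entropy of an integer sequence, in bits per sample."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def differencing_residues(samples):
    """First and second integer differences; reconstruction is lossless via
    cumulative summation, provided the leading sample(s) are also stored."""
    d1 = np.diff(samples, n=1)
    d2 = np.diff(samples, n=2)
    return d1, d2

# synthetic low-frequency-dominated integer signal (stand-in for digitizer counts)
samples = np.cumsum(np.random.randint(-50, 50, 10000))
d1, d2 = differencing_residues(samples)
print(entropy_bits(samples), entropy_bits(d1), entropy_bits(d2))
```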

16.
Outlines an eigen-structure algorithm for passive seismic array data. The approach combines broadband multicomponent and narrowband multi-sensor eigen-structure routines. Synthetic data are used to demonstrate the algorithm's ability to resolve multiple signals in both bearing and wavenumber. Analysis of three-component seismic array data is also shown.
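The narrowband multi-sensor part of such an eigen-structure scheme can be sketched as a MUSIC pseudo-spectrum over wavenumber for a uniform line of sensors; the sensor spacing, number of sources, and wavenumber grid are illustrative, and the broadband multicomponent combination is not shown.

```python
import numpy as np

def music_wavenumber_spectrum(snapshots, dx, n_sources, k_grid):
    """MUSIC pseudo-spectrum over wavenumber for a uniform linear array.
    snapshots: complex array (n_sensors, n_snapshots) of narrowband data."""
    n_sensors, n_snap = snapshots.shape
    R = snapshots @ snapshots.conj().T / n_snap           # sample covariance
    eigval, eigvec = np.linalg.eigh(R)                    # eigenvalues ascending
    En = eigvec[:, :n_sensors - n_sources]                # noise subspace
    x = np.arange(n_sensors) * dx                         # sensor positions
    spectrum = []
    for k in k_grid:
        a = np.exp(-1j * k * x)                           # plane-wave steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)                             # peaks at source wavenumbers
```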

17.
Multiple reflections in seismic data are generally considered unwanted noise that often seriously impedes correct mapping of the subsurface geology in the search for oil and gas reservoirs. We train a backpropagation neural network to recognize and remove these multiple reflections and thereby bring out the primary reflections underneath. The training data consist of model data containing all multiples and the corresponding seismic sections containing only the primary arrivals. The basis for the modeling is data from a real well log that is typical of the area in which the data were gathered. In contrast to existing conventional deconvolution methods such as the Wiener filter, the neural network does not depend on restrictive assumptions about the underlying model, and it has the potential to succeed in cases where other methods fail. A further advantage of the neural-net approach is that extensive use can be made of a priori knowledge about the geology, which is present in the form of well-log data. Tests with realistic data show the ability of the neural network to extract the desired information.
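A toy sketch of the idea on synthetic data, using scikit-learn's MLPRegressor in place of the authors' backpropagation network: sliding windows of a trace contaminated by a crude surface multiple are mapped to the corresponding primary-only samples. The wavelet, the multiple model, the window length, and the network size are all invented for illustration, and training and evaluation here use the same synthetic trace.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
nt, half = 2000, 15

# synthetic primaries: sparse reflectivity convolved with a Ricker-like wavelet
refl = np.zeros(nt)
refl[rng.choice(nt, 40, replace=False)] = rng.normal(0, 1, 40)
t = np.arange(-30, 31) * 0.004
wav = (1 - 2 * (np.pi * 25 * t) ** 2) * np.exp(-(np.pi * 25 * t) ** 2)
primary = np.convolve(refl, wav, mode='same')

# crude surface multiple: delayed, sign-flipped, attenuated copy of the primaries
multiple = np.zeros(nt)
delay = 120
multiple[delay:] = -0.5 * primary[:-delay]
trace = primary + multiple

# training pairs: window of the full trace -> primary sample at the window centre
X = np.array([trace[i - half:i + half + 1] for i in range(half, nt - half)])
y = primary[half:nt - half]
net = MLPRegressor(hidden_layer_sizes=(40,), max_iter=2000, random_state=0).fit(X, y)
primary_est = net.predict(X)
print("relative residual energy:", np.sum((primary_est - y) ** 2) / np.sum(y ** 2))
```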

18.
Low bit-rate efficient compression for seismic data   (Total citations: 3, self-citations: 0, citations by others: 3)
Some marine seismic data sets exceed 10 Tbytes, and seismic surveys with volumes of around 120 Tbytes are planned, so the need to compress these very large seismic data files is imperative. Seismic data, however, are quite different from the typical images used in image processing and multimedia applications: their dynamic range can exceed 100 dB in theory, the data are often highly oscillatory, the x and y directions carry different physical meanings, and a significant amount of coherent noise is often present. Up to now, many of the algorithms used for seismic data compression have been based on some form of wavelet or local cosine transform, combined with a uniform or quasi-uniform quantization scheme and, finally, Huffman coding. With this family of compression algorithms, results acceptable to geophysicists are achieved only at low to moderate compression ratios; for higher compression ratios or higher decibel quality, significant compression artifacts are introduced into the reconstructed images, even with high-dimensional transforms. The objective of this paper is to achieve a higher compression ratio than the wavelet/uniform-quantization/Huffman-coding family of schemes, with a comparable level of residual noise; the goal is to exceed 40 dB in the decompressed seismic data sets. Several established compression algorithms are reviewed and some new ones are introduced. All of these techniques are applied to a representative collection of seismic data sets, and their results are documented in this paper. One conclusion is that the adaptive multiscale local cosine transform with different window sizes performs well on all the seismic data sets and outperforms the other methods from the SNR point of view. The described methods cover a wide range of data sets, and each data set has its own best-performing method within this collection. The experiments were performed on four different seismic data sets. Special emphasis was placed on achieving faster processing speed, another critical issue examined in the paper. Some of these algorithms are also suitable for multimedia-type compression.
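The baseline family the paper measures itself against (wavelet transform, uniform quantization, entropy coding) can be sketched with PyWavelets as below; the wavelet, decomposition level, and quantization step are illustrative, and an empirical entropy estimate stands in for the Huffman stage.

```python
import numpy as np
import pywt

def wavelet_compress_snr(section, wavelet='db4', level=4, step=0.02):
    """Wavelet transform + uniform quantization of a 2D seismic section.
    Returns reconstruction SNR (dB) and the empirical bits/sample of the indices."""
    coeffs = pywt.wavedec2(section, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    q = np.round(arr / step)                          # uniform quantizer indices
    recon = pywt.waverec2(
        pywt.array_to_coeffs(q * step, slices, output_format='wavedec2'), wavelet)
    recon = recon[:section.shape[0], :section.shape[1]]
    snr = 10 * np.log10(np.sum(section**2) / np.sum((section - recon)**2))
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    bits = float(-(p * np.log2(p)).sum())             # stand-in for the entropy coder
    return snr, bits
```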

19.
Imaging and inversion of zero-offset seismic data   (Total citations: 1, self-citations: 0, citations by others: 1)
We present some of the basic problems of seismic inverse theory and some of the basic principles used in solving them. The one-dimensional (1-D) acoustic inverse problem is treated as an introduction to the more important and difficult three-dimensional (3-D) imaging and inverse problems. We argue that certain aspects of seismic data (e.g., CMP stacking and band- and aperture-limiting) are sufficient to prevent a useful generalization to 3-D of some very sophisticated 1-D solution techniques. This leaves such simple and relatively crude methods as Born inversion as useful candidates for generalization from one to three dimensions. A close investigation of 1-D Born inversion yields fairly general principles for overcoming its inadequacies, which are limited accuracy in mapping size and location of reflection events. These principles are based on ray theory, and lead directly to analogous improvements in higher dimensional seismic inversion techniques. These improvements, combined with others which are based on the fact that seismic data reside in the high-frequency regime as far as mapping isolated reflectors is concerned, yield an integral solution of the higher dimensional acoustic inverse problem which has been shown to be useful in practice.

20.
Adaptive prediction was applied to the problem of detecting small seismic events in microseismic background noise. The Widrow-Hoff LMS adaptive filter [1], [2] used in a prediction configuration is compared with two standard seismic filters as an onset indicator. Examples demonstrate the technique's usefulness with both synthetic and actual seismic data.
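A minimal sketch of the Widrow-Hoff LMS predictor used as an onset indicator: the filter predicts each sample from the preceding ones, and a sudden rise in the prediction-error envelope marks a candidate event. The filter length and step size are illustrative, and the step is power-normalized for stability (an NLMS-style variant rather than the plain LMS of [1], [2]).

```python
import numpy as np

def lms_prediction_error(x, order=16, mu=0.01):
    """One-step-ahead LMS prediction; returns the running prediction error,
    whose envelope rises sharply at the onset of a seismic event."""
    w = np.zeros(order)
    err = np.zeros(len(x))
    power = np.mean(x**2) + 1e-12           # power normalization for stability
    for n in range(order, len(x)):
        past = x[n - order:n][::-1]         # most recent samples first
        pred = w @ past
        err[n] = x[n] - pred
        w = w + (mu / power) * err[n] * past
    return err
```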
