Similar Documents
 20 similar documents found.
1.
Duplex DNA must remain stable when not in use to protect the genetic material. However, the two strands must be separated whenever genes are copied or expressed to expose the coding strand for synthesis of complementary RNA or DNA bases. Therefore, the double-stranded structure must be relatively easy to take apart when required. These conflicting biological requirements have important implications for the mechanical properties of duplex DNA. Considerable insight into the forces required to denature DNA has been provided by nanomanipulation experiments, which measure the mechanical properties of single molecules in the laboratory. This paper describes recent computer simulation methods that have been developed to mimic nanomanipulation experiments and which, quite literally, 'destruction test' duplex DNA in silico. The method is verified by comparison with single-molecule stretching experiments that measure the force required to unbind the two DNA strands. The model is then extended to investigate the thermodynamics of DNA bending and twisting. This is of biological importance as the DNA must be very tightly packaged to fit within the nucleus, and is therefore usually found in a highly twisted or supercoiled state (in bacteria) or wrapped tightly around histone proteins into a densely compacted structure (in animals). In particular, these simulations highlight the importance of thermal fluctuations and entropy in determining the biomechanical properties of DNA. This has implications for the action of DNA-processing molecular motors, and also for nanotechnology. Biological machines are able to manipulate single molecules reliably on an energy scale comparable to that of thermal noise. The hope is that understanding the statistical mechanisms that a cell uses to achieve this will be invaluable for the future design of 'nanoengines' engineered to perform new technological functions at the nanoscale.
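Force-extension data from such stretching experiments are commonly interpreted with the worm-like chain (WLC) interpolation formula. The sketch below is a minimal illustration of that standard relation only, not the simulation method of the paper; the persistence length and temperature are assumed values typical of double-stranded DNA.

```python
import numpy as np

# Worm-like chain (Marko-Siggia) interpolation formula: force required to
# hold a polymer at relative extension x = z / L_contour,
#   F(x) = (kT / P) * (1 / (4 * (1 - x)^2) - 1/4 + x).
# The persistence length P = 50 nm and T = 298 K are assumed values typical
# of double-stranded DNA, not parameters taken from the paper.

kT = 1.380649e-23 * 298.0      # thermal energy in J
P = 50e-9                      # persistence length in m (assumed)

def wlc_force(x):
    """Stretching force in newtons at relative extension 0 <= x < 1."""
    x = np.asarray(x, dtype=float)
    return (kT / P) * (1.0 / (4.0 * (1.0 - x) ** 2) - 0.25 + x)

if __name__ == "__main__":
    for x in (0.1, 0.5, 0.9, 0.97):
        print(f"x = {x:4.2f}   F = {wlc_force(x) * 1e12:6.2f} pN")
```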

2.
3.
This simulation is intended for use at low, intermediate, and high bit densities. While simpler algorithms can be found for use with either very low bit densities only, or with very high bit densities only, they lack the generality needed for this purpose. The simulation is composed of three distinct sequential computations. First, using a noninteracting-particle idealized M_r-H model and the arctangent head field formula, the tape magnetization is computed at 40 points per bit length in each of 5 laminae. Second, this magnetization is averaged throughout the tape thickness and harmonically analyzed. Finally, each harmonic component is weighted according to a demagnetizing-remagnetizing factor given previously, and the final output voltage waveform is computed. Linearity and superposition are thus assumed for all processes following the obviously nonlinear record mechanism. Computed outputs are compared with experimental results for both single-transition and multiple-transition inputs. The widths of computed and measured isolated output pulses differ by no more than 10 percent, without the adoption of physically unreasonable parameters. Output signals were computed for multiple-transition inputs up to 20 000 flux reversals per inch (fr/in), and these compare well with experimental results up to 15 000 fr/in.
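The arctangent head field referred to above is commonly written in the Karlqvist form shown in the sketch below; the gap length, head-medium spacings and deep-gap field used here are placeholder values, and the harmonic-analysis stage is reduced to a plain FFT of the laminae-averaged profile, so this is only a schematic of the three-stage computation rather than the authors' code.

```python
import numpy as np

# Longitudinal component of the Karlqvist (arctangent) head field:
#   Hx(x, y) = (Hg / pi) * [arctan((g/2 + x) / y) + arctan((g/2 - x) / y)]
# Gap length g, spacing y and deep-gap field Hg are illustrative placeholders.
def head_field_hx(x, y, g=0.5e-6, Hg=1.0):
    return (Hg / np.pi) * (np.arctan((g / 2 + x) / y) +
                           np.arctan((g / 2 - x) / y))

# Sample the field at 40 points per bit length in each of 5 laminae,
# average through the "tape thickness", then take the harmonic content.
bit_length = 1.0e-6
x = np.linspace(-bit_length / 2, bit_length / 2, 40)
laminae_depths = np.linspace(0.1e-6, 0.5e-6, 5)      # distance from the head

profile = np.mean([head_field_hx(x, y) for y in laminae_depths], axis=0)
harmonics = np.abs(np.fft.rfft(profile))             # crude harmonic analysis

print("first few harmonic magnitudes:", np.round(harmonics[:5], 3))
```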

4.
5.
6.
We propose a method of calculating the heat-transfer coefficients that is based on comparison of the operational data and the equations of heat-transfer dynamics. Translated from Inzhenerno-Fizicheskii Zhurnal, Vol. 16, No. 3, pp. 427–479, March, 1969.
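The abstract gives no details of the procedure, so the sketch below is only a generic illustration of the underlying idea: fitting a heat-transfer coefficient in a lumped dynamic energy balance to measured operating data by least squares. The model form, parameter values and synthetic data are all assumptions, not taken from the paper.

```python
import numpy as np

# Lumped dynamic energy balance for a body of heat capacity m*c exchanging
# heat with a fluid at temperature T_f through area A:
#     m*c * dT/dt = h * A * (T_f - T)
# Given sampled "operational data" T(t), h follows from a least-squares fit
# of dT/dt against A * (T_f - T) / (m*c). All numbers are synthetic.

m_c, A = 5.0e4, 2.0                        # J/K and m^2 (assumed)
h_true = 25.0                              # W/(m^2 K), used only to make data

t = np.linspace(0.0, 600.0, 301)
T_f = 80.0                                 # constant fluid temperature, deg C
T = T_f - 60.0 * np.exp(-h_true * A / m_c * t)            # exact response
T += np.random.default_rng(0).normal(0.0, 0.05, t.size)   # measurement noise

dT_dt = np.gradient(T, t)
regressor = A * (T_f - T) / m_c
h_est = np.sum(regressor * dT_dt) / np.sum(regressor ** 2)

print(f"estimated h = {h_est:.1f} W/(m^2 K)  (true value {h_true})")
```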

7.
8.
A novel method of noise reduction has been tested for mammography using computer-simulated images for which the truth is known exactly. This method is based on comparing two images. The images are compared at different scales, using a cross-correlation function as a measure of similarity to define the image modifications in the wavelet domain. The computer-simulated images were calculated for noise-free primary radiation using a quasi-realistic voxel phantom. Two images corresponding to slightly different geometry were produced. Gaussian noise was added with certain properties to simulate quantum noise. The added noise could be reduced by >70% using the proposed method without any noticeable corruption of the structures. It is possible to save 50% dose in mammography by producing two images (each 25% of the dose for a standard mammogram). Additionally, a reduction of the anatomical noise and, therefore, better detection rates of breast cancer in mammography are possible.
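A minimal sketch of the core idea, comparing two noisy acquisitions of the same scene in a multiscale domain and suppressing detail coefficients where they do not agree, is given below. The single-level Haar transform, the agreement test and the threshold are assumptions chosen for brevity and do not reproduce the authors' cross-correlation algorithm.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform: returns (approx, (horiz, vert, diag))."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] + img[1::2, 0::2] - img[0::2, 1::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] + img[1::2, 1::2] - img[0::2, 1::2] - img[1::2, 0::2]) / 4
    return a, (h, v, d)

def ihaar2(a, details):
    """Exact inverse of haar2."""
    h, v, d = details
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def denoise_pair(img1, img2, threshold=0.0):
    """Average the two detail bands, keeping a coefficient only where the two
    acquisitions agree (their product, a crude correlation measure, exceeds
    the threshold); otherwise treat it as noise and zero it."""
    a1, d1 = haar2(img1)
    a2, d2 = haar2(img2)
    fused = []
    for c1, c2 in zip(d1, d2):
        keep = (c1 * c2) > threshold
        fused.append(np.where(keep, 0.5 * (c1 + c2), 0.0))
    return ihaar2(0.5 * (a1 + a2), tuple(fused))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0   # simple structure
    noisy1 = truth + rng.normal(0, 0.3, truth.shape)
    noisy2 = truth + rng.normal(0, 0.3, truth.shape)
    cleaned = denoise_pair(noisy1, noisy2)
    print("noise std before:", round(float(np.std(noisy1 - truth)), 3),
          " after:", round(float(np.std(cleaned - truth)), 3))
```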

9.
10.
11.
Over recent years the pressures put upon design engineers to produce new, better and cheaper 'widgets' have steadily increased. The need for companies to remain competitive requires that products be designed better and often within shorter time scales. In some cases, laws have changed to place tighter restrictions on designs, such as those covering electronic components and systems (electromagnetic compatibility). What many designers do not realise is that, within a wide range of engineering and physics disciplines, effective computer tools exist to allow designs to be created and simulated without the lengthy and expensive need for prototyping.

12.
Past research on Computer Adaptive Testing (CAT) has focused almost exclusively on the use of binary items and minimizing the number of items to be administered. To address this situation, extensive computer simulations were performed using partial credit items with two, three, four, and five response categories. Other variables manipulated include the number of available items, the number of respondents used to calibrate the items, and various manipulations of respondents' true locations. Three item selection strategies were used, and the theoretically optimal Maximum Information method was compared to random item selection and Bayesian Maximum Falsification approaches. The Rasch partial credit model proved to be quite robust to various imperfections, and systematic distortions did occur mainly in the absence of sufficient numbers of items located near the trait or performance levels of interest. The findings further indicate that having small numbers of items is more problematic in practice than having small numbers of respondents to calibrate these items. Most importantly, increasing the number of response categories consistently improved CAT's efficiency as well as the general quality of the results. In fact, increasing the number of response categories proved to have a greater positive impact than did the choice of item selection method, as the Maximum Information approach performed only slightly better than the Maximum Falsification approach. Accordingly, issues related to the efficiency of item selection methods are far less important than is commonly suggested in the literature. However, being based on computer simulations only, the preceding presumes that actual respondents behave according to the Rasch model. CAT research could thus benefit from empirical studies aimed at determining whether, and if so, how, selection strategies impact performance.
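As a concrete illustration of the item selection step, the sketch below evaluates Rasch partial credit model category probabilities and picks the item with maximum Fisher information at the current ability estimate. The item bank and ability value are invented for the example; the code does not reproduce the simulation design of the study.

```python
import numpy as np

def pcm_probs(theta, thresholds):
    """Rasch partial credit model: probability of each response category
    0..m for an item with category thresholds delta_1..delta_m."""
    # cumulative sums of (theta - delta_j), with 0 for category 0
    steps = np.concatenate(([0.0], np.cumsum(theta - np.asarray(thresholds))))
    expo = np.exp(steps - steps.max())          # numerically stabilised softmax
    return expo / expo.sum()

def item_information(theta, thresholds):
    """Fisher information of a PCM item: variance of the category score."""
    p = pcm_probs(theta, thresholds)
    k = np.arange(p.size)
    return np.sum(k ** 2 * p) - np.sum(k * p) ** 2

# Invented 4-category items (3 thresholds each), purely for illustration.
item_bank = [(-1.0, 0.0, 1.0), (-0.5, 0.5, 1.5), (0.0, 1.0, 2.0)]

theta_hat = 0.8                                  # current ability estimate
info = [item_information(theta_hat, th) for th in item_bank]
best = int(np.argmax(info))                      # maximum information selection
print("item informations:", np.round(info, 3), "-> administer item", best)
```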

13.
A binary 50% mixture of soft spheres is studied via nonequilibrium molecular dynamics, and the equilibrium and nonequilibrium radial distribution functions for a nonconformal mixture with a mass ratio of 10 and a size ratio of about 2 are examined. This model system is related to the real methane/decane mixture, and it is shown that apparently anomalous properties of this mixture, especially the viscosity, could perhaps be understood in terms of the local or ambient mole fraction. In addition, the postulates of the van der Waals one-fluid conformal solution theory are discussed, and a mixing rule for the mass is derived.
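The van der Waals one-fluid approach referred to above maps a mixture onto a single effective pure fluid. A minimal sketch of the standard one-fluid mixing rules with Lorentz-Berthelot combining rules is shown below; the parameters are invented (size ratio of about 2, equimolar composition) and the mass mixing rule derived in the paper is not reproduced here.

```python
import numpy as np

# van der Waals one-fluid mixing rules for a binary mixture:
#   sigma_x^3         = sum_ij x_i x_j sigma_ij^3
#   eps_x * sigma_x^3 = sum_ij x_i x_j eps_ij sigma_ij^3
# Cross parameters from Lorentz-Berthelot combining rules. The numbers are
# illustrative (size ratio ~2, 50/50 composition), not fitted to methane/decane.

x = np.array([0.5, 0.5])                  # mole fractions
sigma = np.array([1.0, 2.0])              # size parameters (reduced units)
eps = np.array([1.0, 1.5])                # energy parameters (reduced units)

sigma_ij = 0.5 * (sigma[:, None] + sigma[None, :])       # arithmetic mean
eps_ij = np.sqrt(eps[:, None] * eps[None, :])            # geometric mean

w = x[:, None] * x[None, :]
sigma_x3 = np.sum(w * sigma_ij ** 3)
eps_x = np.sum(w * eps_ij * sigma_ij ** 3) / sigma_x3

print(f"one-fluid sigma = {sigma_x3 ** (1 / 3):.3f}, epsilon = {eps_x:.3f}")
```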

14.
A modification of the method of lines is proposed for the approximate solution of the system of partial differential equations describing the thermal dynamics of a multistage forced-circulation heat exchanger (HE). Translated from Inzhenerno-Fizicheskii Zhurnal, Vol. 21, No. 6, pp. 1053–1059, December, 1971.
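The sketch below shows the method of lines in its generic form: the spatial derivative of a simple one-dimensional convective heat balance is discretised, and the resulting system of ODEs is handed to a time integrator. The equation, boundary condition and parameter values are placeholders and do not represent the multistage heat-exchanger model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Method of lines for a single-pass plug-flow channel with wall heating:
#     dT/dt + u * dT/dx = k * (T_wall - T)
# Space is discretised with upwind differences; time integration is left to
# an ODE solver. All parameters are illustrative placeholders.

n, L = 50, 1.0
dx = L / n
u, k, T_wall, T_in = 0.5, 2.0, 100.0, 20.0

def rhs(t, T):
    # upwind approximation of dT/dx, with the inlet temperature as boundary
    T_upstream = np.concatenate(([T_in], T[:-1]))
    return -u * (T - T_upstream) / dx + k * (T_wall - T)

T0 = np.full(n, T_in)
sol = solve_ivp(rhs, (0.0, 5.0), T0, method="BDF",
                t_eval=np.linspace(0.0, 5.0, 6))

print("outlet temperature at t = 0..5 s:", np.round(sol.y[-1], 1))
```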

15.
16.
Digital filters are finding wide application in the signal processing aspects of such fields as engineering, geophysics and biosignal analysis. The design of a digital filter and its realization with near-minimal computer word length are two problems which can be solved by computer-aided design techniques. In this paper, only the automated design of the initial filter (recursive type) is considered. Both of the basic design methods, the direct and indirect approaches, have been analysed, and computer programs have been written to implement the various design techniques. The present programs are limited to the design of flat-loss type filters specified by their gain rather than their phase characteristics, although provision has been made to allow for other types of design in the indirect approach. Design examples are given, and the results analysed with a view to the suitability and limitations of the different design techniques.
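For the indirect route mentioned above (designing an analogue prototype from the gain specification and mapping it to the digital domain), a minimal modern equivalent using the bilinear transform is sketched below with SciPy. The flat-loss (Butterworth) specification values are arbitrary examples, and the original report of course predates this library.

```python
import numpy as np
from scipy import signal

# Indirect design of a flat-loss (Butterworth) recursive filter: an analogue
# prototype is sized from the gain (loss) specification and mapped to the
# z-domain with the bilinear transform. Specification values are examples.

fs = 1000.0                     # sampling frequency, Hz
f_pass, f_stop = 100.0, 200.0   # passband / stopband edges, Hz
rp, rs = 1.0, 40.0              # passband ripple / stopband loss, dB

order, wn = signal.buttord(f_pass, f_stop, rp, rs, fs=fs)   # minimum order
b, a = signal.butter(order, wn, btype="low", fs=fs)

w, h = signal.freqz(b, a, worN=512, fs=fs)
loss_db = -20.0 * np.log10(np.maximum(np.abs(h), 1e-12))
print(f"order = {order}, loss at {f_pass} Hz = "
      f"{np.interp(f_pass, w, loss_db):.2f} dB, "
      f"at {f_stop} Hz = {np.interp(f_stop, w, loss_db):.2f} dB")
```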

17.
18.
Plastic flow of near-surface rail material under contact loading is a feature of rail-wheel contact, and severe flow typically leads both to wear and to the initiation and development of small surface-breaking cracks. This paper presents results from a ratcheting-based computer simulation, which has been developed to allow the simultaneous investigation of wear, crack initiation and early crack propagation. To identify small crack-like flaws repeatably, image analysis is applied to the visual representation of the wearing surface generated by the model. This representation shows a good similarity to traditional micrographs taken from sections of worn surfaces. The model clearly reveals the interaction of wear with crack development, processes which are linked because wear truncates surface-breaking cracks, and can completely remove small surface-breaking cracks.
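A heavily simplified sketch of the ratcheting idea, accumulating plastic shear strain in near-surface elements on each load pass and wearing away elements that exceed a critical strain, is given below. The element grid, strain-increment law and failure criterion are assumptions chosen only to illustrate the concept, not the model of the paper.

```python
import numpy as np

# Toy ratcheting/wear simulation: a column of near-surface "elements"
# accumulates a depth-dependent plastic shear strain increment on each wheel
# pass; an element whose accumulated strain exceeds a critical value fails
# and is removed from the surface (wear). All numbers are illustrative.

rng = np.random.default_rng(0)
n_elements = 50                     # elements down from the surface
gamma = np.zeros(n_elements)        # accumulated shear strain per element
gamma_crit = 10.0                   # critical strain to failure (assumed)
worn_away = 0

for load_pass in range(2000):
    depth = np.arange(gamma.size)
    # strain increment decays with depth and varies a little pass to pass
    increment = 0.02 * np.exp(-depth / 10.0) * rng.uniform(0.5, 1.5, gamma.size)
    gamma += increment
    # surface elements that have exceeded the critical strain are worn off
    while gamma.size > 0 and gamma[0] >= gamma_crit:
        gamma = gamma[1:]
        worn_away += 1

print(f"elements worn away after 2000 passes: {worn_away}")
```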

19.
Radiation transfer is treated by the application of a narrow-band statistical model (NBSM) that takes emission and absorption gas spectral structures into account. A Monte Carlo method (MCM) using a net-exchange technique was developed to integrate the radiative-transfer equation in a nongray gas. The proposed procedure is based on a net-exchange formulation (NEF). This formulation provides an efficient way of systematically fulfilling the reciprocity principle, which avoids some of the major problems usually associated with the Monte Carlo method: the numerical efficiency becomes independent of the optical thickness, highly nonuniform grid sizes can be used with no increase in computation time, and configurations with small temperature differences can be treated with very good accuracy. It is shown that the radiative term is significant compared to the conductive term in just two specific regions of the emitting and absorbing gas: in the immediate vicinity of the wall and in the external part of the boundary layer. The exchange Monte Carlo method (EMCM) is described in detail for a one-dimensional slab.
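The net-exchange idea can be illustrated with a deliberately simplified gray-gas example: each sampled optical path carries the difference of the blackbody emissive powers of the emitting and absorbing cells, so the estimated exchange vanishes identically when the two cells are at the same temperature. The one-dimensional geometry, gray absorption coefficient and sampling scheme below are assumptions for illustration and not the narrow-band formulation of the paper.

```python
import numpy as np

# Schematic net-exchange (reciprocal) Monte Carlo in a 1-D gray slab.
# Each sampled path between a cell i and the cell j where the bundle is
# absorbed carries the *difference* of blackbody emissive powers, so the
# estimated exchange is exactly zero when T_i == T_j. Gray absorption,
# isotropic sampling and the normalisation are simplifications for clarity.

SIGMA = 5.670374419e-8                  # Stefan-Boltzmann constant
rng = np.random.default_rng(0)

n_cells, slab_width, kappa = 10, 1.0, 2.0           # geometry and gray kappa
dx = slab_width / n_cells
edges = np.linspace(0.0, slab_width, n_cells + 1)
T = np.linspace(600.0, 1200.0, n_cells)             # imposed temperature profile
B = SIGMA * T ** 4                                   # blackbody emissive power

net = np.zeros(n_cells)                              # net radiative gain per cell
n_bundles = 20000
for _ in range(n_bundles):
    i = rng.integers(n_cells)                        # emitting cell
    x = rng.uniform(edges[i], edges[i + 1])          # emission point
    mu = rng.uniform(-1.0, 1.0)                      # direction cosine
    s = rng.exponential(1.0 / kappa)                 # free path length
    x_abs = x + mu * s                               # absorption point
    if 0.0 <= x_abs <= slab_width:                   # absorbed inside the slab
        j = min(int(x_abs / dx), n_cells - 1)
        w = (B[j] - B[i]) / n_bundles                # net-exchange weight
        net[i] += w                                  # colder cell gains,
        net[j] -= w                                  # hotter cell loses

print("net radiative gain per cell (arbitrary normalisation):")
print(np.round(net, 1))
```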

20.
