1.
A total of fifteen laboratories participated in the CIE detector response intercomparison, which was designed to assess the level of agreement among participating laboratories in the absolute measurement (with respect to SI) of photodetector response in the visible spectral region. Most participants were either commercial laboratories or university laboratories, with the National Institute of Standards and Technology (NIST) serving as the host laboratory. Each laboratory determined the absolute response of each of two silicon photodiode radiometers which were designed for the intercomparison by NIST. Approximately two-thirds of the laboratories reported response values which agreed with the NIST values to within ±1.0% at the two wavelengths of 488 and 633 nm.

2.
The stereochemistry of catenanes, knotted molecules, and Borromean rings is discussed. An augmentation of the Cahn-Ingold-Prelog convention for designating absolute configuration is proposed. A convention is proposed for designating the absolute configuration of knotted molecules. A suggestion is made concerning the citing of the absolute configuration of molecularly dissymmetric diastereomers.

Proposals of rules for designating absolute configuration have been put forth by Cahn, Ingold, and Prelog1 and by Terentiev and Potapov.2 The rules of the latter workers are deliberately linked to nomenclature. Those of the former workers rely entirely on structure and have found wide acceptance among organic chemists. In their closely reasoned paper Cahn, Ingold, and Prelog properly claim to have covered by their rules all known types of dissymmetry deriving from tetracovalent and tricovalent atoms.

Work aiming toward the establishment of a major system for handling chemical information requires the ability precisely to designate chemical structures without recourse to structural formulas. A number of notation systems have been devised for representing chemical structures by linear arrays of symbols,3 although none of the systems has yet been perfected even for organic compounds.4 Since an adequate information handling system must accommodate not only all known structural types but also those structural types which have to date been merely speculated on, an examination of more esoteric stereochemistry was undertaken.
I now wish to propose an augmentation of the Cahn-Ingold-Prelog (C-I-P) rules and the addition of a new (specialized) set of rules to cover a wider range of molecularly dissymmetric structural types.

The molecularly dissymmetric structures for which further rules are necessary are of two types: (1) substituted catenanes,5 which are stereochemically related to substituted allenes and spiro compounds, and (2) knots,6, 7 compounds whose dissymmetry is due entirely to topology, independent of substitution. The C-I-P convention is inapplicable to knots and, before being applied to catenanes, requires a further convention for defining the “near groups”.

3.
A procedure is described for the preparation of sodium amalgam in the form of pellets.

In connection with the development of methods for the synthesis of radioactive carbohydrates,1 a procedure and apparatus were previously described [1, 2, 3]2 for the reduction of semimicro quantities of aldonic lactones to sugars. The method employs sodium amalgam together with a slightly soluble, acid salt (sodium binoxalate) as buffer.3 The amalgam is used in the form of pellets made by dropping the molten amalgam into a “shot tower” of oil. Because other workers have had difficulty in making these pellets, the procedure is now given in detail.

The amalgam is prepared in a 500-ml, round-bottomed, stainless-steel flask having a single neck, with a 24/40 standard-taper joint (outer),4 and a thermometer inlet. The joint is fitted with a stainless-steel stopper, which has an inlet tube bent at right angles to the stopper and covered with asbestos for convenience in handling. The flask is held in a sturdy, asbestos-covered clamp, which serves as a handle for the vessel during the heating step. An alundum (Soxhlet extraction) thimble 45 mm in diam, in the bottom of which six 1.5-mm holes have been drilled, is held in a second asbestos-covered clamp. The oil bath is a thick-walled, heat-resistant glass jar, 6 in. in diam and 18 in. high, containing paraffin oil to within 3 in. of the top (see fig. 1).

Figure 1. Apparatus used in the preparation of sodium amalgam pellets.

To prepare the amalgam, a weighed amount of mercury is placed in the flask, into which a continuous stream of dry nitrogen is passed by means of the inlet tube of the stopper. The required amount of sodium5 is weighed under paraffin oil, and then cut into pieces just small enough to be readily slipped through the neck of the flask.
Each piece is rinsed in a hydrocarbon solvent, such as heptane or toluene, quickly blotted dry, and dropped through the neck of the flask into the mercury; the stopper is immediately replaced. The sodium reacts quickly with the mercury and may be added fairly rapidly because of the atmosphere of nitrogen. After the addition of the sodium is completed, the flask is heated with a Meeker burner until the amalgam is entirely molten. (The presence of remaining solid particles in the amalgam may be detected by the sound of their impact on the walls of the flask when it is given a gentle, swirling motion.) While the amalgam is being prepared, the alundum thimble is heated over another Meeker burner by a second operator. The hot thimble is then so clamped that its bottom is 1 to 2 in. above the surface of the oil, and the molten amalgam is poured into the thimble from the thermometer inlet of the flask. The amalgam flows through the holes in the bottom of the thimble, drops through the oil, and collects at the bottom of the oil bath as small, rather flat pellets.

Optimal conditions for the production of smooth pellets must be determined by trial. If the flask and thimble have not been sufficiently heated, the amalgam may solidify in the thimble. If the holes in the thimble are too large, the product may be somewhat “thready.” However, once the optimal conditions have been established, the procedure may be repeated without difficulty.

The entire operation must be performed in an efficient hood. The amalgam is stored under paraffin oil in a wide-mouthed, screw-capped bottle (see fig. 2). Pellets are removed as needed, weighed under oil, and rinsed with an inert, volatile solvent immediately before use.

Figure 2. Sodium amalgam pellets.

4.
Additional experiments on the rates of thermal degradation of polytetrafluoroethylene in a vacuum confirm an earlier conclusion that a first-order rate law is involved in the degradation reaction.

In a study made by Madorsky, Hart, Straus, and Sedlak1 on the rates and activation energy of thermal degradation of polytetrafluoroethylene in a vacuum, two methods were employed: a gravimetric method, using a very sensitive tungsten spring balance in a vacuum system to measure the rate of loss of weight of the degrading sample, and a pressure method, using a multiplying manometer to measure the pressure of the C2F4 formed in the reaction. The material that was used was in the form of a tape 0.07 mm thick. The weight of the sample was about 7 mg in the gravimetric experiments and 5 to 306 mg in the pressure experiments.

The rates obtained by the gravimetric method are reproduced in figure 1, plotted as a function of percentage loss of weight of the sample for 480, 490, 500, and 510 °C. The initial rates were obtained by extrapolating the rate curves to the ordinate. The rate curves beyond the initial 5 to 18 percent loss of weight of the sample are straight lines, and when extended to the right they approach zero rate at 100 percent volatilization. The rates obtained by the pressure method were studied at 10 different temperatures ranging from 423.5 to 513.0 °C. Logarithms of the initial rates obtained by both methods are shown in figure 2, plotted against the inverse of absolute temperature.2 From the Arrhenius equation, the slope of the resulting straight line indicates an activation energy of 80.5 kcal/mole.
From the appearance of the curves in figure 1 it seemed logical to conclude that the reaction involved in the thermal degradation of polytetrafluoroethylene in a vacuum follows a first-order law.

Figure 1. Rate of thermal degradation of polytetrafluoroethylene by the weight method as a function of percentage volatilization.

Figure 2. Activation energy slope for thermal degradation of polytetrafluoroethylene. ●, weight method (see footnote 1); ○, pressure method (see footnote 1); ■, present work.

At a later date Wall and Michaelson3 studied the rate of thermal degradation of polytetrafluoroethylene at 460 °C in a stream of nitrogen. They used a gravimetric method, heating 1-g samples of a powdered material and weighing the residues at intervals. They state that below about 480 °C the reaction is zero order, whereas above 510 °C they concede it is first order.

In view of this result by Wall and Michaelson, it was deemed advisable to check further on the rate order involved in the thermal degradation of this material in a vacuum. Although experiments by the pressure method were carried out in our earlier work at temperatures below 480 °C, the extent of volatilization was at most only 6.4 percent. Degradation had not been carried far enough to determine accurately whether the percentage loss versus time plots were straight or curved lines, i.e., whether the indicated reaction is of zero or first order. Rate experiments were therefore carried out by the weight method at lower temperatures, namely at 460, 475, and 485 °C, and the results are shown in figure 3, where percentage loss of weight is plotted against time.
The curves are definitely not straight lines, as would have been the case if the reaction had followed a zero order.

Figure 3. Pyrolysis of polytetrafluoroethylene at low temperatures.

In our previous work (see footnote 1) the rates were obtained by plotting the slopes between two neighboring experimental points in the volatilization-time plots. In the present work the slopes were calculated from the curves shown in figure 3, and the resulting rate curves based on these calculations are shown in figure 4. The same type of rate curves was obtained as in the earlier work. Values obtained for the initial rates at these three temperatures fit nicely into the Arrhenius plot, as shown by the squares in figure 2.

Figure 4. Rates of thermal degradation of polytetrafluoroethylene at low temperatures.

The present work therefore confirms our earlier conclusion that the degradation of polytetrafluoroethylene in a vacuum follows a first-order rate law, in which the rate of volatilization, based on the sample, is directly proportional to the amount of residue.
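The distinction between zero- and first-order loss curves, and the Arrhenius extraction of the activation energy, can be sketched numerically. The rate constants and temperatures below are illustrative, not the paper's data.

```python
import math

def first_order_loss(k, t):
    """Percent weight loss at time t when dW/dt = -k*W (rate proportional to residue)."""
    return 100.0 * (1.0 - math.exp(-k * t))

def zero_order_loss(r, t):
    """Percent weight loss at time t when the rate r (%/min) is constant."""
    return min(100.0, r * t)

def arrhenius_energy(r1, T1, r2, T2, R=1.987e-3):
    """Activation energy in kcal/mole from initial rates at two absolute temperatures."""
    return R * math.log(r2 / r1) / (1.0 / T1 - 1.0 / T2)

# First-order percent-loss-vs-time curves bend over; zero-order ones are straight.
fo = [first_order_loss(0.05, t) for t in (0, 10, 20)]
zo = [zero_order_loss(2.0, t) for t in (0, 10, 20)]
```

For a first-order law the successive loss increments shrink (fo[1]-fo[0] > fo[2]-fo[1]), reproducing the curvature of figure 3, while zero-order increments are equal.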

5.
Infrared absorption spectra of CO in the region of the first overtone have been observed in dilute (approximately 1 to 10 parts in 1000) liquid solutions of oxygen, nitrogen, and argon, and in clear crystalline nitrogen and argon matrices. The overtone band was found at 4249.0, 4252.4, and 4252.0 cm−1, with half widths of 18.4, 17.8, and 13.7 cm−1, in liquid oxygen, nitrogen, and argon solutions at 82, 78, and 82 °K, respectively. The half width in liquid oxygen varied from 18.4 to 10.0 cm−1 in the temperature range 82 to 57 °K. The band position was the same but its width was smaller in the crystalline nitrogen matrix. Two bands were observed in the clear crystalline argon solid, at 4245 and 4256 cm−1. The solution results cannot be interpreted with the recent theory of Buckingham.

Infrared absorption spectra of carbon monoxide in the region of the first overtone have been observed in the liquid solvents oxygen, nitrogen, and argon. In addition, the spectra have been obtained in clear crystalline solutions of argon and of nitrogen near the triple points of these solvents. The purpose of these experiments was to determine the influence of temperature, phase changes, and solvents on the half width, position, and shape of the CO absorption band.

A Perkin-Elmer model 99 monochromator with a 2000 lines/cm grating blazed at 10° (1.7 μ in first order) was used in the first order with a spectral slit width of about 1 cm−1. An antireflection-coated germanium filter eliminated the higher orders from the 1000-W tungsten filament lamp used as the light source. The quartz absorption cell used, recently described by Bass and Broida [1], was modified slightly by recessing the windows further into the coolant tube. The resultant increased thermal contact between the refrigerant and the solution greatly simplified the growing of the clear crystalline matrices. The temperature of the refrigerant, liquid oxygen, was regulated by pumping on it with a small vacuum pump (capacity 14 liters/min).
The vapor pressure of the liquid oxygen refrigerant, measured with an aneroid-type gauge, provided an indication of the temperature. The direct measurement of the vapor pressure above the solution with a mercury manometer also provided an indication of the temperature. Solutions were prepared from gases which had been mixed in the ratios of 1 to 10 parts carbon monoxide to 1000 parts of the various solvents. The position, the half width, and the shape of the spectral band did not depend on the concentration in this range. The clear solid solutions were grown slowly from the liquids at or near the triple points of the solvents.

The measured frequencies and half widths of the 0–2 transition of CO in condensed oxygen, nitrogen, and argon are summarized in table 1 (see also [2]). There were no changes in the position or the shape of the band in liquid oxygen at temperatures from 57 to 82 °K. However, the half-band width varied from 10.0 to 18.4 cm−1 in this temperature range. In liquid argon the band is slightly asymmetric, with more absorption on the high-frequency side.

Table 1. The 0–2 transition of CO in condensed oxygen, nitrogen, and argon

Solvent  Phase   T (°K)   ν (cm−1)                   Δν½ (cm−1)
—        gas     300      4260.0                     —
O2       liq     57       4249.0 ±0.5                10.0 ±0.5
O2       liq     82       4249.0 ±0.5                18.4 ±0.5
N2       liq     78       4252.4 ±0.5                17.8 ±0.5
Ar       liq     82       4252.0 ±0.5                13.7 ±0.5
N2       solid   62       4252.0 ±0.5                12.3 ±0.5
Ar       solid   67       4245.0 ±1.0, 4256.0 ±1.0   25.0 ±2.0
Although the position of the band in clear crystalline solid nitrogen is not greatly different from that of the corresponding liquid solution, the half width is reduced by one-third and the shape is asymmetric and broader on the high-frequency side in the solid matrix. The absorption in the wings of the band is less than one would expect for a Lorentzian band shape. This observation is in apparent agreement with Wieder and Dows [3], who recently observed vibrational bands of solid C2H4 and C2D4 which had shapes between the Gaussian and Lorentzian forms. In clear crystalline argon, the band is split into two overlapping peaks, with the high-frequency peak about 50 percent more intense than the other peak.

Results for the band positions obtained in this study are in good agreement with the recently published results of Vu, Atwood, and Vodar [4]. The band contours shown by them appear quite similar to the ones observed in this study, but half widths were not listed, so that a further comparison of our results with theirs is not possible. These workers did not study the influence of temperature on the spectrum.

In an effort to find an explanation for the observations of the 0–2 band of CO in condensed phases, several theoretical models have been tried. Unfortunately, none of these theories easily accounts for the band shapes and shifts.

Ewing [5] has observed the CO fundamental vibration in the liquid phase, in nitrogen and argon solutions. The bands he observed were not only asymmetric but also broader than the 0–2 bands observed in this study. The carbon monoxide fundamental had half widths of 26 cm−1 and 18 cm−1 in liquid nitrogen and argon, respectively, at temperatures comparable to those in this study. Ewing ascribed the asymmetry and the increased absorption on the high-frequency side of the bands to a low barrier to rotation.
From the asymmetry he estimated the barrier to be 42 cm−1 in pure liquid carbon monoxide, while slightly lower and slightly higher barriers were estimated for carbon monoxide solutions in liquid nitrogen and argon, respectively. A comparison with the present results [6, 7] shows that the observed dependence is clearly a function of a higher power of the temperature.

If hindered rotation is responsible for the band width, then an increase in half-band width and asymmetry to the high-frequency side of the band is to be expected with a rise in temperature if the barrier is comparable to kT. If the barrier is much higher than kT, the band width is independent of temperature. Since the population of the J levels of a rotator is proportional to T^1/2, one would expect the width of the band to vary roughly as T^1/2 if free or hindered rotation is causing the observed breadth. The observed dependence of approximately T^3/2, coupled with the lack of asymmetry, seems to rule out this explanation for CO in oxygen.

It has recently been suggested by Rakov, in application to organic materials, that the width of bands could be represented by an exponential of the form

Δν½ = A exp(−E/RT)  (1)

where E is the potential barrier for reorientation of the molecules [8, 9]. Rakov has further indicated that if Brownian motion is responsible for the observed band widths, this E should be equivalent to the energy of viscous flow, Evis, which is defined by Glasstone, Laidler, and Eyring [10] through the relationship

η = B exp(Evis/RT)  (2)

where η is the viscosity of the liquid medium. Using the data in table 1, a value of E was obtained from eq (1). This value of E is about a factor of two smaller than the Evis calculated for liquid oxygen in this temperature range from the available data on the viscosity of liquid oxygen [11]. It appears therefore that this theory does not fit the phenomena observed in this study.

Recently Buckingham [12, 13, 14] presented a theory to account for solvent effects on vibrational transitions of diatomic molecules.
One of the unique predictions of this theory is that the (s−1) overtone of a diatomic molecule should be s times as broad as the fundamental. The half widths observed in this study of the first overtone of carbon monoxide are decidedly smaller than the widths of the fundamental in these same solvent systems observed by Ewing [5]. This indicates the failure of Buckingham’s theory in predicting band widths for the simple system carbon monoxide in nitrogen and argon solutions. The solvent shift (νvap − νsol’n) observed for the carbon monoxide harmonic in nitrogen is 7.6 cm−1, which is about 2.5 times the solvent shift of 3 cm−1 for the fundamental observed by Ewing. Buckingham’s theory, as well as the earlier theory of Kirkwood, Bauer, and Magat [15, 16], predicts that the solvent shift of the harmonic should be twice that of the fundamental.

In conclusion, the first overtone of carbon monoxide has been observed in condensed phases of oxygen, nitrogen, and argon. Both the shape and the half width are significantly changed in the transition from liquid solution to solid solution, while the band position is not appreciably altered in the phase change. (Changes were not observed for the methane-argon system [17].) The recent theory of Buckingham, as well as the earlier theory ascribed to Kirkwood, Bauer, and Magat, has been found not to apply to these systems. No explanation is apparent for the two overlapping bands observed for carbon monoxide in the clear crystalline argon solid. The explanation of Vu et al. [4] implies that a combination band involving the fundamental band and a lattice mode is more intense than the respective fundamental. This explanation is not consistent with the observation of one band for the vibration of methane in a clear crystalline argon solid [17]. The variation of the half width of the 0–2 band of CO in liquid oxygen in the temperature range of 57 to 82 °K cannot be readily explained with existing theories.
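The temperature dependence quoted for the liquid-oxygen half width can be checked with the two tabulated values (10.0 cm−1 at 57 °K, 18.4 cm−1 at 82 °K). A sketch using only these two points:

```python
import math

w1, T1 = 10.0, 57.0   # half width (cm-1) and temperature (K), from table 1
w2, T2 = 18.4, 82.0

# Effective power-law exponent n in width ~ T^n
n = math.log(w2 / w1) / math.log(T2 / T1)

# Barrier E (cal/mole) if the width followed Rakov's form, width = A*exp(-E/RT)
R = 1.987
E = R * math.log(w2 / w1) / (1.0 / T1 - 1.0 / T2)
```

The exponent n comes out near 3/2, consistent with the approximately T^3/2 dependence cited in the text; the two-point E is illustrative arithmetic only, not the paper's fitted value.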

6.
Until recently it has been impossible to determine accurately the roots of polynomials of high degree, even for polynomials derived from the Z transform of time series where the dynamic range of the coefficients is generally less than 100 dB. In a companion paper, two new programs for solving such polynomials were discussed and applied to signature analysis of one-sided time series [1]. We present here another technique, that of root projection (RP), together with a Gram-Schmidt method for implementing it on vectors of large dimension. This technique utilizes the roots of the Z transform of a one-sided time series to construct a weighted least squares modification of the time series whose Z transform has an appropriately modified root distribution. Such a modification can be employed in a manner which is very useful for filtering and deconvolution applications [2]. Examples given here include the use of boundary root projection for front-end noise reduction and a generalization of Prony’s method.
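A minimal numerical sketch of the root-projection idea, with NumPy standing in for the paper's programs: factor the Z transform of a short one-sided series, reflect any root outside the unit circle to its conjugate-reciprocal position, and rebuild the coefficients. The three-term series is hypothetical, and the paper's weighted least squares and Gram-Schmidt machinery for vectors of large dimension is not reproduced.

```python
import numpy as np

x = np.array([1.0, -2.5, 1.0])   # one-sided series; Z transform 1 - 2.5 z^-1 + z^-2
r = np.roots(x)                   # polynomial roots: 2.0 and 0.5

# Project: reflect roots outside the unit circle to 1/conj(root)
projected = np.where(np.abs(r) > 1.0, 1.0 / np.conj(r), r)

# Modified series whose Z transform has the projected root distribution
y = np.real(np.poly(projected)) * x[0]
```

The modified series y has all of its Z-transform roots on or inside the unit circle, which is the kind of boundary redistribution used for front-end noise reduction.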

7.
Using a diamond pressure cell and a polarizing microscope, visual observations were made on the transformations of silver and cuprous halides at calculated pressures up to 125 kilobars. A new birefringent phase was observed in silver iodide at 2400 bars. Four phases were observed in CuI and CuBr, while CuCl appeared to have only three.

Using a diamond pressure cell previously described [1]1 and a polarizing microscope, visual observations were made on the transformations of silver and cuprous halides at calculated pressures up to 125 kilobars. The object of this investigation was to determine the nature and optical characteristics of new phases resulting from polymorphic changes that may occur in any one of these halides. It has been previously demonstrated that pressures in the diamond cell are greatest at the cell center and least at the edges. This gradient serves a very useful purpose in enabling one to observe several polymorphic phases occurring in the same field of view at the same time, the denser phases always occurring nearest the cell center. Phase changes occurring as a result of applied pressure were detected by observing changes in indices of refraction between adjoining phases, using the well-known Becke line movement technique [2]. Under the microscope a thin white pencil line (Becke line) is observed moving towards the denser phase as the viewing tube is raised; the movement is reversed when the tube is lowered.

Phase changes were also observed by detecting birefringence changes between crossed nicols and by noting changes in absorption in both white and monochromatic light. The various optical effects were observed under the microscope at magnifications of 160 and 400 diameters. Pressure values as used in this paper will be confined to the bar,2 which is defined as 10^6 dynes/cm^2.

Accurate pressure measurements are difficult to obtain in the diamond cell, since friction and flow shear generate pressure gradients across the anvil faces.
These gradients on a first compression may be very large especially when the sample is in a powdered form. An explanation for this is that, on the first compression, the movement of material is large compared with the movement for subsequent pressure cyclings. When the cell is newly charged with powdered material, there is a large volume of unoccupied interstitial space. On the first application of pressure, material is squeezed into these spaces and at the same time flows towards the cell edge. It is this movement at the beginning of compression that produces the greatest pressure at the center of the anvil. As compression is increased extrusion along the cell edge is prevented both by the internal friction of the material and the friction between it and the diamond surfaces. On the second and subsequent compressions the internal movement is relatively small, since the material has been previously compacted. Thus the pressure is distributed over a larger area and the buildup of very high pressure at the center is reduced. If pressures are calculated for the cell using the method in which the force is divided by cross-sectional area, one will obtain large errors, especially on the first compression of the material. The explanation for this is that the method assumes that pressure is equally distributed over the anvil surfaces whereas in practice gradients always exist. This point can be illustrated as follows: The maximum pressure calculated under this assumption for one of the diamond cells, was approximately 70 kilobars. A transition that occurs in AgI at 115 kilobars [4] was observed and photographed at a pressure of 60 kilobars calculated in this manner. These calculations were made on the first compression of the sample. On cycling the pressure, the transition did not appear at the maximum pressure of the cell, thus indicating the reduction of the highest pressure in the sample. 
By cycling the pressure in the cell, pressure measurement errors have been reduced to values of 5 percent or less in the range below 50 kilobars and to 10 percent in the higher pressure range.
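The force-over-area calculation criticized above is simple to state. A sketch with hypothetical numbers, chosen only to land near the roughly 70-kilobar figure mentioned:

```python
import math

def nominal_pressure_kbar(force_n, anvil_diam_mm):
    """Nominal pressure F/A in kilobars, assuming the force is spread evenly
    over the anvil face. 1 bar = 1e6 dynes/cm^2 = 1e5 Pa, so 1 kilobar = 1e8 Pa."""
    area_m2 = math.pi * (anvil_diam_mm * 1e-3 / 2.0) ** 2
    return (force_n / area_m2) / 1e8

# Hypothetical load and face size: 5.5 kN on a 1.0-mm anvil face gives ~70 kbar nominal.
p = nominal_pressure_kbar(5500.0, 1.0)
```

As the text explains, the even-distribution assumption fails in practice, so the true center pressure on a first compression can greatly exceed this nominal value.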

8.
The International Practical Temperature Scale of 1948 is a text revision of the International Temperature Scale of 1948, the numerical values of temperatures remaining the same. The adjective “Practical” was added to the name by the International Committee on Weights and Measures. The scale continues to be based upon six fixed and reproducible equilibrium temperatures to which values have been assigned, and upon the same interpolation formulas relating temperatures to the indications of specified measuring instruments. Some changes have been made in the text to make the scale more reproducible than its predecessor. The triple point of water, with the value 0.01 °C, replaces the former ice point as a defining fixed point of the scale. It is also recommended that the zinc point, with the value 419.505 °C, be used instead of the sulfur point. The recommendations include new information that has become available since 1948.

An internationally accepted scale on which temperatures can be measured conveniently and accurately is necessary for science and industry. As early as 1911 the directors of the national laboratories of Germany, Great Britain, and the United States agreed to undertake the unification of the temperature scales in use in their respective countries. A practical scale, named the International Temperature Scale, was finally agreed upon, was recommended to the Seventh General Conference on Weights and Measures by its International Committee on Weights and Measures, and was adopted in 1927.1

The General Conference on Weights and Measures is the official international body now representing 36 nations that subscribe to the Treaty of the Meter. The General Conference normally meets every six years, and at those times may adopt recommendations submitted by the International Committee. The International Committee is the executive body elected by the General Conference.
It consists of 18 scientists, only one from any one nation, and it normally meets every two years. The International Committee now has six advisory committees of specialists, most of whom represent large national laboratories. The Advisory Committee on Thermometry was authorized in 1933 and first met in 1939.

In 1948 a revision of the International Temperature Scale was prepared by the Advisory Committee and proposed to the International Committee. The International Committee recommended this revision to the Ninth General Conference, which adopted it.2 At this time the General Conference also adopted the designation of degree Celsius in place of degree Centigrade or Centesimal.3 The revised scale was designed to conform as nearly as practicable to the thermodynamic scale as then known, while incorporating certain refinements, based on experience, to make the scale more uniform and reproducible than its predecessor. In the revision there were only three changes which affected values of temperatures on the scale. One was to increase the value assigned to the silver point by 0.3 degree, merely to make the scale more uniform. Another was to specify Planck’s radiation formula instead of Wien’s formula so the scale would be consistent with the thermodynamic scale above the gold point. The third was to increase the value for the second radiation constant to bring it nearer to the value derived from atomic constants.

In 1954 the Advisory Committee proposed a resolution redefining the Kelvin thermodynamic scale by assigning a value to the triple point of water.
This kind of definition was what Kelvin, in 1854, had said “must be adopted ultimately.” This resolution was recommended by the International Committee and adopted by the Tenth General Conference.4 As soon as this resolution had been adopted, it was pointed out that it would be necessary to revise the introduction of the text of the International Temperature Scale of 1948 to conform with the action just taken.

In preparing a tentative proposal for a new text of the introduction, it soon became evident that the other three parts of the text would also profit by a revision. For example, the triple point of water could now be made one of the defining fixed points of the scale and thus become the one defining fixed point common to both the international and the Kelvin scales. The Recommendations could include new information that had become available since 1948. At the higher temperatures some new determinations of differences between the international and thermodynamic scales could be included. The values reported for these differences, however, were still not certain enough to warrant a change of the scale itself. The new text, therefore, does not change the value of any temperature on the 1948 scale by as much as the experimental error of measurement.

In 1958 the tentative proposal was discussed in detail at sessions of the Advisory Committee in June, and many suggested changes were agreed upon. It was proposed to the International Committee in October. Minor corrections were made during the next two years, and in 1960 the International Committee gave the scale its new name. The International Committee recommended this text revision to the Eleventh General Conference, and it was adopted in October 1960.

A translation of the official text5 follows.

9.
Twenty-nine samples of high-purity nickel metals, reagent salts, and minerals, collected from worldwide sources, have been examined by high-precision isotope ratio mass spectrometry for their nickel isotopic composition. These materials were compared directly with SRM 986, the certified isotopic standard for nickel, using identical measurement techniques and the same instrumentation. This survey shows no statistically significant variations among the samples investigated, indicating that the certified atomic weight and associated uncertainty for SRM 986 are applicable to terrestrial nickel samples. The atomic weight calculated for SRM 986 is 58.69335 ±0.00015 [2]. The currently recommended IUPAC value for terrestrial nickel is 58.69 ±0.01.
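The atomic weight quoted is an abundance-weighted mean of the isotopic masses. As a sketch, using illustrative modern reference abundances and masses for nickel (not the SRM 986 certificate data):

```python
# Nickel isotopic masses (u) and fractional abundances; illustrative reference values.
mass = {58: 57.935342, 60: 59.930786, 61: 60.931056, 62: 61.928345, 64: 63.927967}
abundance = {58: 0.68077, 60: 0.26223, 61: 0.01140, 62: 0.03635, 64: 0.00926}

# Atomic weight = sum over isotopes of (abundance * isotopic mass)
atomic_weight = sum(abundance[a] * mass[a] for a in mass)
```

With these inputs the weighted mean lands close to the 58.69 figure, illustrating how small shifts in measured abundances propagate into the stated uncertainty.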

10.
Equations are developed for the plane-wave particle velocity produced in solid-against-liquid collisions. An explicit expression for the dimensionless coefficient α that appears in these equations is deduced.

Collisions between liquid drops and the planar surfaces of solids have become important in the present era of high-speed flight. Except for the pressure that results when a drop of incompressible liquid collides with the planar surface of an unyielding solid [1],1 exact hydrodynamic treatments of the various aspects of this type of collision have not been developed. Plane-wave theory has been used in several approximate treatments [2, 3, 4]. One of the unknowns encountered in the use of plane-wave theory for solid-against-liquid collisions was the particle velocity in the compressed zones.

During collision between a solid rod A having flat ends and moving with velocity V in the (+z)-direction of a stationary coordinate system (fig. 1) and a similar liquid rod B that is at rest, there is a radial flow of liquid at the impacted end of rod B. In order that the rods remain in contact while the compressional waves initiated by the collision move through them, the interface velocity must obey the inequality V − v′ > v, where v′, v are the particle velocities in the compressed zones.

Figure 1. Collision between a plate moving at velocity V and a liquid drop at rest, idealized as collision between a solid rod A and a liquid rod B.

We can then write α(V − v′) = v, where α is a dimensionless coefficient having a value less than one, so that

v + αv′ = αV.  (1)

Using the relation that exists between stress and particle velocity for plane waves, the equality of stresses at the surfaces of contact is given by

zv = z′v′,  (2)

where z and z′ are the acoustic impedances (product of sound speed and density) of the liquid and the solid, respectively.
From eqs (1) and (2), the particle velocities v and v′ are found to be

v = αz′V/(z′ + αz)  (3)

v′ = αzV/(z′ + αz),  (4)

and the plane-wave stress σ is given by

σ = σ′ = αzz′V/(z′ + αz).  (5)

The quantity that must be determined to make these equations useful is the coefficient α.

One of the approximate treatments in which plane-wave theory was used for solid-against-liquid collisions [3] provides a means of deducing an explicit expression for the coefficient α. In this treatment the complicated situation of collision between a moving target plate and a relatively stationary liquid drop was idealized as the simple case of the collision of two rods with flat ends. If a plate is fired against a drop (fig. 1), a core of material extending through the plate under the contact area is slowed down with respect to the remainder of the plate, and a similar core of material through the drop is set in motion. The cores were regarded as true cylinders free to move in the z-direction (fig. 1) but restrained laterally. The compressional waves that move through the cylinders were regarded as plane waves.

With use of this simple model, an equation was developed that gives pit depth δ′ as a function of impingement velocity V for collisions of metal target plates with liquid drops [3]. For impingement velocities at which elastic recovery of the plate is complete, the pit depth was taken to be the product of a numerical constant, the particle velocity given to the cylindrical core of material under the collision area, and the time during which that particle velocity exists. The particle velocity was taken to be zV/(z + z′), which is the plane-wave particle velocity for solid-against-solid collisions. The time during which the particle velocity exists was taken to be 2d/c, where d is the diameter of the drop and c is the sound speed of the liquid of which it is composed.
Therefore, δ′ = (constant)(d/c)[zV/(z + z′)].

The pit-depth equation that was developed was applied first to collisions of mercury drops and waterdrops with target plates of copper, 1100–O aluminum, 2024–O aluminum, steel, and lead. The constant was found empirically to be 7.2. The equation was then found to apply, without change of the constant, to collisions between metal target plates and soft ductile metal spheres that flowed during and as a result of the collision.

The same equation was later applied [4] to collisions of steel spheres against target plates of 1100–O aluminum, 2024–O aluminum, and copper. It was found empirically that if the target plate was struck by a rigid hardened steel sphere that did not flow as a result of the collision, the constant was 17.5.

The constants found for the pit-depth equation for the case in which a target plate collides with a liquid drop or soft ductile metal sphere, and for the case in which it collides with a rigid hardened steel sphere, provide a means of obtaining an explicit expression for the coefficient α. The two cases differ only in the particle velocity given to the core of material through the target plate. The particle velocity v′ for solid-against-solid collisions was used in each case. The particle velocity v′ for solid-against-liquid collisions should have been used for the case in which the target plate collided with a liquid drop or with a soft deforming metal sphere that would flow as a result of the collision.

Because it is the particle velocity given to the core of target material under the collision area that is different, and because the constant 7.2 is 0.41 of the constant 17.5, it follows that

αzV/(z′ + αz) = 0.41 zV/(z′ + z),

from which

α = 0.41/[1 + (0.59 z/z′)].  (6)

Values of α calculated with use of eq (6) for collisions of waterdrops and mercury drops with target plates of aluminum, copper, lead, and glass are given in table 1.
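As a quick numerical sketch of eqs (1) through (5), the snippet below evaluates the particle velocities and the plane-wave stress for a hypothetical water-on-aluminum impact and checks them against eqs (1) and (2). The impedance values are round illustrative numbers (ρc in kg m⁻² s⁻¹), not data from the paper.

```python
# Plane-wave particle velocities for a solid-against-liquid collision,
# following eqs (3)-(5).  Impedances below are illustrative round numbers.

def collision(V, z_liquid, z_solid, alpha):
    """Return (v, v_prime, sigma) from eqs (3)-(5).

    z_liquid -- acoustic impedance z of the liquid rod B
    z_solid  -- acoustic impedance z' of the solid rod A
    alpha    -- dimensionless coefficient, 0 < alpha < 1
    """
    denom = z_solid + alpha * z_liquid
    v = alpha * z_solid * V / denom                  # eq (3): liquid particle velocity
    v_prime = alpha * z_liquid * V / denom           # eq (4): solid particle velocity
    sigma = alpha * z_liquid * z_solid * V / denom   # eq (5): plane-wave stress
    return v, v_prime, sigma

# Water against aluminum at V = 100 (arbitrary units), alpha = 0.39:
v, v_prime, sigma = collision(100.0, 1.5e6, 1.7e7, 0.39)

# Consistency checks against eqs (1) and (2):
assert abs(0.39 * (100.0 - v_prime) - v) < 1e-6      # alpha*(V - v') = v
assert abs(1.5e6 * v - 1.7e7 * v_prime) < 1e-4       # z*v = z'*v'
```

The two assertions confirm that eqs (3) and (4) are the simultaneous solution of eqs (1) and (2), which is the algebraic content of this passage.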
It was found experimentally [2] that 0.00118 sec was required for a glass plate to move through a 0.57-cm-diam waterdrop when the relative impingement velocity was 820 cm/sec (26.9 ft/sec). The velocity at which the plate moved through the drop was 484 cm/sec. It was assumed that no particle velocity was given to the cylinder of glass through the plate under the collision area. Then the velocity at which the plate moved through the drop was (1 − α)V. To this degree of approximation, (1 − α)820 = 484 and α = 0.4.

Table 1. Some values of the coefficient α

  Drop      Aluminum   Copper   Lead   Glass
  Water       0.39      0.40    0.40    0.38
  Mercury      .24       .32     .28     .22

In consideration of this independent evaluation of the coefficient α for waterdrop collisions, it appears, in retrospect, that had the proper particle velocity been used in [3], the numerical constant found empirically for the equation to calculate the depth of pits produced by collision of a metal plate with liquid drops would have been the same as that found with rigid steel spheres, namely, 17.5 [4].
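Eq (6) can be evaluated directly. With handbook-style round impedances (assumed here, not taken from the paper), it reproduces the water-on-aluminum entry of table 1:

```python
# Coefficient alpha from eq (6): alpha = 0.41 / (1 + 0.59*z/z'),
# where z is the drop (liquid) impedance and z' the target impedance.
# Impedances are rho*c with approximate handbook values; the table 1
# entries were computed by the author, so small differences can occur.

def alpha(z_drop, z_target):
    return 0.41 / (1.0 + 0.59 * z_drop / z_target)

z_water = 1.0e3 * 1.48e3       # ~1.48e6 kg m^-2 s^-1
z_aluminum = 2.7e3 * 6.3e3     # ~1.70e7 kg m^-2 s^-1

a = round(alpha(z_water, z_aluminum), 2)  # close to the 0.39 listed for water on aluminum
```

Note the limiting behavior built into eq (6): for a very stiff target (z′ ≫ z) α approaches 0.41, while a relatively soft target reduces α, which is the trend visible across the mercury row of table 1.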

11.
The purpose of this paper is to review in vivo NMR experiments [1, 2] on a transplantable tumor in mice and to discuss the feasibility of using noninvasive NMR for cancer detection in humans.

12.
The generally used practical scale of temperatures between 1° and 5.2° K is the He4 vapor pressure scale based on an accepted vapor pressure equation or table. In Sèvres (near Paris), October 1958, the International Committee on Weights and Measures recommended for international use the “1958 He4 Scale” based on a vapor pressure table arrived at through international cooperation and agreement. This table resulted from a consideration of all reliable He4 vapor pressure data obtained using gas thermometers, and paramagnetic susceptibility and carbon resistor thermometers. The theoretical vapor pressure equation from statistical thermodynamics was used with thermodynamic data on liquid He4 and the vapor equation of state to insure satisfactory agreement of the vapor pressure table with reliable thermodynamic data.

The International Committee on Weights and Measures, at a meeting in Sèvres (near Paris), France, September 29 to October 3, 1958, approved the “1958 He4 Vapor Pressure Scale of Temperatures” as an international standard for thermometry from 1° to 5.2° K. This was the culmination of several years of intensive research and cooperation on the helium vapor pressure scale at the Kamerlingh Onnes Laboratory in Leiden, Holland, and the U.S. Naval Research Laboratory in Washington.

The vapor pressure of liquid He4 has for a long time been used as a standard for thermometry between 1° and 5.2° K. The first measurements of thermodynamic temperatures in the liquid He4 range were made with constant volume gas thermometers filled with He4. Simultaneous measurements of the vapor pressure of liquid helium in temperature equilibrium with the gas thermometer established a vapor pressure-temperature relation which then was used as the basis for determining thermodynamic temperatures from vapor pressure measurements.
With these vapor pressure-gas thermometer measurements there were measurements of He4 vapor pressures made simultaneously with measurements of the He4 isotherms, from which temperatures were obtained by extrapolating the isotherms to zero density (N/V → 0) in accordance with the virial equation of state:

pV/N = RT[1 + B(N/V) + C(N/V)^2 + …].  (1)

After the latent and specific heats of liquid He4 had been measured, the experimental vapor pressure-temperature relation was improved through the use of the theoretical vapor pressure (P) equation:

ln P = i0 − L0/(RT) + (5/2) ln T − (1/RT) ∫0→T Sl dT + (1/RT) ∫0→P Vl dP + ε  (2)

where

i0 ≡ ln[(2πm)^(3/2) k^(5/2)/h^3]  (3)

and

ε ≡ ln(PV/NRT) − 2B(N/V) − (3/2)C(N/V)^2.  (4)

L0 is the heat of vaporization of liquid He4 at 0° K, Sl and Vl are the molar entropy and volume of liquid He4, m is the mass of a He4 atom, B and C are the virial coefficients in eq (1), and the other symbols have their usual meaning. Both theoretically calculated and directly measured vapor pressures were considered in arriving at the 1958 He4 Temperature Scale.

Equation (2) presupposes that the thermodynamic properties entering the equation have been measured on the thermodynamic scale; otherwise the use of this equation for the calculation of P is not valid. In practice, however, these properties are measured on an empirical scale that only approximates the thermodynamic scale. In general this empirical scale has been a He4 vapor pressure scale based on gas thermometer measurements.

As T is lowered, the fourth, fifth, and sixth terms in eq (2) become smaller and less important relative to the first three terms. At 1.5° K, the inclusion or exclusion of the sum of the fourth, fifth, and sixth terms in eq (2) affects the temperature calculated from a given value of P by only 0.0005 deg.
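Below about 1.5° K, where the last three terms of eq (2) are negligible, the relation reduces to ln P = i0 − L0/(RT) + (5/2) ln T, which is easily inverted for T by Newton iteration. The sketch below uses placeholder values for i0 and L0/R chosen only to illustrate the numerics; they are not the constants of the 1958 Scale.

```python
import math

# Invert the truncated form of eq (2), ln P = i0 - L0/(R*T) + (5/2) ln T,
# for T given P by Newton's method.  i0 and L0_over_R are placeholder
# values for illustration, NOT the 1958 Scale constants.

def temperature_from_pressure(P, i0=5.0, L0_over_R=7.2, T_guess=1.2):
    """Solve i0 - L0_over_R/T + 2.5*ln T - ln P = 0 for T."""
    T = T_guess
    for _ in range(50):
        f = i0 - L0_over_R / T + 2.5 * math.log(T) - math.log(P)
        df = L0_over_R / T**2 + 2.5 / T   # derivative with respect to T
        T -= f / df
    return T

# Round trip: compute P at T = 1.4, then recover T from P.
T_true = 1.4
P = math.exp(5.0 - 7.2 / T_true + 2.5 * math.log(T_true))
assert abs(temperature_from_pressure(P) - T_true) < 1e-9
```

Since f(T) is increasing and concave in this range, Newton iteration from below converges monotonically; this mirrors the text's point that at low temperatures the scale is effectively fixed by the single constant L0.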
It may be said, then, that below 1.5° K the vapor pressure of He4 is in effect really determined, within the present accuracy of the vapor pressure measurement, by a single empirical constant, the heat of vaporization of liquid He4 at 0° K. At present, L0 for He4 is normally calculated from vapor pressure data obtained with a gas thermometer. The magnitude of the last three terms in eq (2) increases rather rapidly with rising T, and above the λ-point (2.172° K) the accuracy of the evaluation of these terms is a very important consideration.

In Amsterdam in 1948, on the occasion of a General Assembly of the International Union of Physics, a small group of low temperature physicists, meeting informally, agreed to use and recommend for temperature measurements between 1° and 5.2° K a table of vapor pressures of He4, then in use in Leiden, which came to be known as the “1948 Scale” [1]. This scale has sometimes been referred to as the “1949 Scale.” From 1° to 1.6° K, the “1948 Scale” was based on vapor pressures calculated by Bleaney and Simon [2] using eq (2). From 1.6° to 5.2° K, the scale was based on measured vapor pressures and temperatures determined with gas thermometers. From 1.6° to 4.2° K, it was based primarily on the vapor pressure measurements of Schmidt and Keesom [3].

Even in 1948, when the “1948 Scale” was agreed to, there was evidence in the measurements and calculations of Kistemaker [4] that the “1948 Scale” deviated significantly from the thermodynamic scale. However, it was thought at the time that, on general principles, indicated changes in an existing scale should be made only after these changes had been confirmed. With improvements in the precision and accuracy of physical measurements at low temperatures, irregularities appeared in the temperature variation of physical properties between 1° and 5° K that were in the main reproducible in different substances and properties and were, therefore, attributable to errors in the “1948 Scale” [5].
Stimulated by these results, which corroborated Kistemaker’s work, the investigations of the He4 vapor pressure scale were undertaken that culminated in the “1958 He4 Scale.”

Paramagnetic susceptibility and carbon resistor thermometers were later employed in investigations of the He4 vapor pressure-temperature relation [6]. These thermometers were used for the interpolation of temperatures between calibration points, using an assumed relation connecting temperature and paramagnetic susceptibility or carbon resistance for the calculation of the temperatures. For suitably chosen paramagnetic salts, the Curie-Weiss law was assumed to hold:

χ = C/(T + Δ)  (5)

where χ is the magnetic susceptibility and C and Δ are empirical constants. Measurements at two temperatures would suffice to determine these two empirical constants if the measurement were really of χ, or of a quantity directly proportional to χ. However, a calibration of the paramagnetic thermometer at a third temperature is necessary because the arbitrariness in the size and arrangement of the paramagnetic salt sample, and of the induction coils that surround the salt sample for the susceptibility measurement, makes the measured quantity a linear function of χ. Interpolation equations for carbon resistor thermometers are not as simple as eq (5) and do not have a theoretical basis. Hence, vapor pressure data obtained with carbon resistor thermometers are of more limited usefulness for the determination of the He4 vapor pressure-temperature relation. Clement used carbon thermometer data to examine the derivative d(ln P)/d(1/T) [7].

Important use has been made of He4 vapor pressure measurements made with magnetic susceptibility and carbon resistor thermometers in arriving at the “1958 He4 Scale.” These vapor pressure measurements were considered along with those made with gas thermometers and vapor pressures calculated using eq (2).
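The three-point calibration argument can be made concrete. If the measured signal is linear in χ, then m = a + B/(T + Δ) with B absorbing the Curie constant; rearranging as mT = aT + (aΔ + B) − mΔ makes the system linear in the three unknowns (a, aΔ + B, Δ), so three calibration points determine them exactly. A sketch with invented constants:

```python
# Three-point calibration of a paramagnetic thermometer whose reading is
# m = a + B/(T + Delta)  (signal linear in chi, chi = C/(T + Delta), eq 5).
# The rearrangement m*T = a*T + (a*Delta + B) - m*Delta is LINEAR in the
# unknowns (a, a*Delta + B, Delta).  All numbers are invented.

def calibrate(points):
    """points: three (T, m) pairs -> (a, B, Delta), by Cramer's rule."""
    rows = [(T, 1.0, -m) for T, m in points]
    rhs = [m * T for T, m in points]

    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

    D = det3(rows)
    sols = []
    for col in range(3):
        Mc = [list(r) for r in rows]
        for i in range(3):
            Mc[i][col] = rhs[i]
        sols.append(det3(Mc) / D)
    a, v, Delta = sols
    return a, v - a * Delta, Delta      # B = v - a*Delta

# Synthesize readings from known constants, then recover them.
a0, B0, D0 = 0.10, 2.5, 0.03
pts = [(T, a0 + B0 / (T + D0)) for T in (1.2, 2.1, 4.2)]
a, B, Delta = calibrate(pts)
assert abs(a - a0) < 1e-6 and abs(B - B0) < 1e-6 and abs(Delta - D0) < 1e-6
```

This also shows why two points are not enough once the offset a is present: with only two equations, the 3×3 system above is underdetermined.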
Temperature measurements with magnetic and carbon resistor thermometers are much simpler to make than measurements with gas thermometers, and hence vapor pressure data obtained with magnetic and carbon resistor thermometers are more numerous. Also, the measurements made with these secondary thermometers are more precise (to be distinguished from accurate), which makes them especially useful for interpolation between the gas thermometer data.

There are, accordingly, three practical methods for determining the He4 vapor pressure-temperature relation: (1) by use of the direct vapor pressure measurements made with gas thermometers, (2) through the use of eq (2) with some vapor pressure-gas thermometer data, and (3) through the use of vapor pressure measurements with secondary thermometers which have been calibrated using some gas thermometer data. If all the pertinent experimental data were accurate and all temperatures were on the thermodynamic scale, these three methods would yield results in good agreement with each other, and any one might be relied upon for the construction of the He4 vapor pressure-temperature table defining the scale. Because of experimental errors, however, the vapor pressures obtained by the different methods differ when carried to the limit of the sensitivity of the measurements. For He4 between 1° and 4.5° K, different choices of the methods and different selections of the experimental data used, weighting factors, and corrections to the published data yield scales all within about 4 millidegrees of each other. The primary evidence for this is that 4 millidegrees is the maximum difference between the L55 Scale [8], obtained by method (2), and the 55E Scale [9], obtained by method (3).
This, then, is a measure of the range (total spread) of uncertainty at present in the He4 vapor pressure scale of temperatures between 1° and 4.5° K.

All published He4 vapor pressure measurements and the thermodynamic data needed for eq (2) were independently studied and correlated by H. van Dijk and M. Durieux at the Kamerlingh Onnes Laboratory in Leiden [8] and by J. R. Clement and J. K. Logan at the U.S. Naval Research Laboratory in Washington [9]. As far as possible, the experimental data of the original investigators were recalculated on the basis of later knowledge of the temperature scale, fundamental constants, and the properties of He4. In some cases, limitations were imposed on these recalculations by the incomplete reporting of the experimental data by the original investigator.

After working independently, van Dijk and Clement cooperated to resolve their differences. They met first in Leiden, August 1955, and later in Washington, summer of 1957. From January 22 to March 14, 1958, Logan worked at Leiden, and he later represented Clement at a conference in Leiden, June 1958, at which agreement was reached on the “1958 He4 Scale.” This cooperation was an important factor in the improvement of the scale.

Where the differences between the values obtained by handling the experimental data differently are largest (4 millidegrees), the “1958 Scale” falls between the extremes. At other places it is close to the mean of these values, and at no place does it deviate by more than 2 millidegrees from the mean. The estimated uncertainty of the “1958 He4 Scale” is accordingly ±2 millidegrees between 1° and 4.5° K. At higher temperatures, the estimated uncertainty is larger.

Now that the International Committee on Weights and Measures has recommended the “1958 He4 Scale” as an international standard, it is presumed that henceforth the International Committee on Weights and Measures will take the initiative in improving the scale when changes are needed.
Before the International Committee on Weights and Measures assumed responsibility for the He4 vapor pressure scale, the Commission on Very Low Temperature Physics of the International Union of Pure and Applied Physics concerned itself with the scale. This began with the informal meeting in Amsterdam in 1948 that resulted in the “1948 Scale.” At the Low Temperature Conferences sponsored by the Commission on Very Low Temperature Physics of the International Union of Physics at Paris in 1955, and at Madison, Wisconsin, in 1957, sessions were held at which the He4 vapor pressure scale of temperatures was discussed.

The National Bureau of Standards sponsored meetings for discussion of the helium vapor pressure scale of temperatures, held at the NBS during the spring meetings of the American Physical Society in Washington, 1955 and 1957. Also, the NBS encouraged cooperation in reaching national and international agreement on the scale. It initiated or promoted the meetings for discussion of the differences between the L55 and 55E Scales proposed respectively by van Dijk and Durieux, and by Clement. These were the meetings held August 26 and 27, 1955, in Leiden (before the Low Temperature Conference in Paris) [10]; July 30, 31, and August 1, 1957, in Washington (before the Low Temperature Conference in Madison) [11]; and June 13, 14, and 16, 1958, in Leiden (before the meeting of the Advisory Committee on Thermometry of the International Committee on Weights and Measures in Sèvres) [12]. Also, the National Bureau of Standards promoted the arrangement which sent Dr. Logan of the U.S. Naval Research Laboratory to work in the Kamerlingh Onnes Laboratory from January 22 to March 14, 1958.

The Scale agreed upon at Leiden, June 13 to 16, 1958, was presented to the Advisory Committee on Thermometry of the International Committee on Weights and Measures at its meeting in Sèvres, June 20 and 21, 1958.
The recommendation of the Advisory Committee to the International Committee was as follows [12]:
  • “The Advisory Committee on Thermometry,
  • “having recognized the necessity of establishing a single temperature scale in the domain of very low temperatures,
  • “having noted the general agreement of the specialists in this field of physics,
  • “recommends for general use the ‘1958 He4 Scale,’ based on the vapor pressure of helium, as defined by the annexed table.
  • “The values of temperatures on this scale are designated by the symbol T58.”
The table of He4 vapor pressures that was sent to the International Committee with this recommendation was the table distributed at the Kamerlingh Onnes Conference on Low Temperature Physics at Leiden, June 23 to 28, 1958. It was published in the Proceedings of the Kamerlingh Onnes Conference [13].

On the recommendation of its Advisory Committee on Thermometry, the International Committee on Weights and Measures approved the “1958 He4 Scale of Temperatures” at its meeting at Sèvres, September 29 to October 3, 1958.

The table adopted by the International Committee on Weights and Measures was a table of vapor pressures at hundredth-degree intervals. This table was expanded by Clement and Logan to form table I of this paper, with millidegree entries. Table I was inverted to give tables II and III, which express T as a function of vapor pressure. Auxiliary tables were added, including a table of the differences between the 1958 Scale and other, earlier used scales. Linear interpolation is valid for all tables except at the lower-temperature end of table IV.

The assistance at Leiden of H. ter Harmsel and C. van Rijn, students of Dr. H. van Dijk at the Kamerlingh Onnes Laboratory, with the computations for the defining and auxiliary tables is gratefully acknowledged.

Various members of the Cryogenics Branch of the Naval Research Laboratory at Washington assisted with numerous calculations which contributed toward the development of the present scale. This assistance, especially that of Dr. R. T. Swim, is gratefully acknowledged.
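Since linear interpolation is stated to be valid for the defining tables, a table lookup reduces to a bracketing search plus a linear blend. The sketch below uses invented monotone (P, T) pairs standing in for the real 1958 table entries:

```python
import bisect

# Linear interpolation of T from a vapor-pressure table.
# The (pressure, temperature) pairs are made-up illustrative values,
# NOT entries from the 1958 He4 table.

def interpolate(table, p):
    """table: list of (pressure, temperature) pairs sorted by pressure."""
    ps = [row[0] for row in table]
    i = bisect.bisect_left(ps, p)
    if i == 0 or i == len(ps):
        raise ValueError("pressure outside table range")
    (p0, t0), (p1, t1) = table[i - 1], table[i]
    return t0 + (t1 - t0) * (p - p0) / (p1 - p0)

table = [(1.0, 1.10), (2.0, 1.20), (4.0, 1.35), (8.0, 1.55)]
t = interpolate(table, 3.0)   # halfway between the 2.0 and 4.0 rows -> 1.275
assert abs(t - 1.275) < 1e-12
```

The same routine applied to a T-indexed table gives the inverse lookup, which is what tables II and III provide in printed form.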

13.
The three spectral types of muscovite sheet mica, i.e., very pink ruby, light green, and dark green, were subjected to heat treatments at temperatures up to 600 °C. The changes in the apparent optic axial angle and in the absorption spectra (0.3 to 15 μ) were studied along with color.

The differentiation of muscovite sheet according to these spectral types extends to the behavior of apparent optic axial angle and to certain regions of the spectrum under heat treatment. The pink-associated absorption region (0.47 to 0.6 μ) can be enhanced or bleached away by appropriate thermal treatment, although the associated infrared multiplet at 3 to 3.5 μ is little affected. The absorption band at 12 μ increases in intensity with temperature of treatment. It is suspected that the 0.47 to 0.6 μ absorption is the result of color centers.

It has been shown that measurable differences exist in apparent optic axial angle and absorption spectrum, as well as in color, for muscovite sheet micas. These differences indicate that there must be basic chemical and structural variations. They further provide a quantitative, though complex, categorization of the material [1]. The present paper reports the effect of heat treatment on color, apparent optic axial angle, and absorption spectrum for several of the representative categories of the material so established. The treatments were at temperatures of 600 °C and less, usually considered to be below the decomposition point.

14.
The spectrum of the neutral bromine atom, Br I, has been newly investigated by using electrodeless discharge tubes as light sources. The observations have led to a list of wavelengths and estimated intensities for 1253 spectral lines in the range 1067 to 24100 Å. The number of known energy levels has been increased to 123 even and 128 odd levels, as compared with the 27 even and 33 odd levels previously known. All predicted energy levels of the 4s2 4p4 ns, np, nd, nf electron configurations from 0 to ~93250 K have been discovered. The observations in the vacuum ultraviolet establish that the positions of all the levels lying above those of the ground configuration, as given in the compilation Atomic Energy Levels, Vol. II (1952), should be increased by 6.7 K. All but 26 faint lines of Br I have been classified. A total of 67 levels has been ascribed to the 4s2 4p4 nf configurations. It is demonstrated that the nf configurations exhibit almost pure pair coupling. The very regular (3P2)nf [5]11/2 series yields for the principal ionization energy of Br I the value 95284.8 K.

15.
Heat flow in adiabatic calorimeters of various shapes and materials is described in terms of linear partial differential equations. From these equations it is deduced that in the intermittent heating method the heat exchange between the calorimeter and the adiabatic shield due to transients at the beginning and end of the heating period can be made to cancel. The remaining heat exchange is the same for intermittent or continuous heating methods and can be treated as the sum of effects due to gradients set up by heat flow (1) from the shield to the environment and (2) from the shield and calorimeter heaters to raise the temperatures of the shield and calorimeter, respectively. The first effect can be accounted for by measurements during fore and after periods in intermittent calorimetry and by varying the heating rate in continuous calorimetry. Under certain conditions the second effect can be accounted for by measurements with the empty calorimeter. Variation in heating rate fails as a test for the magnitude of the second effect.

16.
17.
This paper considers the resolution limits of those analyzers and oscillatory systems whose performance may be represented by a second-order differential equation. The “signal uncertainty” product Δf·Δt is shown to be controlled by the ability of a system to indicate changes in energy content. The discussion refers the functioning of the system to a signal space whose coordinates are energy, frequency, and time. In this signal space, the product of the resolution limits, U = (ΔE/E0)(Δf/f0)(Δt/T0), is the volume of a region within which no change of state in the system may be observed. Whereas the area element Δf·Δt is freely deformable, no operations upon either Δf or Δt can further the reduction of the energy resolution limit. Thus U is irreducibly fixed by the limiting value of ΔE/E0. By considering the effects of noise upon ΔE/E0, and thus upon U, the paper demonstrates the rise of statistical features as signal-to-noise ratios decrease.

Functional relationships derived from ΔE/E0 and U are tabulated. These equations facilitate computation of the limits of observable changes of state in a system, and they provide guidance for the design of experiments to apportion the uncertainties of measurement of transient phenomena as advantageously as possible. A reference bibliography and appendices giving somewhat detailed proofs are included.

The basis of this paper is the consideration that the indication of most instruments used in measurement represents either the storage of energy or the flow of power.
The least changes that the instruments can indicate, therefore, are controlled by the smallest discernible change in energy storage or power flow.

The subject of the resolution limits of measuring instruments, in terms of the least amounts of frequency change and the least time interval in which a change may be detected, has been treated by several authors, among whom one may cite as examples Gabor, Kharkevich, and Brillouin, and, while this paper was being revised, Pimonow.1 The present author has also discussed this relationship for scanning analyzers, and indicated that there were circumstances in which limitations were introduced by the presence of a least discernible increment of power or energy.2 These papers (ref. 2) are quoted, in addition to the prior work, by Pimonow.

Gabor pointed out, by analogy to quantum theory, that there was a “quantum” of information that could be described by the product of differentials representing the least discriminable increment of frequency that could be observed in an increment of time. This relation arose from the application of the Fourier transform to relate an increment of time to its corresponding increment in the frequency domain. The product

Δf · Δt ≈ 1

was defined by Gabor as the “Logon.” The fact that, as he says, the product is “of the order of unity” is a consequence of the particular normalization he used in computing the Fourier transform for Gaussian pulses. A similar relation was presented by Brillouin, but as he computed Δt in terms of the half-powers of brief, symmetrical pulses, he found a somewhat different normalization factor, and obtained

Δf · Δt = 1/(4π).

Kharkevich adopted a somewhat more general expression for this equation, also in terms of a normalized Fourier transform, by first writing

Δf · Δt = A

and computing A for pulses of various forms. He remarked that A might differ if other criteria were chosen, but related A only to the form of the signal.
Further, he pointed out that A is independent of the damping of the system.

The studies by Gabor, Kharkevich, and Brillouin were all carried out for essentially noiseless systems. This paper, on the other hand, does not normalize for unit energy, but considers the energy storage and dissipation in systems whose performance may be described by a linear differential equation of the second order. Thus, by dealing with the energy stored as well as with the time and frequency, we are able to study the response of a system to signals other than variously shaped pulses of unit energy, and to signals in noise as well as to noiseless signals.

We can also consider, in this way, the case which has been omitted from the previous work: the response of the system which may in itself have a “least count”3 or inherent internal noise.

We introduce the means for taking into account the presence of noise with a signal, internal noise in an analyzer, or the least discernible indication of the analyzer (which may be a step limitation, such as a digital step or a reading limit) by discussing the limits of the analyzer’s performance as being fixed by the least change in energy storage, ΔE, that can be resolved under the circumstances of analysis. Several conditions may combine to fix the value of ΔE. For example, an analyzer having appreciable self-noise may be used to detect a signal in noise. As a rule, the internal and external noise sources in that case would be incoherent, and the sum of the noise energies stored in the analyzer from those sources would fix the value of ΔE.

The system for which this discussion is carried out is a system whose working may be described by a linear differential equation of the second order. This behavior is common to many physical systems occurring in nature, and to many instruments used to observe natural phenomena. All of these systems share the same properties, because they are properties inherent in the differential equation that describes them.
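As a numerical aside on the normalization factors quoted above: with RMS widths taken over the energy densities |g(t)|² and |G(f)|², a Gaussian pulse attains the minimum uncertainty product Δf·Δt = 1/(4π). The sketch below checks this by direct computation (it is an illustration of the mathematics, not a procedure from the paper):

```python
import numpy as np

# Verify that a Gaussian pulse attains Delta_f * Delta_t = 1/(4*pi),
# with Delta_t and Delta_f defined as RMS widths of |g|^2 and |G|^2.

t = np.linspace(-50.0, 50.0, 1 << 16)
g = np.exp(-t**2 / 2.0)                 # Gaussian pulse, sigma = 1

# RMS duration from the energy density |g(t)|^2.  The grid is uniform,
# so the sample spacing cancels in the moment ratio.
w = g**2
dt_rms = np.sqrt(np.sum(t**2 * w) / np.sum(w))

# Spectrum via FFT; RMS bandwidth from |G(f)|^2.  The time shift of the
# grid only adds a linear phase, which |G| ignores.
G = np.fft.fft(g)
f = np.fft.fftfreq(t.size, d=t[1] - t[0])
W = np.abs(G)**2
df_rms = np.sqrt(np.sum(f**2 * W) / np.sum(W))

product = dt_rms * df_rms               # expect 1/(4*pi) ~ 0.0796
assert abs(product - 1.0 / (4.0 * np.pi)) < 1e-6
```

Other pulse shapes give larger products, which is Kharkevich's point that the constant A depends on the form of the signal (and on the width criterion chosen).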
By virtue of the second-order term, they may be seen to be capable of storing oscillatory energy reversibly. They will respond with a sinusoidal output after excitation by shock or noise. Under sinusoidal excitation, they will respond selectively to excitation of various frequencies.

Systems to be discussed in this paper are those in which the storage of energy occurs in the coordinates describing the system: these systems are described by a second-order differential equation with constant coefficients; i.e., the system parameters are not affected by the energy storage process. By invoking the Boltzmann-Ehrenfest adiabatic principle,4 it is also possible to apply many of these equations to systems in which energy is stored by a change of parameters. However, this matter has not been investigated in detail.

Although most of the systems to which the second-order differential equation is applicable take part in time-varying phenomena, there is nothing inherent that restricts the equation to functions of time. Certain spatial distributions also may be described by the equation, such as the magnetization on magnetic tape, some types of optical images, and some diffraction effects. Thus, although this paper will deal with application of the equation to time-varying phenomena, its conclusions are also applicable, with a judicious choice of variables, to spatial distributions.

For the sake of a coherent structure upon which to base this paper, we choose a mechanical system of inertia M, dissipation (proportional to velocity) D, and coefficient of restitution k. This system has a single degree of freedom, along the coordinate x, and its force-free behavior is given by solutions of the homogeneous equation:

M ẍ + D ẋ + k x = 0.

When energy is stored in the system, it is dissipated at a mean rate which bears a constant relationship to the amount of energy stored. This constant is a function of the parameters of the system.
In terms of dissipation, the constant is frequently expressed by the relative damping, γ, the ratio of the damping of the system to the critical damping for no oscillation. A reciprocal quantity, the “figure of merit,” Q, is commonly used in communication problems. These two constants are related through the equation

γ = 1/(2Q).

Because we are more concerned here with storage than with dissipation, the quantity Q will be used in the discussion.

The conventional definition for Q applies when the system is driven at its resonance frequency; at other frequencies the ratio of the storage of energy to the rate of dissipation depends upon the driving frequency. When the system is free of excitation, the conventional definition of Q again applies. This value of Q will be denoted as Q0, to distinguish it from the more general definition of Q to be applied in the appendix. Defining as the natural frequency of the system the quantity f0, the natural frequency of the system in the absence of damping,

f0 = (1/2π)√(k/M)

and

Q0 = 2π × (peak energy stored at the natural frequency)/(energy dissipated per cycle at the natural period).

In terms of the parameters of the system, this definition of Q0 is equivalent to the ratios

Q0 = 2πf0M/D = k/(2πf0D) = √(kM)/D.

The differential equations for the transient and driven response of the system can be expressed in terms of Q0, D, and f0. For the force-free equation:

ẍ/(2πf0) + ẋ/Q0 + 2πf0x = 0

and, for the driven response of the system to a sinusoidal force of amplitude A and frequency f:

ẍ/(2πf0) + ẋ/Q0 + 2πf0x = [A/(Q0D)] e^(j2πft).

Letting the ratio of the driving frequency to the natural undamped frequency be represented by φ ≡ f/f0, we can write down the phase relation between the driving force and the resulting motion.
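The equivalence of the three parameter expressions for Q0 follows from substituting f0 = (1/2π)√(k/M), and can be spot-checked numerically (the parameter values below are arbitrary):

```python
import math

# Check that 2*pi*f0*M/D, k/(2*pi*f0*D), and sqrt(k*M)/D agree once
# f0 = (1/(2*pi))*sqrt(k/M).  M, D, k are arbitrary illustrative values.

M, D, k = 2.0, 0.05, 8.0
f0 = math.sqrt(k / M) / (2.0 * math.pi)

q_a = 2.0 * math.pi * f0 * M / D
q_b = k / (2.0 * math.pi * f0 * D)
q_c = math.sqrt(k * M) / D

gamma = 1.0 / (2.0 * q_a)       # relative damping, gamma = 1/(2*Q0)

assert abs(q_a - q_b) < 1e-9 and abs(q_b - q_c) < 1e-9
```

For these values Q0 = 80, i.e., a lightly damped system (γ = 1/160), which is the regime in which the Q description is most useful.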
The phase angle, θ, when the steady-state condition has been attained is given by

θ = tan⁻¹[φ / (Q0(1 − φ²))]

and the energy stored in the steady-state condition is given by:

Es = (A²/2D)(Q0/2πf0) · 1/[Q0²(1 − φ²)² + φ²].

The application of the Fourier transform can be considered tantamount to referring the behavior of the system to one or the other of two mutually perpendicular planes. The steady-state condition is described by the representation of the state of the system in the energy-versus-frequency plane; it of necessity deals with the steady state because the variable time is not involved. The transient behavior of the system is represented in the energy-versus-time plane. It is easier to understand that this is orthogonal to the frequency representation if we omit the normalization often used, of choosing the unit of time in terms of the natural period of the system. Nevertheless, it is more convenient to express the behavior of a system in terms relative to its natural parameters, and for the sake of simplicity of equations, much of the discussion will be related to the natural properties of the system.

Three properties serve to specify the behavior of an analyzer described by a second-order differential equation: its undamped natural frequency, f0; its figure of merit, Q0; and its "least count," ΔE, the least energy change that can be resolved by the system. The changes in energy content of the system may be thought of as taking place within the signal space bounded by the energy-time and energy-frequency planes, and the representation of the behavior of the system as a function of time or of frequency may be considered as projections upon the principal planes. Since the actual use of the system as an analyzer is never wholly steady-state or completely broadband, the actual process of analysis may be considered as taking place in some plane within the signal space bounded by the principal planes.
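A minimal sketch of the phase and stored-energy relations above (the parameter values are assumptions for illustration, not from the paper) shows the quarter-cycle lag at resonance and the sharp energy selectivity for large Q0:

```python
import math

Q0 = 50.0                    # assumed figure of merit
A, D, f0 = 1.0, 0.02, 100.0  # assumed drive amplitude, dissipation, natural frequency

def phase(phi):
    # theta = arctan[ phi / (Q0 (1 - phi^2)) ], kept in (0, pi) via atan2
    return math.atan2(phi, Q0 * (1.0 - phi * phi))

def stored_energy(phi):
    # Es = (A^2/2D) (Q0/2 pi f0) / [Q0^2 (1 - phi^2)^2 + phi^2]
    return (A * A / (2.0 * D)) * (Q0 / (2.0 * math.pi * f0)) / (
        Q0 * Q0 * (1.0 - phi * phi) ** 2 + phi * phi)

# At resonance (phi = 1) the motion lags the drive by a quarter cycle,
# and the stored energy falls off sharply on either side for large Q0:
print(phase(1.0), stored_energy(0.9), stored_energy(1.0), stored_energy(1.1))
```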
Depending upon the information sought, the plane of the analyzer will be close to one or the other of the principal planes.

Ordinarily, an analyzer indicates a running time average over the energy, Es, stored in it. For steady-state signals the indication becomes proportional to the input power; for signals of very brief duration the analyzer responds ballistically and thus gives an indirect indication of energy. The limit of resolution is fixed by the least change in energy storage, ΔE, that can be resolved under the circumstances of analysis. In this discussion we shall be dealing with incremental ratios. As the differential will always be considered jointly with the total energy stored over the same time interval, it will be possible in general to discuss ratios of incremental power, ΔW, to input power, W0, or incremental energy, ΔE, to energy stored, Es, interchangeably.

As a function of time, the building up and decay of the energy stored within the system are exponential processes. Therefore it proves convenient to describe the behavior of the system in terms of an exponential variable. Thus, to express the changes in energy storage, we choose an exponential coefficient, α. The energy resolution limit, ΔE/E0, may be expressed in terms of α through the following definition: if the initial amount of energy stored in the analyzer is E0, and the minimum change of energy that can be discerned is ΔE, then in terms of the exponential variable α the equivalent statement is that the energy content must decrease to an amount e^(−α) times its original value for the change to be at least equal to the minimum change discernible. Expressed as an equation, this limit is given by ΔE = (1 − e^(−α))E0 for energy, and, in the many cases in which we are dealing with energy flow through the system, alternatively as ΔW = (1 − e^(−α))W0 for power.
From the definition of α in terms of ΔE and E0, it is evident that:

α = −ln(1 − ΔE/E0).

However, it is not along the time axis alone that α proves such a convenient function. Because of the close relation between exponential and angle functions, it also yields simple equations for the behavior of the system as a function of frequency. For the foregoing reasons, we shall describe the manner in which the independent variable, the energy increment ΔE, influences the resolution limits in frequency and time in terms of the variable α, and we will then return to consideration of what the equations describing these resolution limits mean in terms of ΔE. We will also discuss special types of noise conditions that may give rise to the irreducible increment that ΔE represents.

The conditions under which α sets the resolution limit along the time axis are derived by considering the system to contain an amount of energy E0 at time t = 0, and at that instant, and for some time subsequent to it, to be free of any driving force. The resolution limit along the time axis is fixed by the least time interval, Δt, during which the system is capable of changing its energy storage by the factor e^(−α). This corresponds to dissipating at least the discernible energy increment, ΔE, during the time Δt. The resolution limit stated in terms of the natural period of the system, T0, is thus given by Δt/T0.

When a system of this sort is used as an analyzer, the time interval over which the observation takes place must be of sufficient length for some change to be indicated. Thus the observation interval, Δτ, must equal or exceed the least time interval Δt. (In the previous papers cited in reference 2, the observation interval Δτ was used in place of the least time increment Δt. Use of the least time increment converts several previously found inequalities to equations.)

It is a very close approximation (see appendix 1) to consider only the exponential factor in the decay of energy in the system.
The oscillatory terms arising from the sinusoidal nature of the dissipation process are always relatively minor. Thus, regardless of the precise initial conditions,

E/E0 |(t = Δt) = e^(−α) = exp[−(2π/Q0)(Δt/T0)]

and, therefore, the resolution limit along the time axis, expressed in terms of the natural undamped period of the system, is

Δt/T0 = αQ0/2π.

From this equation one can see that the ratio of α to Δt is twice the real coordinate of the poles of the system on a Nyquist diagram.

The system is selective with respect to the sinusoidal frequency of the driving force which acts as a source for the energy it stores. This makes it suitable for the detection of sinusoidal frequencies falling within its range of response, or a group of similar systems with different f0's may be used in the analysis of the components of complex signals. For this purpose, the observation takes place in a plane approximating the energy-frequency plane. Along the frequency axis, one may speak of a frequency resolution limit in this sense: if one knows the natural frequency f0 to which an analyzer is tuned, then a maximum indication of energy storage for a steady-state sinusoidal signal corresponds to a signal in the vicinity of f0. Until the energy storage has changed by an amount in excess of ΔE or, when expressed as a relative proportion, a factor in excess of ΔE/E0, the departure from maximum indication is not observable. The change in frequency required to produce this effect, Δf, corresponds to the frequency resolution limit Δf/f0.

As one can see from the equation for the energy stored in the steady-state condition, the response of the system to a sinusoidal frequency other than its natural undamped frequency is diminished to a fraction, F, of the maximum energy that would be stored at the natural frequency.
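The time-resolution limit above can be checked with a short numerical sketch (the Q0 and ΔE/E0 values are arbitrary assumptions): decaying for the interval Δt given by Δt/T0 = αQ0/2π loses exactly the least discernible energy fraction.

```python
import math

Q0 = 100.0         # assumed figure of merit
dE_over_E0 = 0.10  # assumed least discernible energy fraction

alpha = -math.log(1.0 - dE_over_E0)        # alpha = -ln(1 - dE/E0)
dt_over_T0 = alpha * Q0 / (2.0 * math.pi)  # time resolution limit dt/T0

# Check: after dt, the free decay exp(-(2 pi/Q0)(t/T0)) has lost exactly dE/E0.
decay = math.exp(-(2.0 * math.pi / Q0) * dt_over_T0)
print(dt_over_T0, decay, 1.0 - decay)
```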
Expressed in terms of the ratio of the frequency of the driving force to the natural undamped frequency of the system (φ = f/f0), the fraction is

F = 1/[φ² + Q0²(1 − φ²)²].

The frequency limits for the region Δf surrounding f0 are found by solving for the condition e^(−α) = F. Solving for the upper and lower frequency limits, fb and fa, for which the energy stored in the resonating system is a fraction e^(−α) of the peak response yields the expression:

(fb² − fa²)/f0² = (2/Q0)√[(e^α − 1) + 1/(4Q0²)].

And to a very close approximation, the frequency resolution limit comes out as:

Δf/f0 = √(e^α − 1)/Q0

when Δf is defined as (fb − fa).

The foregoing discussion may now be summarized in geometrical form by reference to a three-dimensional figure in signal space. The limits of resolution of an analyzer may be represented by an irreducible region in a three-dimensional space that must be exceeded before any information about a signal can be found. This space is shown in figure 1. As one can see from the equations, the figure of merit, Q0, enters into the resolution limits for time and frequency in a complementary way. Thus, it determines the relative proportions between the resolution limits Δf/f0 and Δt/T0. One may therefore apportion the relative uncertainties in frequency and time to suit the requirements of an experiment whenever one can control the Q of a system. This process is discussed more fully in "Uniform Transient Error" (see footnote 2). But the product of the resolution limits, the "uncertainty equation," depends in irreducible fashion upon α.

Figure 1. Resolution limits making up the indication limit, U. T0 is the period corresponding to f0, the natural undamped frequency of the system.
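The closed form for (fb² − fa²)/f0² and the approximation for Δf/f0 can both be checked numerically; the condition e^(−α) = F is a quadratic in u = φ², whose roots give fa and fb exactly (Q0 and α below are assumed values for the sketch):

```python
import math

Q0 = 100.0     # assumed figure of merit
alpha = 0.05   # assumed exponential energy-resolution coefficient

# Exact half-limits: with u = phi^2, the condition phi^2 + Q0^2 (1 - phi^2)^2 = e^alpha
# becomes the quadratic Q0^2 u^2 + (1 - 2 Q0^2) u + (Q0^2 - e^alpha) = 0.
a = Q0 * Q0
b = 1.0 - 2.0 * Q0 * Q0
c = Q0 * Q0 - math.exp(alpha)
disc = math.sqrt(b * b - 4.0 * a * c)
u_a = (-b - disc) / (2.0 * a)
u_b = (-b + disc) / (2.0 * a)

# (fb^2 - fa^2)/f0^2 from the closed form quoted in the text:
closed_form = (2.0 / Q0) * math.sqrt(math.exp(alpha) - 1.0 + 1.0 / (4.0 * Q0 * Q0))

df_exact = math.sqrt(u_b) - math.sqrt(u_a)          # (fb - fa)/f0, exact
df_approx = math.sqrt(math.exp(alpha) - 1.0) / Q0   # the text's approximation

print(u_b - u_a, closed_form, df_exact, df_approx)
```

For Q0 of this size the approximation agrees with the exact limits to well under one percent.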
E0 is the maximum sinusoidal energy stored.

Thus the signal uncertainty equation, expressed in terms of α, turns out to be

Δf·Δt = (α/2π)√(e^α − 1)

where α is related to ΔE through the definition of the least change ΔE that may be observed in the total stored energy E0.

The basic form of the resolution equations results from substitution for the exponential coefficient, α, in the equations already derived:

Δt/T0 = −(Q0/2π) ln(1 − ΔE/E0)

Δf/f0 = (1/Q0)√[(ΔE/E0)/(1 − ΔE/E0)]

Δf·Δt = (1/2π)√[(ΔE/E0)/(1 − ΔE/E0)] · ln[1/(1 − ΔE/E0)].

The volume corresponding to the irreducible limits in signal space, a quantity here defined as the "indication limit," can be computed from the signal uncertainty equation. It is defined by the triple product of the resolution limits along the three axes,

U = (ΔE/E0)(Δt/T0)(Δf/f0),

and it is just the volume of the elementary figure shown in figure 1. Its computation might at first glance appear to be somewhat redundant to the signal uncertainty equation. In fact, its functional form permits factoring the expression for U into components that have an interesting connotation in physical measurements.

Thus, since

U = (ΔE/E0)(Δf/f0)(Δt/T0) = (ΔW/W0)(Δf/f0)(Δt/T0),

the magnitude of U expressed in terms of the variable α is

U = (αe^(−α)/2π)(e^α − 1)^(3/2)

and U is given in terms of the energy resolution limit, ΔE/E0, by:

U = (1/2π)[(ΔE/E0)/(1 − ΔE/E0)]^(3/2) · [−(1 − ΔE/E0) ln(1 − ΔE/E0)].

It should be noted that the indication limit U and the signal uncertainty relation Δf·Δt depend only upon the energy resolution limit. The energy resolution limit is an independent variable and cannot be reduced by operations altering the function of the system along the f and t axes, for instance, changing the Q0 of the system.

For a system operating at its optimum, the irreducible energy or power increment for the system would be set by the noise energy stored in the system. This noise energy may be due to Brownian motion in the system, for example. Noise of external origin may be present with the driving signal.
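The two forms of the indication limit, one in α and one in ΔE/E0, should agree through the substitution α = −ln(1 − ΔE/E0); a brief sketch (the value of ΔE/E0 is an arbitrary assumption) confirms this:

```python
import math

def U_from_alpha(alpha):
    # U = (alpha e^(-alpha) / 2 pi) (e^alpha - 1)^(3/2)
    return alpha * math.exp(-alpha) / (2.0 * math.pi) * (math.exp(alpha) - 1.0) ** 1.5

def U_from_energy_ratio(x):
    # x = dE/E0;  U = (1/2 pi) [x/(1-x)]^(3/2) [-(1-x) ln(1-x)]
    return (x / (1.0 - x)) ** 1.5 * (-(1.0 - x) * math.log(1.0 - x)) / (2.0 * math.pi)

x = 0.25                     # assumed energy resolution limit dE/E0
alpha = -math.log(1.0 - x)   # corresponding exponential coefficient
print(U_from_alpha(alpha), U_from_energy_ratio(x))
```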
In the general case the intrinsic and extrinsic noise powers will not be coherent, and the total noise energy stored in the system may usually be considered as the sum of contributions.

The prior papers relating the resolution limits to the relative amounts of signal and noise present (see footnote 2) were based upon a derivation subject to the restriction that the signal-to-noise ratio be high. In those papers, the relationship found was

Δf·Δτ ≥ (1/2π)(S/N)^(−3/2)

where S and N were signal and noise powers, respectively, and where the use of the observation interval Δτ rather than the limiting time increment Δt made the relation an inequality.

Such a restriction was, in fact, not required. A tractable and useful expression for the product Δf·Δt can be derived, valid for all values of S/N.

In the case where both signal and noise energy are stored, the total energy present in the system is E0 = S + N. An increment in the signal energy stored (or in the input signal power) can be detected only if it equals or exceeds the minimum energy increment the system is capable of indicating. From this definition and the definition of the exponential factor α:

ΔS/(S + N) ≥ ΔE/E0 = ΔW/W0 = (1 − e^(−α)).

And, where the least energy increment is controlled by the noise energy stored, ΔE → N, or, very nearly,

ΔE/E0 = N/(S + N).

Thus, where the limit of detection is set by the noise energy:

α = ln[(N + S)/S]

from which the signal uncertainty becomes:

Δf·Δt = (1/2π)√(N/S) · ln(1 + N/S).

For high values of S/N, this expression approaches the limit (1/2π)(S/N)^(−3/2), a result found previously (see footnote 2).

An especially interesting interpretation can be made from the form of the expression for the indication limit, U, when it is written in terms of the signal-to-noise ratio:

U = [(S/N)^(−3/2)/2π] · [−(S/N)/(1 + S/N) · ln((S/N)/(1 + S/N))].

The first factor can be recognized to be the expression for the signal uncertainty, Δf·Δt, when the signal-to-noise ratio is high.
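The approach of the general signal uncertainty expression to the earlier high-S/N limit can be seen numerically (a sketch, with a few assumed S/N values):

```python
import math

def signal_uncertainty(snr):
    # df * dt = (1/2 pi) sqrt(N/S) ln(1 + N/S), written with N/S = 1/snr
    r = 1.0 / snr
    return math.sqrt(r) * math.log(1.0 + r) / (2.0 * math.pi)

def high_snr_limit(snr):
    # the earlier high-S/N result, (1/2 pi) (S/N)^(-3/2)
    return snr ** -1.5 / (2.0 * math.pi)

for snr in (10.0, 100.0, 1000.0):
    print(snr, signal_uncertainty(snr), high_snr_limit(snr))
```

The exact expression always lies slightly below the limiting form, and the ratio tends to unity as S/N grows.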
The term in parentheses also has a recognizable functional form, and in fact it is possible to relate it to the limiting probability of information transfer.

To facilitate discussion, the factor N may be cleared from the fractions in the term, giving it the form −S/(S+N) · ln[S/(S+N)]. As one can see from the series expansion for the natural logarithm, this product approaches the value N/S for large values of S relative to N, and the indication limit then is merely the trivial product of the reciprocal of the signal-to-noise ratio multiplied by the signal uncertainty function. It is quite another matter as S/N → 0. Then the indication limit may be considered to consist of two meaningful terms: one is the same signal uncertainty function that has already been derived for high signal-to-noise ratios (see footnote 2) (i.e., for ΔE small relative to E0); the other is a modulating function that we will now proceed to relate to statistical matters.

Where one is dealing with the statistical presence of noise and signal, the long-time average signal-to-noise ratio may be described in terms of expected values. The following definition of expected value is taken from a textbook on statistics:5 "The expected value of a random variable or any function of a random variable is obtained by finding the average value of the function over all possible values of the variable…. This is the expected value, or mean value of x. It is clear that the same result would have been obtained had we merely multiplied all possible values of x by their probabilities and added the results … we might reasonably expect the average value of x in a great number of trials to be somewhere near the expected value of x."

A special case that is quite common experimentally is one in which the level distributions of signal and noise are precisely the same. If one has either a signal or noise, the probability of the signal being P, then it follows that the probability of the noise is (1 − P).
If the signal and noise have the same level distribution, G, the modulating term becomes

−S/(N + S) · ln[S/(N + S)] → −PG/[PG + (1 − P)G] · ln{PG/[PG + (1 − P)G]}

and thus the term that modulates the signal uncertainty function can be seen to reduce to the form −P ln P, where P is the probability of signal occurrence. From very simple considerations, therefore, the limit of detection is shown to be related directly here to a limiting probability of information transfer, a quantity usually derived in information theory by considering signals and noise to be made up of equal-sized unit impulses.

It is interesting that this point was arrived at in the reverse direction by Woodward and Davies.6 They started with the P ln P term from Shannon's information function and demonstrated, from considering the signals and noise in radar detection, that the quantity P was related to the signal-to-noise ratio for radar signals.

However, the modulating term in its original form, in terms of S and N, may be seen to represent a generalization of the function defined in information theory as the channel capacity, H. This form is more closely related to the ordinary formulation describing the entropy of a system in terms of the probability distribution of energy states within it.7

The question of whether a signal is detected in the presence of noise depends upon what the investigator chooses to consider a reasonable limiting probability in deciding whether a signal has been detected. A preponderance of only 1 percent above random distribution would correspond to a much smaller signal-to-noise ratio than would 90 percent.
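The reduction of the modulating term to −P ln P, independent of the common level G, can be sketched directly (the values of P and G below are arbitrary assumptions):

```python
import math

def modulating_term(S, N):
    # -(S/(S+N)) ln(S/(S+N))
    p = S / (S + N)
    return -p * math.log(p)

# When signal and noise share the same level distribution G, S = P*G and
# N = (1-P)*G, and the term reduces to -P ln P regardless of G:
P, G = 0.3, 7.5   # assumed probability of signal and common level
print(modulating_term(P * G, (1.0 - P) * G), -P * math.log(P))
```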
In fact, the common definition that gives the limit of detection as a signal-to-noise ratio of unity corresponds directly to setting the criterion for the limiting probability of detection at 50 percent.

For the case N = S, the −P ln P term becomes −½ ln ½, and the indication limit becomes:

U1 = (1/2π)(−½ ln ½) = ln 2/4π

for which the signal uncertainty function is:

Δf·Δt = ln 2/2π.

These are the limiting forms also where signals and noise are transmitted as "bits."

Thus far we have discussed noise from the standpoint of the noise energy stored in the system. This energy is, so far as the actual storage in the system is concerned, not distinguishable from energy stored that might be derived from purely sinusoidal excitation. If all excitation were withdrawn, and the system left in the force-free state, the energy in it would be dissipated in the usual exponential decay, and the oscillations during the decay would be essentially of frequency f0, provided the system had moderate energy storage capacity (Q0 > 1).

Therefore, unless we have some other means for distinguishing among the sources of the energy stored in the system, such as knowledge of the spectral character of pulsed signals applied to the system, or knowledge of the amount of energy found present in the system when it is considered to be free of any known source, we are left to regard as signal that part of the energy stored in the system that was produced by a sinusoidal signal of power Si. The remaining driving sources, of more or less broad spectral distribution, would in general be classified as noise.

If the system is being driven at its natural frequency, the energy it stores will be Q0/2π times the energy supplied per cycle, for Q0 is 2π times the ratio of the energy stored to the energy dissipated during the cycle. At any other driving frequency, the energy stored will be weighted by the response, ρ, which relates the energy stored to the power supplied to the system.
ρ = (Q0/2πf0) / [f²/f0² + Q0²(1 − f²/f0²)²].

Suppose the noise within the system arises from a source whose spectral distribution is given by the power density function Nf. The system will store energy with a weighting factor of ρ. The total energy stored in the system due to excitation by noise will then be found from the integral:

N = ∫0^∞ Nf ρ dω.

Although we have discussed the behavior of the system in terms of cycles per second, the quantity we have designated as Q0 is defined directly in terms of the ratio of the energy stored to the energy dissipated per radian. As the parameter of the function we are integrating is Q0, we must choose the dimensionally similar variable in order to carry out the integration correctly. Thus we must integrate with respect to ω rather than f. This point is given in detail because it is an instance of the dimensionality of angles recently pointed out by C. H. Page.8

This integral may be evaluated easily for noise of constant energy per unit bandwidth in cycles per second; the result is then9

N = πNf/2

and it is independent of Q0 because the energy storage capacity of the system and its bandpass for noise are affected in a complementary fashion by changes in the figure of merit.

For several other types of noise, an approximate equivalent white-noise coefficient, nf, can be defined, for which the foregoing equations remain applicable. Given a noise whose spectral distribution is v(f), a mean-value "equivalent" white noise per unit bandwidth may be computed from the equation:

nf = [∫fa^fb v(f) df] / [∫fa^fb df].

Obviously, if v(f) equals a constant, then nf is the familiar white-noise coefficient. However, for several other types of noise the mean-value integration yields an equivalent nf which may be treated as a constant, with rather low residual error resulting from this approach. This can be seen from the fact that the system is selective with respect to the frequency components of the power sources from which it stores energy.
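The stated independence of the stored noise energy from Q0 can be verified by direct numerical integration (a sketch; the two Q0 values and the grid parameters are arbitrary assumptions, and Nf is taken as white noise of unit density):

```python
import math

def stored_noise_energy(Q0, Nf, steps=200000, phi_max=50.0):
    # N = integral over omega of Nf * rho, with rho = (Q0/2 pi f0)/[phi^2 + Q0^2 (1 - phi^2)^2]
    # and omega = 2 pi f0 phi, so the f0 factors cancel and N = Nf * Q0 * integral d(phi).
    h = phi_max / steps
    total = 0.0
    for i in range(steps):
        phi = (i + 0.5) * h  # midpoint rule
        total += 1.0 / (phi * phi + Q0 * Q0 * (1.0 - phi * phi) ** 2)
    return Nf * Q0 * total * h

# Both a low-Q and a high-Q system store close to pi*Nf/2, as the text states:
for Q0 in (5.0, 50.0):
    print(Q0, stored_noise_energy(Q0, Nf=1.0), math.pi / 2.0)
```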
Thus the restriction is only that the spectral distribution be changing slowly in the frequency region immediately surrounding f0. For the following noise distributions, the residual terms discarded amount, in the worst case, to no more than 12½ percent of the approximate value. For the distributions shown on the left, the respective mean values are shown on the right:

v(f) = kf                                  nf = (k/2)(fb + fa)
v(f) = k/f (constant energy per octave)    nf = k ln(fb/fa)/(fb − fa)

where, for this purpose, fb and fa are taken as the upper and lower half-energy limits of the response weighting function, ρ.

One can use the foregoing to predict the ratio of the energy stored from the sinusoidal excitation to the energy stored from the source of noise. Thus the effective signal-to-noise ratio in terms of energy stored in the system is given by:

S/N = 2Siρ/(πnf).

The signal-to-noise ratio will be a maximum if the frequency of the sinusoidal excitation is just equal to the natural undamped frequency of the system. (For this condition, ρ = Q0/2πf0.) The maximum ratio is:

S0/N = SiQ0/(π²nf f0).

The foregoing derivation has meaning also with respect to any system described by a second-order differential equation in which some event of brief duration occurs. A quantity directly analogous to the indication limit may be computed. In this instance, the expressions relate the resolution limits that bound the occurrence of the event, specifying the least increments of energy, frequency, and time within which the event can take place.

Further, the limiting energy increment, ΔE, need not be formed by noise. For example, in a pulse-height system it would correspond to the smallest step in pulse height that might be indicated. As another example, an atomic system described by a second-order differential equation might have ΔE subject to quantum limitations.

18.
The thermodynamic ionization constants of the six dichloroanilines and the six dichlorophenols in aqueous solution at 25 °C have been determined by the spectrophotometric method. The pK values found are recorded in the table below.
Substituents   pK: dichloroaniline   pK: dichlorophenol   ΔpK(1): dinitrophenol   ΔpK(1): dichlorophenol   ΔpK(1): dichloroaniline   ΔpKA(2)
2,3–           1.761 (0.005)         7.696 (0.006)        0.65                    −0.05                    0.16                      −0.026
2,4–           2.016 (0.004)         7.892 (0.005)        0.25                     0.06                    0.01                      −0.006
2,5–           1.529 (0.003)         7.508 (0.005)        0.40                     0.15                    0.03                      −0.057
2,6–           0.422 (0.004)         6.791 (0.003)        0.71                     0.27                    0.28                       0.045
3,4–           2.968 (0.005)         8.585 (0.006)        0.13                    −0.04                   −0.07                       0.013
3,5–           2.383 (0.003)         8.185 (0.004)        0.11                     0.07                    0.06                       0.037

a ΔpK(1) = pK(calc) − pK(obs), pK(calc) being calculated from pK values of the monosubstituted compounds. ΔpKA(2) ≡ pKA(calc) − pKA(obs), pKA(calc) being calculated by eq (6).

An approximately linear relation is found to exist between the pKA value of a dichloroaniline and the pKP value of the corresponding dichlorophenol. The relation is pKA = −9.047 + 1.401 pKP. This equation yields pKA values which differ from the observed by not more than 0.06 pK unit and, on the average, by 0.03 pK unit; it applies even when both substituents are in the ortho position.
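The quoted linear relation can be checked against the tabulated observed values; a short sketch reproduces the stated maximum and average deviations:

```python
# Observed pKA (dichloroaniline) and pKP (dichlorophenol) pairs from the table:
data = {
    "2,3": (1.761, 7.696),
    "2,4": (2.016, 7.892),
    "2,5": (1.529, 7.508),
    "2,6": (0.422, 6.791),
    "3,4": (2.968, 8.585),
    "3,5": (2.383, 8.185),
}

def pKA_predicted(pKP):
    # the approximately linear relation reported in the abstract
    return -9.047 + 1.401 * pKP

errors = {iso: abs(pKA_predicted(pKP) - pKA) for iso, (pKA, pKP) in data.items()}
print(errors)
print(max(errors.values()), sum(errors.values()) / len(errors))
```

The largest deviation is about 0.057 pK unit (the 2,5-isomer) and the mean deviation about 0.03, in agreement with the abstract.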

19.
Single crystals of Hg-12(n−1)n and infinite-layer CaCuO2 obtained at gas pressure P = 10 kbar     
J. Karpinski  K. Conder  H. Schwer  J. Löhle  L. Lesne  C. Rossel  A. Morawski  A. Paszewin  T. Lada 《Journal of Superconductivity》1995,8(4):515-518
Single crystals of Hg1−xPbxBa2Can−1CunO2n+2+δ (x = 0, 0.2, 0.5; n = 2, 3, 4, 5) and of the infinite-layer compound CaCuO2 have been grown using a high gas pressure. Resistivity measurements have been performed in fields up to 10 T. The Hg0.8Pb0.2-1234 single crystals, of a size up to 0.5×0.5 mm², have a Tc onset of 130 K. Single crystals of CaCuO2 of a size up to 2×1 mm² have a Tc onset between 70 and 100 K. X-ray structural refinements have been performed on the CaCuO2 and HgPb-12(n−1)n single crystals.

20.
Effect of Sintering Duration on the Thermal Conductivity of (Bi, Pb)-2223 Superconducting Pellets Between 10 and 150 K     
T. K. Dey 《Journal of Superconductivity》1998,11(2):279-284
The influence of sintering duration on the electrical resistivities and thermal conductivities of (Bi0.8Pb0.2)2Sr2Ca2Cu3O9.8+δ pellets with 0.11 < δ < 0.54 is reported between 10 and 150 K. The results indicate a gradual transformation of the 2212 phase to the 2223 phase; this transformation starts within 5 h of sintering in air at 840 °C. The thermal conductivity of the pellets sintered for shorter durations displays two maxima, at Tc0 and around 110 K, respectively. The shape and magnitude of these maxima depend on the relative amounts of the 2212 and 2223 phases present in the pellets. While the magnitude of the total thermal conductivity over the measured temperature range is strongly influenced by the duration of sintering, the phonon-dominant nature of heat transport is retained. The relative contribution of the electronic part (κE) to the total thermal conductivity (κ) remains small and does not change appreciably with sintering time.
