Similar Articles
20 similar articles found (search time: 46 ms)
1.
Employing data from a sample of 1,161 small firms, the paper draws broad comparisons between patterns of innovation expenditure and output, innovation networking, knowledge intensity and competition within Knowledge-Intensive Business Services (KIBS; N = 563) and manufacturing firms (N = 598). In so doing, KIBS are further disaggregated along lines proposed by Miles et al. (1995), that is, into technology-based KIBS (t-KIBS; N = 264) and professional KIBS (p-KIBS; N = 299). Detailing such broad patterns, however, is preliminary. The principal interest of the paper lies in identifying the factors associated with higher levels of innovativeness within each sector, and the extent to which such “success” factors vary across sectors. The results of the analysis appear to support some widely held beliefs about the relative roles of “softer” and “harder” sources of knowledge and technology within services and manufacturing (Tether, 2004). However, some important qualifications are also apparent.

2.
The main purpose of this corrigendum is to identify and rectify the mistakes made by Schrady (1967), Nahmias and Rivera (1979), and Teunter (2004) in solving their respective models, so that subsequent researchers do not repeat them. To this end, we derive the correct global-optimal formulae for the substitution-policy model (1, n), with infinite or finite recovery (repair) rate, using differential calculus, and provide a closed-form expression to identify the optimal positive integer number n of recovery set-ups. In addition, we rectify the formulae and solution procedure for numerically solving the constrained non-linear programme.
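The last step above, picking the best integer number of set-ups, follows a general pattern: when the cost is convex in n, the optimal integer is one of the two integers bracketing the continuous minimizer. The sketch below illustrates this with a hypothetical set-up/holding trade-off C(n) = K/n + h·n; it is not the corrigendum's actual cost model or formulae.

```python
import math

def optimal_integer_n(cost, n_star):
    """For a cost convex in n, the best positive integer is the floor or
    ceiling of the continuous minimizer n_star (clamped to >= 1)."""
    lo = max(1, math.floor(n_star))
    hi = max(1, math.ceil(n_star))
    return min((lo, hi), key=cost)

# Hypothetical convex set-up/holding trade-off: C(n) = K/n + h*n
K, h = 120.0, 5.0
cost = lambda n: K / n + h * n
n_star = math.sqrt(K / h)          # continuous minimizer, about 4.9
n_opt = optimal_integer_n(cost, n_star)
print(n_opt)  # 5
```

Comparing floor and ceiling explicitly matters because rounding the continuous minimizer to the nearest integer can pick the wrong side of an asymmetric cost curve.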

3.
A new algorithm is proposed for topology optimization based on a fluid dynamics analogy. It possesses characteristics similar to the most well-known methods, such as the Evolutionary Structural Optimization (ESO)/Bidirectional Evolutionary Structural Optimization (BESO) method of Xie and Steven (1993), which works with discrete values, and the Solid Isotropic Material with Penalization (SIMP) method of Bendsøe (1989) and Zhou and Rozvany (1991) (using the Optimality Criterion (OC) or the Method of Moving Asymptotes (MMA)), which works with intermediate values. The new algorithm can work with both discrete and intermediate densities, but always yields a solution with discrete densities. It can be proven mathematically that the new method is a generalization of the BESO method, and with appropriate parameters it operates exactly as the BESO method.
The new method is less sensitive to rounding errors of the matrix solver than the BESO method and is able to give alternative topologies for well-known problems. The article presents the basic idea and the optimization algorithm, and compares the results of three cantilever optimizations with the results of the SIMP and BESO methods.

4.
Photographies, 2013, 6(2): 221–238
Photography has always had a precarious relation to cultural value: as Walter Benjamin put it, those who argued for photography as an art were bringing it to a tribunal it was in the process of overthrowing. This article examines the case of Polaroid, a company and technology that, after Kodak and prior to digital, contributed most to the mass-amateurization of photography, and therefore, one might expect, to its cultural devaluation. It considers the specific properties of the technology, the often skeptical reception Polaroid cameras and film received from the professional photographic press, and Polaroid's own strategies of self-presentation, and finds that in each case a contradictory picture emerges. Like fast food, the Polaroid image is defined by its speed of appearance — the proximity of its production and consumption — and is accordingly devalued; and yet at the same time it produces a single, unique print. The professional photographic press, self-appointed arbiters of photographic value, were often rapturous about the technical breakthroughs achieved by Polaroid, but dismissive of the potential non-amateur applications and anxious about the implications for the “expert” photographer of a camera that replaced the expert's functions. For obvious marketing reasons, Polaroid itself was always keen to emphasize what the experts scorned in its products (simplicity of operation), and yet, equally, consistently positioned itself at the “luxury” end of the camera market and carried out an ambitious cultural program that emphasized the “aesthetic” potential of Polaroid photography. The article concludes that this highly ambivalent status of Polaroid technology in relation to cultural value means that it shares basic features with kitsch, a fact that has been exploited by, among others, William Wegman (1982, Man's Best Friend), and has been amplified by the current decline and imminent disappearance of Polaroid photography.

5.
Conventional superconductors are classified into two categories: type-1 when the ratio of the magnetic field penetration length ($\lambda$) to the coherence length ($\xi$) satisfies $\kappa=\lambda/\xi <1/\sqrt{2}$, and type-2 when $\kappa >1/\sqrt{2}$. The boundary case $\kappa =1/\sqrt{2}$ is also considered a special situation, frequently termed the “Bogomolnyi limit”. Here we discuss multicomponent systems, which can possess three or more fundamental length scales and allow a distinct superconducting state, recently termed “type-1.5”. In that state, a system has the following hierarchy of coherence and penetration lengths: $\xi_{1}<\sqrt{2}\lambda<\xi_{2}$. We also briefly overview work on the single-component regime $\kappa \approx 1/\sqrt{2}$ and comment on a recent discussion by Brandt and Das in the proceedings of the previous conference in this series.

6.
Ice is being used in certain deep mines to transport refrigeration to underground areas. Research has previously been carried out into the pipeline conveying characteristics of ice with air, but there remains a lack of knowledge about some aspects of this complex flow. Previous articles (Sheer, 1995; Sheer et al., 2001) have described experimental results on the pneumatic conveying of ice in particulate form (“hard” ice) through successive long horizontal and vertical sections of pipelines into mines. More recent research has been carried out to determine the conveying characteristics of “slush” ice that resembles wet snow, with an ice mass fraction in the range of 65–75%. Laboratory pneumatic conveying tests with slush ice were conducted through three horizontal plastic pipelines with inner diameters of 43, 54, and 69 mm, each pipeline being approximately 50 m long and including various bends. The tests yielded numerical and photographic data that were used to investigate the conveying characteristics of slush ice (including the flow regime transition to plug flow and pressure gradients) and to compare them with the previous results for particulate ice. It was found that the conveying characteristics of the slush depend strongly on the water content. Correlations are proposed for multiphase friction factors.

7.
I propose the index $\hbar$ (“hbar”), defined as the number of papers of an individual that have a citation count larger than or equal to the $\hbar$ of all coauthors of each paper, as a useful index, accounting for the effect of multiple authorship, to characterize the scientific output of a researcher. The bar is higher for $\hbar$.
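The definition is self-referential — whether a paper counts for one author depends on the $\hbar$ of the coauthors — so in practice $\hbar$ must be computed as a fixed point over the whole coauthorship network. The sketch below is one possible reading of the definition (an h-type threshold applied to the papers that pass the coauthor filter, iterated from zero); details such as whether an author's own $\hbar$ enters the threshold are assumptions of this sketch, not taken from the paper.

```python
def hbar_indices(papers, max_iter=50):
    """Fixed-point sketch of an hbar-style index: a paper counts toward
    an author's hbar only if its citation count is >= the current hbar
    of every one of its authors.  papers: list of (citations, [authors])."""
    authors = {a for _, aus in papers for a in aus}
    hbar = {a: 0 for a in authors}          # start from zero and iterate
    for _ in range(max_iter):
        new = {}
        for a in authors:
            # papers of a that clear every author's current hbar
            counted = sorted((c for c, aus in papers
                              if a in aus and all(c >= hbar[x] for x in aus)),
                             reverse=True)
            # h-type threshold: largest k with k counted papers >= k citations
            k = 0
            while k < len(counted) and counted[k] >= k + 1:
                k += 1
            new[a] = k
        if new == hbar:                     # converged to a fixed point
            break
        hbar = new
    return hbar

result = hbar_indices([(10, ["A"]), (8, ["A", "B"]), (2, ["A"]), (1, ["B"])])
print(sorted(result.items()))  # [('A', 2), ('B', 1)]
```

On single-authored papers the filter is vacuous and the sketch reduces to the ordinary Hirsch h-index, which is a useful sanity check.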

8.
This paper is a comment on the survey paper by Biau and Scornet (TEST, 2016, doi: 10.1007/s11749-016-0481-7) about random forests. We focus on the problem of quantifying the impact of each ingredient of random forests on their performance. We show that such a quantification is possible for a simple pure forest, leading to conclusions that could apply more generally. Then, we consider “hold-out” random forests, which are a good middle ground between “toy” pure forests and Breiman's original random forests.

9.
The expectile, first introduced by Newey and Powell (1987) in the econometrics literature, has recently become increasingly popular in risk management and capital allocation for financial institutions due to desirable properties such as coherence and elicitability. The current standard tool for expectile regression analysis is the multiple linear expectile regression proposed by Newey and Powell (1987). The growing applications of expectile regression motivate us to develop a much more flexible nonparametric multiple expectile regression in a reproducing kernel Hilbert space. The resulting estimator, called KERE, has multiple advantages over classical multiple linear expectile regression by incorporating nonlinearity, nonadditivity, and complex interactions in the final estimator. The kernel learning theory of KERE is established. We develop an efficient algorithm, inspired by the majorization–minimization principle, for solving the entire solution path of KERE, and show that it converges at least at a linear rate. Extensive simulations demonstrate the very competitive finite-sample performance of KERE. We further demonstrate the application of KERE using personal computer price data. Supplementary materials for this article are available online.
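For readers new to the estimand: the τ-expectile minimizes an asymmetric squared loss, and its first-order condition yields a simple fixed-point iteration for a sample expectile. The sketch below illustrates the asymmetric-least-squares idea only; it is not the KERE estimator or its majorization–minimization path algorithm.

```python
def expectile(xs, tau, tol=1e-10, max_iter=1000):
    """Sample tau-expectile via fixed-point iteration on the asymmetric
    least-squares first-order condition: observations above the current
    estimate get weight tau, those at or below it get weight 1 - tau."""
    e = sum(xs) / len(xs)              # tau = 0.5 gives the mean
    for _ in range(max_iter):
        w = [tau if x > e else 1.0 - tau for x in xs]
        e_new = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
        if abs(e_new - e) < tol:
            return e_new
        e = e_new
    return e

print(expectile([1.0, 2.0, 3.0, 4.0, 5.0], 0.5))  # 3.0 (the mean)
```

Because the loss is squared rather than absolute, expectiles respond to the magnitude of tail observations, not just their frequency — one reason for their appeal in risk management.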

10.
An extension of the PS model in piezoelectric materials (Gao et al., 1997) and the SEMPS model in magnetoelectroelastic (MEE) materials (Fan and Zhao, 2011) is proposed for two semi-permeable cracks in an MEE medium. It is assumed that magnetic yielding occurs at the continuation of the cracks due to the prescribed loads. We model these crack continuations as zones with a cohesive saturation-limit magnetic induction. Stroh's formalism and complex-variable techniques are used to formulate the problem. Closed-form analytical expressions are derived for various fracture parameters. A numerical case study is presented for a cracked BaTiO$_3$–CoFe$_2$O$_4$ ceramic plate.

11.
This paper considers a two-stage assembly flow shop problem with m parallel machines in the first stage and an assembly machine in the second stage. The objective is to minimise a weighted sum of makespan and mean completion time for n available jobs. As this problem is NP-hard, we employ an imperialist competitive algorithm (ICA) as the solution approach. In the literature, Torabzadeh and Zandieh (2010) showed that a cloud theory-based simulated annealing algorithm (CSA) is an appropriate meta-heuristic for this problem. Thus, to assess the capability of the ICA, we compare our proposed ICA with the reported CSA. A new parameter-tuning tool for the ICA, based on a neural network, is also introduced. The computational results show that the ICA outperforms the CSA in solution quality.

12.
Potassium nitrite is very sensitive to temperature, humidity, and the atmosphere, so few studies have addressed the thermodynamic properties of molten salt systems containing nitrites. In this article, the liquidus curves of NaCl–$\mathrm{NaNO_2}$, KCl–$\mathrm{KNO_2}$, and $\mathrm{NaNO_2}$–$\mathrm{KNO_2}$ are calculated by a simple “hard-sphere” ionic interaction model. The calculated liquidus temperatures show good agreement with experimental values, which implies an ideal mixing enthalpy and entropy for the liquid binary systems. Using the phase equilibrium data and experimental thermochemical properties of the molten salt systems, the activities of these binary systems are determined from the phase diagrams and analytical integration of the classical Gibbs–Duhem equation.
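The claim that ideal mixing reproduces the liquidus can be made concrete: for an ideal liquid solution, equating chemical potentials of the crystallizing component gives $\ln x = -(\Delta H_{\mathrm{fus}}/R)(1/T - 1/T_m)$, which solves directly for the liquidus temperature. The melting point and fusion enthalpy below are hypothetical stand-ins, not the fitted values from this article.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def ideal_liquidus_T(x, T_m, dH_fus):
    """Ideal-mixing liquidus temperature of the component crystallizing
    from a binary melt: ln(x) = -(dH_fus/R) * (1/T - 1/T_m), solved for T.
    x: liquid mole fraction, T_m: pure melting point (K),
    dH_fus: enthalpy of fusion (J/mol)."""
    return 1.0 / (1.0 / T_m - R * math.log(x) / dH_fus)

# Hypothetical salt: T_m = 554 K, dH_fus = 15 kJ/mol
print(ideal_liquidus_T(0.8, 554.0, 15000.0))  # below T_m = 554 K, as expected
```

Diluting the component (x < 1) always lowers the liquidus in this model, which is the ideal freezing-point depression the article's comparison relies on.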

13.
Marcel Ausloos, Scientometrics, 2014, 101(3): 1565–1586
Each co-author (CA) of any scientist can be given a rank \(r\) of importance according to the number \(J\) of joint publications the authors have together. In this paper, the Zipf–Mandelbrot–Pareto law, i.e. \(J \propto 1/(\nu + r)^{\zeta}\), is shown to reproduce the empirical relationship between \(J\) and \(r\), and shown to be preferable to a mere power law, \(J \propto 1/r^{\alpha}\). The CA core value, i.e. the core number of CAs, is of course unaffected. The demonstration is made on data for two authors with a high number of joint publications, recently considered by Bougrine (Scientometrics, 98(2): 1047–1064, 2014), and for seven authors, distinguishing between their “journal” and “proceedings” publications as suggested by Miskiewicz (Physica A, 392(20): 5119–5131, 2013). The rank-size statistics are discussed and the \(\alpha\) and \(\zeta\) exponents are compared. The correlation coefficient is much improved (\(\sim 0.99\), instead of 0.92). There are marked deviations from such a co-authorship popularity law depending on sub-fields. On one hand, this suggests an interpretation of the parameter \(\nu\). On the other hand, it suggests a novel model of the (likely time-dependent) structural and publishing properties of research teams. Thus, one can propose a scenario for how a research team is formed and grows. This is based on a hierarchy utility concept, justifying the empirical Zipf–Mandelbrot–Pareto law, assuming a simple form for the CA publication/cost ratio, \(c_r = c_0\,\log_2(\nu + r)\). In conclusion, such a law and model can suggest practical applications in measures of research teams. In the Appendices, the frequency-size cumulative distribution function is discussed for two sub-fields, with other technicalities.
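A minimal way to compare the two functional forms is linear regression in log–log coordinates: for a trial shift \(\nu\), the ZMP exponent \(\zeta\) is the negative slope of \(\log J\) against \(\log(\nu + r)\), and \(\nu = 0\) recovers the plain power law. The sketch below fits exact synthetic data generated from the law; it is not the author's fitting procedure or data.

```python
import math

def fit_zmp_exponent(ranks, J, nu):
    """Fit J proportional to 1/(nu + r)^zeta by ordinary least squares on
    log J = log C - zeta * log(nu + r), for a given shift nu."""
    xs = [math.log(nu + r) for r in ranks]
    ys = [math.log(j) for j in J]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope   # zeta

# Synthetic data generated exactly from the law with nu = 2, zeta = 1.5
nu, zeta, C = 2.0, 1.5, 100.0
ranks = list(range(1, 21))
J = [C / (nu + r) ** zeta for r in ranks]
print(round(fit_zmp_exponent(ranks, J, nu), 6))  # 1.5 (recovers the exponent)
```

On real co-authorship counts one would scan over candidate values of \(\nu\) and keep the one maximizing the log–log correlation coefficient, which is essentially the comparison the paper reports (0.99 versus 0.92).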

14.
On the basis of the Lee–Low–Pines unitary transformation, the influence of magnetic field and LO-phonon effects on the energy of spin-polarization states of strong-coupling bipolarons in a quantum dot (QD) is studied using the Pekar-type variational method. The variations of the ground-state energy $E_0$ and the first-excited-state energy $E_1$ of bipolarons in a two-dimensional QD with the confinement strength $\omega_0$ of the QD, the dielectric constant ratio $\eta$, the electron–phonon coupling strength $\alpha$ and the cyclotron resonance frequency $\omega_{c}$ of the magnetic field are derived when the influence of spin and the external magnetic field is taken into account. The results show that both energies ($E_0$ and $E_1$) consist of four parts: the single-particle energy of the electrons $E_\mathrm{e}$, the Coulomb interaction energy between the two electrons $E_\mathrm{c}$, the interaction energy between the electron spin and the magnetic field $E_\mathrm{S}$, and the interaction energy between the electrons and phonons $E_{\mathrm{e-ph}}$. The energy level of the first excited state $E_1$ splits into two lines, $E_1^{(1+1)}$ and $E_1^{(1-1)}$, due to the interaction between the single-particle “orbital” motion and the magnetic field, and each energy level of the ground and first excited states splits into three “fine structures” caused by the interaction between the electron spin and the magnetic field. The value of $E_{\mathrm{e-ph}}$ is always negative, and its absolute value increases with increasing $\omega_0$, $\alpha$ and $\omega_c$. The electron–phonon interaction favors the formation of the bound bipolaron, whereas the confinement potential and the Coulomb repulsion between the electrons oppose it. The bipolaron with energy $E_1^{(1-1)}$ forms a bound state more easily, and is more stable, than that with $E_1^{(1+1)}$.

15.
Platinum resistance thermometers (PRTs) are capable of providing reliable measurements at the millikelvin level and are widely used in both industry and research. However, the intrinsic thermal noise associated with their resistance requires a measurement current of typically around a milliampere to determine their resistance. Unfortunately, this same current also dissipates heat into the thermometer element, causing the well-known “self-heating” effect of typically a few millikelvins. Performing measurements in terms of the ratio to the resistance at the ice point provides some cancelation of this error around that temperature: if the thermal resistance between the sensor and its environment were constant, this cancelation would work over a much wider temperature range. However, there is little evidence on the effectiveness of this strategy in practice. This paper reports an extensive set of systematic measurements of the self-heating of six standard platinum resistance thermometers (SPRTs) and six industrial platinum resistance thermometers (IPRTs) of different designs, as a function of temperature, over the range from \(-190\,^{\circ}\mathrm{C}\) to \(420\,^{\circ}\mathrm{C}\), in a range of intercomparison baths and blocks. The measurements show that PRT self-heating varies from being almost constant with temperature to being nearly proportional to temperature. The assumption of a roughly temperature-independent thermal resistance is thus not justified in general. The results allow estimation of appropriate uncertainty terms for SPRT and IPRT self-heating for the two scenarios of “working in \(R\)” and “working in \(W\)”.
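For readers unfamiliar with the standard remedy: because the self-heating error scales with the dissipated power (proportional to \(I^2\)), measuring at two currents lets one extrapolate to zero power. With readings at currents \(I\) and \(\sqrt{2}\,I\) the correction is a one-liner; the resistance values below are hypothetical.

```python
def zero_power_resistance(r1, r2):
    """Extrapolate a PRT reading to zero measuring current.
    Self-heating raises the reading in proportion to dissipated power,
    so with readings r1 at current I and r2 at sqrt(2)*I:
        r1 = R0 + k*I**2,  r2 = R0 + 2*k*I**2  ->  R0 = 2*r1 - r2."""
    return 2.0 * r1 - r2

# Hypothetical readings: true R0 = 100.000 ohm, 1 mohm of self-heating at 1 mA
r_at_1mA    = 100.001
r_at_root2  = 100.002
print(zero_power_resistance(r_at_1mA, r_at_root2))  # ~100.000
```

The same two-current data also yield the self-heating coefficient itself (r2 − r1 per unit power), which is the quantity whose temperature dependence the paper characterizes.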

16.
It is increasingly recognized that many industrial and engineering experiments use split-plot or other multi-stratum structures. Much recent work has concentrated on finding optimum, or near-optimum, designs for estimating the fixed-effects parameters of multi-stratum designs. However, inference, such as hypothesis testing or interval estimation, is often also required, and unbiased inference in the presence of model uncertainty requires pure-error estimates of the variance components. Most optimal designs provide few, if any, pure-error degrees of freedom. Gilmour and Trinca (2012) introduced design optimality criteria for inference in the context of completely randomized and block designs. Here these criteria are used stratum-by-stratum to obtain multi-stratum designs. It is shown that these designs have better properties for performing inference than standard optimum designs. Compound criteria, which combine the inference criteria with traditional point estimation criteria, are also used, and the designs obtained are shown to compromise between point estimation and inference. Designs are obtained for two real split-plot experiments and an illustrative split–split-plot structure. Supplementary materials for this article are available online.

17.
We report the results of directional point-contact measurements in Mg(B$_{1-x}$C$_{x}$)$_{2}$ single crystals. The amplitudes of the gaps, $\Delta_{\pi}$ and $\Delta_{\sigma}$, were determined for each C content by fitting the experimental low-temperature normalized conductance curves of our “soft” point contacts with the BTK model generalized to the two-band case. We found that, with increasing carbon content, $\Delta_{\sigma}$ decreases almost linearly with $T_{c}$ and $\Delta_{\pi}$ slightly increases until, at $x=0.132$ (where $T_{c}=19$ K), they assume the same value $\Delta = 3.2 \pm 0.9$ meV. This result is confirmed by the temperature and magnetic-field dependence of the conductance curves at this C content, which show no evidence of two distinct gap values. In particular, the $\Delta$ versus $T$ curve follows a standard BCS curve very well, with a gap ratio $2\Delta/k_{B}T_{c}=3.9$. These experimental findings are compared with the theoretical predictions of the two-band model in the Eliashberg formulation.

18.
The problem of designing a water quality monitoring network for river systems is to find the optimal locations of a finite number of monitoring devices that minimize the expected detection time of a contaminant spill event while guaranteeing good detection reliability. When uncertainties in spill and rain events are considered, both the expected detection time and the detection reliability need to be estimated by stochastic simulation. This problem is formulated as a stochastic discrete optimization via simulation (OvS) problem on the expected detection time with a stochastic constraint on detection reliability, and it is solved with an OvS algorithm combined with a recently proposed method called penalty function with memory (PFM). The performance of the algorithm is tested on the Altamaha River and compared with that of the genetic algorithm of Telci, Nam, Guan, and Aral (2009).

19.
The ideal-gas heat capacity of sodium atoms in the vapor phase is calculated to high temperatures using statistical mechanics. Since there are, in principle, an infinite number of atomic energy levels, the partition function and the heat capacity will grow very large unless the summation over energy levels is constrained as temperature increases. At higher temperatures, the increasing size of the atoms, a consequence of the increased population of highly excited energy levels, is used as a mechanism for limiting the summation over energy levels. The “\(IP - kT\)” and “Bethe” procedures for cutting off the summation over energy levels are discussed, and the results obtained using the two methods are compared. In addition, although experimental information is available for the lower atomic energy levels, and some theoretical calculations are available for excited energy levels, information is lacking for most individual atomic states associated with highly excited energy levels. A “fill” procedure for approximating the energy of the unknown states is discussed. Sodium vapor is also considered as a real gas obeying the virial equation of state. The first non-ideal term in the power-series expansion of the heat capacity in terms of virial coefficients involves the second virial coefficient, \(B(T)\). This depends on the interaction potential energy between two sodium atoms, i.e., the potential energy curves of the sodium dimer. Accurate interaction potential energies can be obtained from experimental or theoretical information for the lowest ten electronic states of the sodium dimer. These are used to calculate \(B(T)\) for each state, and the averaged value of \(B(T)\) over all ten states is used to calculate the non-ideal contribution to the heat capacity of sodium atoms as a function of temperature.
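The link between the dimer potential curves and \(B(T)\) is the classical statistical-mechanics integral \(B(T) = -2\pi N_A \int_0^\infty (e^{-u(r)/kT} - 1)\, r^2\, dr\), which can be evaluated numerically for any tabulated potential. The sketch below uses a Lennard-Jones potential with assumed parameters as a stand-in for the actual Na\(_2\) curves; the well depth and size are illustrative, not the article's values.

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
N_A = 6.02214076e23       # Avogadro constant, 1/mol

def b2_virial(u, T, r_max=5e-9, n=20000):
    """Classical second virial coefficient, m^3/mol:
    B(T) = -2*pi*N_A * integral of (exp(-u(r)/kT) - 1) * r^2 dr,
    evaluated by a simple rectangle rule from ~0 to r_max (metres)."""
    dr = r_max / n
    total = 0.0
    for i in range(1, n + 1):          # skip r = 0 (integrand -> 0 there)
        r = i * dr
        total += (math.exp(-u(r) / (K_B * T)) - 1.0) * r * r * dr
    return -2.0 * math.pi * N_A * total

# Hypothetical Lennard-Jones stand-in for a dimer potential curve
eps, sigma = 200.0 * K_B, 3.5e-10      # assumed well depth and size
u_lj = lambda r: 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
print(b2_virial(u_lj, 1000.0))         # positive at this high temperature
```

Well above the Boyle temperature the repulsive core dominates and \(B(T) > 0\); at low temperatures the attractive well makes \(B(T)\) strongly negative, which is the behavior any averaged multi-state \(B(T)\) must also show.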

20.
The semi-adiabatic method, commonly referred to as the Langavant method, is widely applied for routine measurements of the hydration heat of cements. This standardized method is applicable to all cements and hydraulic binders, whatever their chemical composition, with the exception of quick-setting cements. The calorimeters used to perform these hydration heat measurements must first be calibrated by electrical substitution, in order to determine their coefficient of total heat loss \(\alpha\) and their heat capacity \(\mu\). LNE has developed a facility for calibrating these Langavant calorimeters, in order to ensure the traceability of the hydration heat measurements to basic quantities such as temperature, time, mass, and electrical quantities. Calibration results for a typical Langavant calorimeter are presented here. The measurement uncertainties of the parameters \(\alpha\) and \(\mu\) have been assessed according to the ISO/BIPM “Guide to the Expression of Uncertainty in Measurement.” The relative expanded uncertainties (\(k = 2\)) of the coefficient of total heat loss \(\alpha\) and the heat capacity \(\mu\) are estimated to be about 0.7 % and 15 %, respectively.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)