Similar Literature — 20 results found
1.
Johannes Kepler's system of mathematical archetypes played a primary role in his physical cosmology. Identified as the geometrical models making up the metaphysical blueprint of the material world, Kepler's archetypes underlay every aspect of his world picture. Despite their importance, however, it has remained unclear how Kepler conceived of the archetypes in corporeal terms, that is, how he saw archetypes as being embodied in the form of material phenomena. Kepler's solution, I suggest, is an efficient cause, a facultas animalis, or animate faculty, pervading both the celestial and the terrestrial realms. In addition to its ability to realise the archetypes in their physical form, the animate faculty allowed Kepler to account for heavenly and earthly occurrences in terms of the same geometrical principles. Faraway phenomena such as comets and new stars could thus be seen as essentially comparable to more accessible curiosities on the Earth.

2.
This paper presents a secure key-based (k, n) threshold cryptography scheme in which both the key and the secret data are shared among a set of participants. In this scheme, each share has some bytes missing, and these missing bytes can be recovered from a set of exactly k shares, but not fewer than k. Each share therefore contains partial secret information, which is additionally encrypted by a DNA encoding scheme. The paper also introduces a concept in which the information size of each share varies: each share contains a different percentage of the secret information, and this percentage depends strongly on the key. Key sensitivity analysis and statistical analysis demonstrate robustness against different cryptanalysis attacks, ensuring high acceptability for information security requirements.
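The (k, n) recovery property the abstract relies on can be illustrated with the classic Shamir polynomial scheme over a prime field — a minimal sketch only; the paper's own construction (key-dependent missing bytes plus DNA encoding) is different and is not reproduced here.

```python
# Minimal (k, n) threshold sketch using Shamir's classic polynomial
# scheme over a prime field -- for illustration only; the paper's own
# construction (key-dependent missing bytes + DNA encoding) differs.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for small secrets

def split(secret, k, n):
    """Create n shares; any k of them reconstruct the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):        # Horner evaluation of the polynomial
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of 5 shares suffice
```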

3.
In this paper we consider permutation flow shop scheduling problems with batch setup times. Each job has to be processed on each machine once and the technological routes are identical for all jobs. The set of jobs is divided into groups. There are given processing times t_ij of job i on machine j and setup times s_rj on machine j when a job of the r-th group is processed after a job of another group. It is assumed that the same job order has to be chosen on each machine. We consider both the problem of minimizing the makespan and that of minimizing the sum of completion times, where batch or item availability of the jobs is assumed. For these problems we give various constructive and iterative algorithms. The constructive algorithms are based on insertion techniques combined with beam search. We introduce suitable neighbourhood structures for such problems with batch setup times and describe iterative algorithms that are based on local search and reinsertion techniques. The developed algorithms have been tested on a large collection of problems with up to 80 jobs. Supported by Deutsche Forschungsgemeinschaft (Project ScheMA) and by the International Association for the Promotion of Cooperation with Scientists from the Independent States of the Former Soviet Union (Project INTAS-93-257).
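As a sketch of the objective being minimized, the function below computes the makespan of a fixed permutation when a setup is incurred on each machine whenever the job's group changes. The variable names, the non-anticipatory setup model, and the item-availability assumption are mine, not the paper's exact formulation.

```python
# Sketch: makespan of a fixed job permutation in a flow shop where a
# setup s[r][j] is incurred on machine j whenever a job of group r
# follows a job of a different group (names are illustrative).
def makespan(perm, t, group, s, m):
    """perm: job order; t[i][j]: processing time of job i on machine j;
    group[i]: the job's group; s[r][j]: setup time of group r on machine j."""
    C = [0.0] * m                       # completion of the last job per machine
    prev_group = None
    for i in perm:
        for j in range(m):
            start = C[j] if j == 0 else max(C[j], C[j - 1])
            if group[i] != prev_group:  # a new batch begins: add the setup
                start += s[group[i]][j]
            C[j] = start + t[i][j]
        prev_group = group[i]
    return C[-1]

t = [[3, 2], [2, 4]]                    # two jobs, two machines
s = [[1, 1], [2, 2]]                    # s[r][j]
print(makespan([0, 1], t, group=[0, 1], s=s, m=2))   # -> 14.0
```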

4.
ABSTRACT

The paper analyzes the designer portfolio configurations employed by firms in design-intensive industries to implement different product design strategies. Using the fuzzy-set qualitative comparative analysis methodology, the paper explores how decorative lamp manufacturers that first adopted the new LED technology assembled their designer portfolios. The study shows that, in the early phases of LED lamps, firms adopted four different designer portfolio archetypes, two of them related to a product language divergence strategy and two to a product language convergence strategy: the international design-star archetype, the crowd design-innovator archetype, the local ambassador archetype, and the international bridge archetype. These four archetypes are discussed, contributing to a better understanding of the relationship between product design strategies and the management of designers in design-intensive industries.

5.
In the field of resource-constrained scheduling, the literature has mainly focused on minimizing the maximum completion time of a set of tasks while respecting the maximum simultaneous availability of each resource type in the system. This work, instead, considers the issues of balancing resource usage and minimizing the peak of the resources allocated at any time in the schedule, while keeping the makespan low. To this aim we propose a local search algorithm which acts as a multi-start greedy heuristic. Extensive experiments on various randomly generated test instances are provided. Furthermore, we present a comparison with lower bounds and known heuristics.

6.
Both principal components analysis (PCA) and orthogonal regression deal with finding a p-dimensional linear manifold minimizing a scale of the orthogonal distances of the m-dimensional data points to the manifold. The main conceptual difference is that in PCA p is estimated from the data, to attain a small proportion of unexplained variability, whereas in orthogonal regression p equals m − 1. The two main approaches to robust PCA are using the eigenvectors of a robust covariance matrix and searching for the projections that maximize or minimize a robust (univariate) dispersion measure. This article is more akin to the second approach. But rather than finding the components one by one, we directly undertake the problem of finding, for a given p, a p-dimensional linear manifold minimizing a robust scale of the orthogonal distances of the data points to the manifold. The scale may be either a smooth M-scale or a "trimmed" scale. An iterative algorithm is developed that is shown to converge to a local minimum. A strategy based on random search is used to approximate a global minimum. The procedure is shown to be faster than other high-breakdown-point competitors, especially for large m. The case where p = m − 1 yields orthogonal regression. For PCA, a computationally efficient method to choose p is given. Comparisons based on both simulated and real data show that the proposed procedure is more robust than its competitors.
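A crude, illustrative stand-in for the idea (not the authors' iteration): alternate classical PCA refits with a trimming step that keeps the h points closest to the current manifold, restarted from random subsamples. The parameters alpha, starts, and iters are arbitrary choices of mine.

```python
# Sketch of the idea: find a p-dimensional linear manifold minimizing a
# trimmed scale of orthogonal distances, via random restarts and
# repeated "fit on the h closest points" trimming steps.
import numpy as np

def trimmed_pca(X, p, alpha=0.5, starts=20, iters=20, seed=None):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    h = int(np.ceil(alpha * n))              # points kept by the trimmed scale
    best = (np.inf, None, None)
    for _ in range(starts):
        idx = rng.choice(n, size=max(p + 1, m + 1), replace=False)
        for _ in range(iters):
            mu = X[idx].mean(axis=0)
            # manifold basis: top-p right singular vectors of the kept points
            _, _, Vt = np.linalg.svd(X[idx] - mu, full_matrices=False)
            B = Vt[:p]
            R = (X - mu) - (X - mu) @ B.T @ B    # orthogonal residuals
            d = np.linalg.norm(R, axis=1)
            idx = np.argsort(d)[:h]              # re-trim: keep h closest points
        scale = np.sqrt(np.mean(np.sort(d)[:h] ** 2))   # trimmed scale
        if scale < best[0]:
            best = (scale, mu, B)
    return best   # (trimmed scale, center, basis rows)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(95, 5)), 10 + np.zeros((5, 5))])  # 5 outliers
scale, mu, B = trimmed_pca(X, p=2, seed=0)
print(scale)
```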

7.
A simple non-iterative procedure is described for obtaining missing value estimates by solving a set of simultaneous linear equations that can be written directly. The method is derived for the two-way crossed classification and results for the P-way crossed classification are also given. Following the procedure of Yates (1933), estimates of the missing values are obtained such that their residuals are zero, thereby minimizing the error mean square. For the P-way crossed classifications the error mean square may be a pooled mean square of higher ordered interactions. The procedure is applicable to other designs as well.
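For the simplest instance — one missing cell in an r × c two-way layout without replication — the estimate that zeroes the cell's residual has the well-known closed form below; the paper's general method sets up the analogous simultaneous linear equations for several missing values at once.

```python
# Classic single-missing-value case (Yates, 1933) in an r x c two-way
# layout: the estimate makes the cell's residual zero.
import numpy as np

def yates_single(Y, i, j):
    """Y: r x c array with np.nan at (i, j); returns the Yates estimate."""
    r, c = Y.shape
    R = np.nansum(Y[i, :])      # row total of observed values
    C = np.nansum(Y[:, j])      # column total of observed values
    G = np.nansum(Y)            # grand total of observed values
    return (r * R + c * C - G) / ((r - 1) * (c - 1))

Y = np.array([[10., 12., 11.],
              [ 9., np.nan, 10.],
              [11., 13., 12.]])
print(yates_single(Y, 1, 1))    # -> 11.0
```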

8.
This paper addresses a method for determining the optimum maintenance schedule for components in the wear-out phase. The interval between maintenance actions for the components is optimized by minimizing the total cost, which consists of maintenance cost, operational cost, downtime cost and penalty cost. A decision to replace a component must also be taken when a component cannot attain the minimum reliability and availability index requirement. Premium Solver Platform, a spreadsheet-modeling tool, is utilized to model the optimization problem. Constraints, the considerations to be fulfilled, guide this process; a minimum and a maximum value are set on each constraint so as to define the working area of the optimization. The optimization process investigates n equally spaced maintenance actions at an interval Tr. The increase in operational and maintenance costs due to the deterioration of the components is taken into account. This paper also performs a case study and sensitivity analysis on a liquid ring primer of a ship's bilge system.

9.
This work suggests a maximizing-set and minimizing-set based fuzzy multiple criteria decision-making (MCDM) model, where criteria are classified into cost and benefit criteria. The final fuzzy evaluation value of each alternative is developed based on the concept of subtracting the summation of weighted normalized benefit ratings from that of weighted normalized cost ratings. Using interval arithmetic of fuzzy numbers, the membership functions for the final fuzzy evaluation values can be developed. Chen's maximizing-set and minimizing-set method is then applied to defuzzify all the final fuzzy numbers for ranking the alternatives. Formulas for the membership functions and the ranking procedure of the final fuzzy numbers are clearly presented. The suggested method provides an extension to the fuzzy MCDM techniques available. A numerical example demonstrates the computational process of the proposed method.
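As a sketch of the final defuzzification step, the snippet below computes Chen-style maximizing-set/minimizing-set utilities numerically for triangular fuzzy numbers with exponent k = 1; the grid-based sup-min evaluation and the triangular shape are simplifying assumptions, not the paper's exact formulas.

```python
# Sketch of Chen's maximizing-set / minimizing-set ranking for triangular
# fuzzy numbers (a, b, c), computed numerically by sup-min on a grid.
import numpy as np

def chen_ranking(tfns, grid=10001):
    a = np.array(tfns, dtype=float)            # rows: (a, b, c)
    xmin, xmax = a[:, 0].min(), a[:, 2].max()
    x = np.linspace(xmin, xmax, grid)
    f_max = (x - xmin) / (xmax - xmin)         # maximizing set, k = 1
    f_min = (xmax - x) / (xmax - xmin)         # minimizing set, k = 1
    scores = []
    for lo, mid, hi in a:
        up = np.clip((x - lo) / max(mid - lo, 1e-12), 0, None)
        dn = np.clip((hi - x) / max(hi - mid, 1e-12), 0, None)
        mu = np.clip(np.minimum(up, dn), 0, 1)  # triangular membership
        UR = np.max(np.minimum(mu, f_max))      # right utility
        UL = np.max(np.minimum(mu, f_min))      # left utility
        scores.append((UR + 1 - UL) / 2)        # total utility: rank by this
    return scores

print(chen_ranking([(2, 4, 6), (3, 5, 7), (1, 3, 9)]))
```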

10.
This paper presents a heuristic for solving a single machine scheduling problem with the objective of minimizing the total absolute deviation. Each job to be scheduled on the machine has a processing time, p_i, and a preferred due date, d_i. The total absolute deviation is defined as the sum of the earliness or tardiness of each job in a schedule S. This problem was proved NP-complete by Garey et al. [8]. As a result, we developed a two-phase procedure to provide a near-optimal solution to this problem. The two-phase procedure includes the following steps: first, a greedy heuristic is applied to the set of jobs, N, to generate a "good" initial sequence. According to this initial sequence, we run Garey's local optimization algorithm to provide an initial schedule. Then, a pairwise switching algorithm is adopted to further reduce the total deviation of the schedule. The effectiveness of the two-phase procedure is empirically evaluated; the solutions obtained from this heuristic procedure are often better than those of other heuristic approaches.
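A minimal sketch of the final pairwise switching phase (the greedy first phase and Garey's local optimization are omitted); jobs are assumed to be processed back-to-back from time zero with no inserted idle time, which is a simplification.

```python
# Sketch of the pairwise-switch local search reducing the total absolute
# deviation sum(|C_i - d_i|) of a single-machine sequence.
def total_deviation(seq, p, d):
    t, dev = 0, 0
    for i in seq:
        t += p[i]                    # completion time, jobs back-to-back
        dev += abs(t - d[i])
    return dev

def pairwise_switch(seq, p, d):
    seq = list(seq)
    best = total_deviation(seq, p, d)
    improved = True
    while improved:                  # repeat until no switch helps
        improved = False
        for i in range(len(seq) - 1):
            for j in range(i + 1, len(seq)):
                seq[i], seq[j] = seq[j], seq[i]
                val = total_deviation(seq, p, d)
                if val < best:
                    best, improved = val, True
                else:
                    seq[i], seq[j] = seq[j], seq[i]   # undo the switch
    return seq, best

p = {0: 3, 1: 1, 2: 2}
d = {0: 3, 1: 6, 2: 4}
print(pairwise_switch([0, 1, 2], p, d))   # -> ([0, 2, 1], 1)
```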

11.
Regression models of the forms proposed by Scheffé and by Becker have been widely and usefully applied to describe the response surfaces of mixture systems. These models do not contain a constant term. It has been common practice to test the statistical significance of these mixture models by the same statistical procedures used for other regression models whose constant term is absent (e.g., because the regression must pass through the origin). In this paper we show that the common practice produces misleading results for mixtures. The mixture models require a different set of F, R^2, and R_A^2 statistics. The correct mixture statistics correspond to a physically consistent null hypothesis and are also consistent with the expression of the mixture model in the older "slack-variable" form. An illustrative example is included.

12.
Hojo, Hiroshi. Behaviormetrika, 1990, 17(27): 47–57.

A generalized version of Takane and Carroll's (1981; Psychometrika, 46, 389–405) scaling model is developed to analyze rank order data. This model assigns to each stimulus n (the number of stimuli) different scale values, each of which is utilized at its respective occasion of successive first choice in the ranking task. The n scale values assigned to a stimulus are predicted by using the cumulative binomial distribution function for an assumed set of n−1 Bernoulli trials of a simple judgment about the same attribute of that stimulus as is to be scaled from the ranking data. Examples of the application of the model are reported. They demonstrate that the generalized model can account for all the data sets examined better than the original one.


13.
This paper describes a novel application of singular value decomposition (SVD) of subsets of the phase-space trajectory for calculation of the attractor dimension of a small data set. A certain number of local centres (M) are chosen randomly on the attractor and an adequate number of nearest neighbours (q = 50) are ordered around each centre. The local intrinsic dimension at a local centre is determined by the number of significant singular values, and the attractor dimension (D_2) by the average of the local intrinsic dimensions over the local centres. The SVD method has been evaluated for model data and EEG. The results indicate that the SVD method is a reliable approach for estimation of attractor dimension at moderate signal-to-noise ratios. The paper emphasises the importance of the SVD approach to EEG analysis.
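A compact sketch of the procedure as described — delay embedding, M random centres, q nearest neighbours, and a count of significant singular values per centre. The significance threshold frac below is an assumption; the paper's criterion is not reproduced here.

```python
# Sketch of the local-SVD dimension estimate: embed the series, pick M
# random centres, take the q nearest neighbours of each, and count the
# singular values above a noise-floor fraction of the largest one.
import numpy as np

def local_svd_dimension(x, emb_dim=10, lag=1, M=50, q=50, frac=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x) - (emb_dim - 1) * lag
    # delay-embedded trajectory matrix, one m-dim point per row
    Y = np.column_stack([x[i * lag : i * lag + n] for i in range(emb_dim)])
    dims = []
    for c in rng.choice(n, size=M, replace=False):
        dist = np.linalg.norm(Y - Y[c], axis=1)
        nbrs = Y[np.argsort(dist)[: q + 1]]        # centre + q neighbours
        sv = np.linalg.svd(nbrs - nbrs.mean(axis=0), compute_uv=False)
        dims.append(np.sum(sv > frac * sv[0]))     # significant singular values
    return float(np.mean(dims))                    # average local dimension

t = np.arange(5000) * 0.01
print(local_svd_dimension(np.sin(t)))   # limit cycle: low dimension expected
```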

14.
Through the use of a "predictor-corrector" equation an experimenter may quickly determine the least squares estimates of all the coefficients in a polynomial model after the conclusion of each run, given that an initial set of orthogonal estimates of the coefficients is available. The analysis of variance may similarly be constructed at the conclusion of each added run. The predictor-corrector equation is subject to mild restrictions which are fully met in the usual application of the 2^k and 2^(k−p) factorial designs. Adaptation of the predictor-corrector equation to the case of complementary blocks of two runs each is described. Examples are given.
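The paper's predictor-corrector equation is not reproduced here; the standard recursive least-squares update (via the Sherman-Morrison identity), sketched below, serves the same purpose of revising all coefficient estimates after each added run without refitting from scratch.

```python
# Not the paper's exact predictor-corrector form; this is the standard
# recursive least-squares update that likewise revises all coefficients
# after each added run.
import numpy as np

def rls_update(beta, P, x_new, y_new):
    """beta: current LS estimates; P = (X'X)^-1; (x_new, y_new): added run."""
    x = x_new.reshape(-1, 1)
    Px = P @ x
    gain = Px / (1.0 + x.T @ Px)            # weight given to the new run
    resid = y_new - (x.T @ beta).item()     # prediction error at the new run
    beta = beta + gain * resid              # "corrector" step on all coefficients
    P = P - gain @ Px.T                     # update (X'X)^-1 without re-inversion
    return beta, P

# Start from an orthogonal 2^2 design (intercept + two factors), add a run:
X = np.array([[1, -1, -1], [1, 1, -1], [1, -1, 1], [1, 1, 1.]])
y = np.array([[3.], [5.], [4.], [8.]])
P = np.linalg.inv(X.T @ X)
beta = P @ X.T @ y
beta, P = rls_update(beta, P, np.array([1., 0., 0.]), 5.2)
print(beta.ravel())
```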

15.
This paper presents a cost-effectiveness analysis of explicit Finite Difference (FD) methods for the numerical integration of the wave equation. Formal notions of computational cost (expressed in floating point operations) and numerical dispersion error are introduced. Restricting the analysis to leapfrog time-marching, for the sake of simplicity, various spatial discrete differentiators are examined. For each scheme, by minimizing the cost at a given error threshold, a cost-effective operating point (time sampling rate and number of gridpoints per shortest wavelength) is obtained, which is remarkably different from the stability limit. Different schemes, each operated at its cost-effective point, are then compared. High-order dispersion-bounded operators, in the sense of Holberg [1], are found to be competitive with the pseudo-spectral (PS) method. New optimal schemes improving on Holberg's spatial differentiators are introduced, together with accurate expansions of the convolutional weights in terms of the design error threshold. It is also shown that the composition of two distinct Holberg operators of consecutive orders, with opposite phase properties, minimizes dispersion and yields cost-effective schemes. Numerical experiments illustrate the suitability of the new methods for large-scale wave-equation seismic modelling.

16.
A new evaluation of the thermodynamic properties of tungsten has been made. A set of parameters describing the Gibbs energy of each individual phase as a function of temperature and pressure is given. The experimental information on the P, T phase diagram and the thermodynamic data are compared with calculations made using the presented set of parameters.

17.
This article presents and develops a genetic algorithm (GA) to generate D-efficient designs for mixture-process variable experiments. It is assumed that the levels of a process variable are controlled during the process. The GA approach searches design points from a set of possible points over a continuous region and works without having a finite user-defined candidate set. We compare the performance of designs generated by the GA with designs generated by two exchange algorithms (DETMAX and k-exchange) in terms of D-efficiencies and fraction of design space (FDS) plots, which are used to evaluate a design's prediction variance properties. To illustrate the methodology, examples involving three and four mixture components and one process variable are proposed for creating the optimal designs. The results show that GA designs have superior prediction variance properties in comparison with the DETMAX and k-exchange algorithm designs when the design space is the simplex or is a highly constrained subspace of the simplex.
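As a sketch of the D-criterion being maximized, the snippet below performs a naive random point exchange over a continuous, candidate-free region for an illustrative model with three mixture components and one process variable; the model terms are my choice and the paper's GA operators are not reproduced.

```python
# Tiny sketch: random point exchange maximizing det(X'X) over a
# continuous region (no finite candidate set), for an illustrative
# mixture + process-variable model.
import numpy as np

def model_row(x1, x2, x3, z):
    # illustrative terms: 3 mixture components, one interaction, one pv term
    return np.array([x1, x2, x3, x1 * x2, z])

def random_point(rng):
    x1, x2, x3 = rng.dirichlet([1.0, 1.0, 1.0])   # a point on the simplex
    z = rng.choice([-1.0, 1.0])                   # controlled pv level
    return x1, x2, x3, z

def d_optimize(n_runs=8, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    pts = [random_point(rng) for _ in range(n_runs)]
    X = np.array([model_row(*p) for p in pts])
    best = np.linalg.det(X.T @ X)
    for _ in range(iters):
        i = rng.integers(n_runs)                  # run to try replacing
        cand = random_point(rng)
        Xc = X.copy()
        Xc[i] = model_row(*cand)
        d = np.linalg.det(Xc.T @ Xc)
        if d > best:                              # keep exchanges raising |X'X|
            X, best, pts[i] = Xc, d, cand
    return pts, best

pts, det_xtx = d_optimize()
print(det_xtx)
```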

18.
This paper presents a p-version least-squares finite element formulation for unsteady fluid dynamics problems where the effects of space and time are coupled. The dimensionless form of the differential equations describing the problem is first cast into a set of first-order differential equations by introducing auxiliary variables. This permits the use of C^0 element approximation. The element properties are derived by utilizing p-version approximation functions in both space and time and then minimizing the error functional given by the space–time integral of the sum of squares of the errors resulting from the set of first-order differential equations. This results in a true space–time coupled least-squares minimization procedure. A time marching procedure is developed in which the solution for the current time step provides the initial conditions for the next time step. The space–time coupled p-version approximation functions provide the ability to control truncation error which, in turn, permits very large time steps. What literally requires hundreds of time steps in uncoupled conventional time marching procedures can be accomplished in a single time step using the present space–time coupled approach. For non-linear problems the non-linear algebraic equations resulting from the least-squares process are solved using Newton's method with a line search. This procedure results in a symmetric Hessian matrix. Equilibrium iterations are carried out for each time step until the error functional and each component of the gradient of the error functional with respect to nodal degrees of freedom are below a certain prespecified tolerance. The generality, success and superiority of the present formulation procedure are demonstrated by presenting specific formulations and examples for the advection–diffusion and Burgers equations. The results are compared with the analytical solutions and those reported in the literature. The formulation presented here is ideally suited for space–time adaptive procedures. The element error functional values provide a mechanism for adaptive h, p or hp refinements. The work presented in this paper provides the basis for the extension of the space–time coupled least-squares minimization concept to two- and three-dimensional unsteady fluid flow.

19.
Least-squares estimation (LSE) based on the Weibull probability plot (WPP) is the most basic method for estimating the Weibull parameters. The common procedure is to use the least-squares regression of Y on X, i.e. minimizing the sum of squares of the vertical residuals, to fit a straight line to the data points on the WPP and then calculate the LS estimators. This method is known to be biased. In the existing literature the least-squares regression of X on Y, i.e. minimizing the sum of squares of the horizontal residuals, has also been used by Weibull researchers. This motivated us to carry out a comparison between the estimators of the two LS regression methods using intensive Monte Carlo simulations. Both complete and censored data are examined. Surprisingly, the results show that LS Y on X performs better for small, complete samples, while LS X on Y performs better in the other cases in terms of estimator bias. The two methods are also compared in terms of other model statistics. In general, when the shape parameter is less than one, LS Y on X provides a better model; otherwise, LS X on Y tends to be better.
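The two regressions being compared are easy to state concretely: on the WPP, X = ln t_(i) and Y = ln(−ln(1 − F_i)). The sketch below fits both directions using median-rank plotting positions for complete data (the plotting-position choice is an assumption, not necessarily the paper's).

```python
# Sketch of the two regressions on the Weibull probability plot:
# "Y on X" minimizes vertical residuals; "X on Y" minimizes horizontal ones.
import numpy as np

def weibull_lse(t):
    t = np.sort(np.asarray(t, dtype=float))
    n = len(t)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank positions
    X, Y = np.log(t), np.log(-np.log(1.0 - F))

    # LS Y on X: Y = b*X + a  ->  shape = b, scale = exp(-a/b)
    b, a = np.polyfit(X, Y, 1)
    yx = (b, np.exp(-a / b))

    # LS X on Y: X = b'*Y + a' ->  shape = 1/b', scale = exp(a')
    b2, a2 = np.polyfit(Y, X, 1)
    xy = (1.0 / b2, np.exp(a2))
    return yx, xy      # (shape, scale) estimates from each regression

rng = np.random.default_rng(1)
sample = rng.weibull(1.5, size=20) * 100.0   # true shape 1.5, scale 100
print(weibull_lse(sample))
```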

20.
This work was carried out to determine the solubility, solution thermodynamics, solvation behavior, and molecular interactions of a natural compound, ferulic acid (FLA), in different polyethylene glycol-400 (PEG-400) + water binary solvent mixtures at T = 298.2 K to 318.2 K and p = 0.1 MPa. The mole fraction solubilities (x_e) of FLA were determined by a liquid chromatographic technique using a static equilibrium method. The obtained solubility data of FLA were regressed using the Van't Hoff, Apelblat, Yalkowsky-Roseman and Jouyban-Acree models. The solubility of FLA (expressed in mole fraction) was enhanced with elevation in absolute temperature in each PEG-400 + water binary solvent mixture evaluated. The maximum x_e value of FLA was recorded in neat PEG-400 (1.94 × 10^−1) at T = 318.2 K, while the minimum was obtained in neat water (4.90 × 10^−5) at T = 298.2 K. The molecular interactions between FLA and PEG-400 and between FLA and water were studied by determining the activity coefficients of FLA in the different PEG-400 + water binary solvent mixtures. The activity coefficient data recorded in this work suggested stronger molecular interactions in FLA-PEG-400 than in FLA-water. Apparent thermodynamic analysis suggested an endothermic and entropy-driven dissolution of FLA in each PEG-400 + water binary solvent mixture investigated.
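As a sketch of the simplest of the four correlations, the two-parameter Van't Hoff model ln x_e = a + b/T can be fit by ordinary least squares, and its slope gives the apparent dissolution enthalpy. All values below except the reported minimum solubility (4.90 × 10^−5 at 298.2 K in water) are made-up placeholders.

```python
# Sketch: fit the Van't Hoff model ln(x_e) = a + b/T by least squares.
# Solubility values beyond the first are illustrative placeholders.
import numpy as np

T = np.array([298.2, 303.2, 308.2, 313.2, 318.2])          # K
xe = np.array([4.9e-5, 6.1e-5, 7.6e-5, 9.4e-5, 1.15e-4])   # mole fraction

b, a = np.polyfit(1.0 / T, np.log(xe), 1)                  # slope b, intercept a
print(f"ln(xe) = {a:.3f} + {b:.1f}/T")

# Apparent dissolution enthalpy from the Van't Hoff slope (R = 8.314 J/mol/K):
# ln(x) = -dH/(R*T) + dS/R, so dH = -R * b; a positive dH is endothermic.
print("Delta H_app =", -8.314 * b, "J/mol")
```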
