Combinatorial auctions are a useful trading mechanism for transportation service procurement in e-marketplaces. To enhance competition in combinatorial auctions, a novel auction mechanism of two-round bidding with bundling optimization is proposed. Under the proposed mechanism, the shipper/auctioneer integrates the objects into several bundles based on the bidding results of the first round; carriers/bidders then bid for the object bundles in the second round. The bundling optimization is formulated as a multi-objective model with two criteria, price complementation and combination consistency. A Quantum Evolutionary Algorithm (QEA) with a β-based rotation gate and an encoding scheme based on the non-zero elements of the complementary coefficient matrix is developed to solve the model. Compared with a contrast Genetic Algorithm, the QEA achieves better computational performance on small and medium-size problems.
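The general QEA loop behind such an approach can be sketched as follows. This is a generic Q-bit/rotation-gate scheme applied to a toy binary objective, not the paper's β-based gate or its multi-objective bundling model; the population size, rotation angle, and OneMax objective are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n, pop, gens = 20, 10, 60
dtheta = 0.05 * np.pi               # fixed rotation angle (tuning choice)

def fitness(x):
    # Toy objective (OneMax): count of ones. The paper instead scores
    # bundles on price complementation and combination consistency.
    return int(x.sum())

# Each Q-bit is an angle; P(bit = 1) = sin(theta)^2. Start in equal
# superposition so every bitstring is initially equally likely.
theta = np.full((pop, n), np.pi / 4)
best_x, best_f = None, -1

for _ in range(gens):
    # "Observe" the quantum population to obtain classical bitstrings.
    X = (rng.random((pop, n)) < np.sin(theta) ** 2).astype(int)
    for x in X:
        f = fitness(x)
        if f > best_f:
            best_f, best_x = f, x.copy()
    # Rotation gate: nudge each Q-bit toward the best solution so far,
    # keeping angles away from 0 and pi/2 to preserve some diversity.
    direction = np.where(best_x == 1, 1.0, -1.0)
    theta = np.clip(theta + dtheta * direction, 0.05, np.pi / 2 - 0.05)

print(best_f)
```

The rotation step is where problem-specific designs such as the paper's β-based gate would replace the fixed-angle update used here.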
A new spreadsheet-cell-object-oriented algorithm for the first-order reliability method is proposed and illustrated for cases with correlated nonnormals and with explicit and implicit performance functions. The new approach differs from the writers' earlier algorithm by obviating the need to compute equivalent normal means and equivalent normal standard deviations. It obtains the solution faster and is more efficient, robust, and succinct. Other advantages include ease of initialization prior to constrained optimization, ease of randomizing initial values for checking robustness, and fewer required optimization constraints during the spreadsheet-automated search for the design point. Two cases with implicit performance functions, namely an asymmetrically loaded beam on a Winkler medium and a strut with complex supports, are analyzed using the new approach and discussed. Comparisons are also made between the proposed approach and one based on the Rosenblatt transformation.
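The constrained-optimization view of FORM that such spreadsheet procedures automate can be sketched in code: the design point is the point on the limit state surface closest to the origin in standard normal space. The two-variable limit state, distributions, and parameter values below are illustrative assumptions, not the beam or strut cases of the paper.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm, norm

# Illustrative limit state g = R - S: lognormal resistance R and normal
# load S (independent here; correlation would enter through a
# transformation of u before mapping to physical variables).
def x_of_u(u):
    R = lognorm.ppf(norm.cdf(u[0]), s=0.2, scale=100.0)  # R = 100*exp(0.2*u0)
    S = 60.0 + 10.0 * u[1]                               # S ~ N(60, 10)
    return R, S

def g(u):
    R, S = x_of_u(u)
    return R - S

# FORM design point: minimize ||u||^2 subject to g(u) = 0, mirroring
# the spreadsheet's constrained-optimization search.
res = minimize(lambda u: float(np.dot(u, u)), x0=[-1.0, 1.0],
               constraints={"type": "eq", "fun": g})
beta = float(np.sqrt(res.fun))       # reliability index
pf = float(norm.cdf(-beta))          # first-order failure probability
print(beta, pf)
```

The mapping `x_of_u` plays the role that equivalent-normal transformations play in classical FORM; working directly in u-space is what lets the search avoid computing them explicitly.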
It was found that a discontinuity at the end of an impulse leads to numerical inaccuracy, since the discontinuity produces an extra impulse and thus an extra displacement in the time history analysis. This extra impulse is proportional to the value of the discontinuity at the end of the impulse and to the size of the integration time step. To overcome this difficulty, an effective approach is proposed to reduce the extra impulse and hence the extra displacement. The approach is to perform a single small time step immediately upon termination of the applied impulse, while all other time steps use the step size determined from accuracy considerations with respect to the period. The feasibility of this approach is analytically explored, and the analytical results are confirmed by numerical examples. Numerical studies also show that the approach can be applied to other step-by-step integration methods. Its only apparent disadvantage is that it slightly complicates the programming of dynamic analysis codes.
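The effect of such a remedy can be illustrated on an undamped SDOF oscillator under a rectangular impulse, integrated with the constant-average-acceleration Newmark method; the period, step size, and pulse duration below are arbitrary illustrative choices. Inserting one very small step right at the pulse termination removes most of the amplitude error caused by the load discontinuity:

```python
import numpy as np

m, k = 1.0, (2.0 * np.pi) ** 2   # SDOF oscillator, natural period T = 1 s
w = np.sqrt(k / m)
td = 0.149                        # rectangular pulse of unit amplitude ends here

def p(t):
    return 1.0 if t <= td else 0.0

def newmark_caa(times):
    """Constant-average-acceleration Newmark integration over given nodes."""
    u = v = 0.0
    a = (p(times[0]) - k * u) / m
    for t0, t1 in zip(times, times[1:]):
        h = t1 - t0
        a1 = (p(t1) - k * (u + h * v + h * h / 4 * a)) / (m + k * h * h / 4)
        u += h * v + h * h / 4 * (a + a1)
        v += h / 2 * (a + a1)
        a = a1
    return u, v

# Exact post-pulse free-vibration amplitude, for reference.
ud, vd = (1.0 - np.cos(w * td)) / k, w * np.sin(w * td) / k
amp_exact = np.hypot(ud, vd / w)

grid = np.linspace(0.0, 1.0, 21)                      # h = 0.05; no node at td
grid_fix = np.sort(np.append(grid, [td, td + 1e-6]))  # small step at pulse end

errs = {}
for label, nodes in [("plain", grid), ("small-step", grid_fix)]:
    u, v = newmark_caa(nodes)
    errs[label] = abs(np.hypot(u, v / w) - amp_exact)
print(errs)
```

On the plain grid the step straddling `td` misrepresents the impulse delivered by the load, which shows up directly as an amplitude error; the tiny step at termination confines the discontinuity to an interval whose extra impulse is negligible.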
A procedure based on Kötter's equation is developed for evaluating the bearing capacity factor Nγ with Terzaghi's mechanism. Application of Kötter's equation makes the analysis statically determinate, and the unique failure surface is identified using force equilibrium conditions. The computed Nγ values are found to be higher than Terzaghi's values by 0.25–20%, with a diverging trend for higher values of the angle of internal friction of the soil. Fairly good agreement is observed with other solutions based on the finite difference method coupled with an associated flow rule, limit analysis, and limit equilibrium. Finally, comparison with available experimental results vis-à-vis other solutions shows that the computed Nγ values provide reasonably good predictions.
Lisp and its descendants are among the most important and widely used of programming languages. At the same time, parallelism in the architecture of computer systems is becoming commonplace. There is a pressing need to extend the technology of automatic parallelization that has become available to Fortran programmers of parallel machines, to the realm of Lisp programs and symbolic computing. In this paper we present a comprehensive approach to the compilation of Scheme programs for shared-memory multiprocessors. Our strategy has two principal components: interprocedural analysis and program restructuring. We introduce procedure strings and stack configurations as a framework in which to reason about interprocedural side-effects and object lifetimes, and develop a system of interprocedural analysis, using abstract interpretation, that is used in the dependence analysis and memory management of Scheme programs. We introduce the transformations of exit-loop translation and recursion splitting to treat the control structures of iteration and recursion that arise commonly in Scheme programs. We propose an alternative representation for s-expressions that facilitates the parallel creation and access of lists. We have implemented these ideas in a parallelizing Scheme compiler and run-time system, and we complement the theory of our work with snapshots of programs during the restructuring process, and some preliminary performance results of the execution of object codes produced by the compiler. This work was supported in part by the National Science Foundation under Grant No. NSF MIP-8410110, the U.S. Department of Energy under Grant No. DE-FG02-85ER25001, the Office of Naval Research under Grant No. ONR N00014-88-K-0686, the U.S. Air Force Office of Scientific Research under Grant No. AFOSR-F49620-86-C-0136, and by a donation from the IBM Corporation.
The hydrodynamics of a two-dimensional gas–solid fluidized bed reactor were studied experimentally and computationally. Computational fluid dynamics (CFD) simulation results from a commercial CFD software package, Fluent, were compared to those obtained by experiments conducted in a fluidized bed containing spherical glass beads of 250– in diameter. A multifluid Eulerian model incorporating the kinetic theory for solid particles was applied in order to simulate the gas–solid flow. Momentum exchange coefficients were calculated using the Syamlal–O'Brien, Gidaspow, and Wen–Yu drag functions. The solid-phase kinetic energy fluctuation was characterized by varying the restitution coefficient values from 0.9 to 0.99. The modeling predictions compared reasonably well with experimental bed expansion ratio measurements and qualitative gas–solid flow patterns. Pressure drops predicted by the simulations were in relatively close agreement with experimental measurements at superficial gas velocities higher than the minimum fluidization velocity, Umf. Furthermore, the predicted instantaneous and time-averaged local voidage profiles showed similarities with the experimental results. Further experimental and modeling efforts at comparable time and space resolutions are required for the validation of CFD models for fluidized bed reactors.
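As one concrete example of the momentum-exchange closures mentioned, a common textbook form of the Wen–Yu drag correlation can be written as follows; the gas and particle properties used are arbitrary illustrative values, and the exact constants and limiting behavior should be checked against the CFD package's own documentation.

```python
def wen_yu_beta(eps_g, rho_g, mu_g, d_p, slip):
    """Wen-Yu gas-solid momentum exchange coefficient (one common form).

    eps_g: gas volume fraction; rho_g: gas density [kg/m^3];
    mu_g: gas viscosity [Pa s]; d_p: particle diameter [m];
    slip: slip velocity magnitude |u_g - u_s| [m/s].
    """
    eps_s = 1.0 - eps_g
    # Particle Reynolds number based on the gas volume fraction.
    re = eps_g * rho_g * d_p * slip / mu_g
    if re < 1000.0:
        cd = 24.0 / max(re, 1e-12) * (1.0 + 0.15 * re ** 0.687)
    else:
        cd = 0.44
    # Exchange coefficient with the eps_g^(-2.65) voidage correction.
    return 0.75 * cd * eps_s * eps_g * rho_g * slip / d_p * eps_g ** (-2.65)

# Illustrative evaluation for air and 250-micron glass beads.
beta = wen_yu_beta(eps_g=0.55, rho_g=1.2, mu_g=1.8e-5, d_p=250e-6, slip=0.3)
print(beta)
```

The Syamlal–O'Brien and Gidaspow closures mentioned in the abstract have the same role but different functional forms, which is why the choice of drag function affects the predicted bed expansion and pressure drop.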
Computing devices such as Turing machines resolve the dilemma between the necessary finitude of effective procedures and the potential infinity of a function's domain by distinguishing between a finite-state processing part, defined over finitely many representation types, and a memory sufficiently large to contain representation tokens for any of the function's arguments and values. Connectionist networks have been shown to be (at least) Turing-equivalent if provided with infinitely many nodes or infinite-precision activation values and weights. Physical computation, however, is necessarily finite.
The notion of a processing-memory system is introduced to discuss physical computing systems. Constitutive for a processing-memory system is that its causal structure supports the functional distinction between processing part and memory that is necessary for employing a type-token distinction for representations, which in turn allows representations to be the objects of computational manipulation. Moreover, the processing part realized by such systems provides a criterion of identity for the function computed and helps to define the competence and performance of a processing-memory system.
Networks, on the other hand, collapse the functional distinction between processing part and memory. Since preservation of this distinction is necessary for employing a type-token distinction for representations, connectionist information processing does not consist in the computational manipulation of representations. Moreover, since we no longer have a criterion of identity for the function computed other than the behaviour of the network itself, we are left without a competence-performance distinction for connectionist networks.
Accelerated life testing (ALT) is widely used in high-reliability product estimation to obtain relevant information about an item's performance and its failure mechanisms. To analyse the observed ALT data, reliability practitioners need to select a suitable accelerated life model based on the nature of the stress and the physics involved. A statistical model consists of (i) a lifetime distribution that represents the scatter in product life and (ii) a relationship between life and stress. In practice, several accelerated life models could be used for the same failure mode, and the choice of the best model is far from trivial. For this reason, an efficient selection procedure to discriminate between a set of competing accelerated life models is of great importance for practitioners. In this paper, accelerated life model selection is approached using the Approximate Bayesian Computation (ABC) method, with a likelihood-based approach used for comparison. To demonstrate the efficiency of the ABC method in calibrating and selecting accelerated life models, an extensive Monte Carlo simulation study is carried out using different distances to measure the discrepancy between the empirical and simulated failure-time data. The ABC algorithm is then applied to real accelerated fatigue life data in order to select the most likely model among five plausible models. It is demonstrated that the ABC method outperforms the likelihood-based approach in terms of reliability predictions, mainly at the lower percentiles that are particularly useful in reliability engineering and risk assessment applications. Moreover, it is shown that ABC can mitigate the effects of model misspecification through an appropriate choice of the distance function.
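A minimal sketch of ABC-based model selection, assuming two candidate lifetime models, uniform priors, and an order-statistic L2 distance (all illustrative choices; the paper compares five accelerated life models and several distance functions):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(42)
n_obs = 40

# Hypothetical observed failure times (generated from a Weibull model
# purely for illustration).
observed = np.sort(rng.weibull(2.0, size=n_obs) * 120.0)

# Candidate models: each draws parameters from its prior and simulates
# a failure-time sample of the same size as the data.
def sim_weibull(rng):
    shape, scale = rng.uniform(0.5, 4.0), rng.uniform(50.0, 200.0)
    return rng.weibull(shape, size=n_obs) * scale

def sim_lognormal(rng):
    mu, sigma = rng.uniform(3.0, 6.0), rng.uniform(0.2, 1.5)
    return rng.lognormal(mu, sigma, size=n_obs)

models = {"weibull": sim_weibull, "lognormal": sim_lognormal}
names = list(models)

def distance(sim, obs):
    # One possible discrepancy measure: L2 distance between order statistics.
    return float(np.linalg.norm(np.sort(sim) - obs))

# ABC rejection with a quantile threshold: draw a model uniformly,
# simulate from it, and keep the closest 5% of all draws; the kept model
# frequencies approximate posterior model probabilities.
records = []
for _ in range(4000):
    name = names[rng.integers(len(names))]
    records.append((name, distance(models[name](rng), observed)))
records.sort(key=lambda r: r[1])
kept = Counter(name for name, _ in records[:200])
posterior = {m: kept[m] / 200 for m in names}
print(posterior)
```

Swapping `distance` for another discrepancy measure is exactly the lever the abstract mentions for mitigating model misspecification.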