Similar Documents
 20 similar documents found (search time: 31 ms)
1.
The quest for optimum solutions to engineering problems has existed for a long time. In the last decade, several optimization techniques have been applied to the structural design of composite wing structures. Many of these procedures have dealt with disciplines such as aerodynamics, structures, or dynamics separately. However, an aeronautical design process is multidisciplinary, since it involves strong couplings and interactions among, for instance, aerodynamics, dynamics, flight mechanics, and structures. The main problem in multidisciplinary aircraft design is usually the development of an efficient method to integrate structures or structural properties, which can be considered both as "global" and as "local" design variables. This paper describes an integrated aerodynamic/dynamic/structural optimization procedure for a composite wing-box design. The procedure combines an aeroelastic optimization of a composite wing based on a general-purpose optimizer, Sequential Quadratic Programming (SQP), with a composite optimization using a Genetic Algorithm (GA). Both optimizations are implemented through a hybrid multilevel decomposition technique.
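The GA side of such a hybrid procedure can be illustrated with a minimal sketch. Everything here is hypothetical (a toy discrete ply-angle set and a target angle mix as the fitness); the paper's actual objective involves aeroelastic constraints and couples to the SQP level, which is not reproduced.

```python
import random

# Hypothetical toy problem: pick ply angles from a discrete set so that the
# stacking sequence matches a target mix of angles.
ANGLES = [0, 45, -45, 90]
N_PLIES = 8
TARGET = {0: 2, 45: 2, -45: 2, 90: 2}  # desired count of each ply angle

def fitness(seq):
    # Higher is better: negative total deviation from the target ply counts.
    return -sum(abs(seq.count(a) - TARGET[a]) for a in ANGLES)

def genetic_algorithm(pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(ANGLES) for _ in range(N_PLIES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_PLIES)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.2:              # occasional mutation
                child[rng.randrange(N_PLIES)] = rng.choice(ANGLES)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = genetic_algorithm()
```

GAs suit the discrete, combinatorial stacking-sequence level, while gradient-based SQP suits the continuous aeroelastic level; the decomposition in the paper exploits exactly that division of labor.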

2.
In recent years, on-demand transport systems (such as demand-bus systems) have attracted attention as a new transport service in Japan. An on-demand vehicle visits pick-up and delivery points door-to-door as requests occur. This service can be regarded as a cooperative (or competitive) profit problem among transport vehicles, so decision-making is an important factor for the profits of vehicles (i.e., drivers). However, it is difficult to find an optimal solution to the problem because of uncertain risks, e.g., the occurrence probability of requests and the selfishness of rival vehicles. This paper therefore proposes a transport policy for on-demand vehicles to control these uncertain risks. First, we classify the profit of vehicles into "assured profit" and "potential profit". Second, we propose a "profit policy" and a "selection policy" based on this classification of profits. The selection policy is further classified into "greed", "mixed", "competitive", and "cooperative". These selection policies are represented by selection probabilities over the next visit points, used to cooperate or compete with other vehicles. Finally, we report simulation results and analyze the effectiveness of the proposed policies.
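One way to realize such selection probabilities is a softmax over expected profit per candidate point, where the assured/potential split and the request probabilities enter the expectation. This is a sketch of the general idea, not the paper's specific policies; the point names, profit numbers, and the temperature knob are all illustrative assumptions.

```python
import math

def selection_probabilities(points, p_request, temperature=1.0):
    """Softmax selection over candidate next visit points (hypothetical model).

    points: name -> (assured_profit, potential_profit). Potential profit is
    discounted by the probability that the request actually occurs. A low
    temperature approximates a greedy policy; a high one gives a softer mix.
    """
    expected = {p: assured + p_request[p] * potential
                for p, (assured, potential) in points.items()}
    weights = {p: math.exp(v / temperature) for p, v in expected.items()}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

# Two candidate points: A has mostly assured profit, B mostly potential profit.
points = {"A": (5.0, 1.0), "B": (1.0, 8.0)}
p_request = {"A": 0.9, "B": 0.5}
probs = selection_probabilities(points, p_request)
```

With these numbers the expected profits are 5.9 for A and 5.0 for B, so a rational vehicle slightly prefers the assured point; raising the temperature pushes the probabilities toward an even, more cooperative split.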

3.
In this paper we deal with the computational complexity of checking the coherence of a partial probability assessment (CPA). The CPA problem, like the analogous PSAT, is NP-complete, so we look for a heuristic procedure that makes reasonable instances of the problem tractable. Starting from the characteristic feature of de Finetti's approach (i.e., the explicit distinction between the probabilistic assessment and the logical relations among the sentences), we introduce several rules for a sequential "elimination" of Boolean variables from the domain of the assessment. The procedure resembles the well-known Davis-Putnam rules for satisfiability; however, as a drawback, it introduces constraints (among real variables) whose satisfiability must be checked. On simple examples we test the efficiency of the procedure with respect to the "traditional" approach of solving a linear system with a huge coefficient matrix built from the atoms generated by the domain of the assessment.

4.
This paper proposes an adaptive neural network control method for a class of perturbed strict-feedback nonlinear systems with unknown time delays. Radial basis function neural networks are used to approximate unknown intermediate control signals. By constructing appropriate Lyapunov-Krasovskii functionals, the unknown time-delay terms are compensated. The dynamic surface control technique is used to overcome the problem of "explosion of complexity" in the backstepping design procedure. In addition, the semiglobal uniform ultimate boundedness of all signals in the closed-loop system is proved. A main advantage of the proposed controller is that the problems of the "curse of dimensionality" and the "explosion of complexity" are both avoided. Finally, simulation results are presented to demonstrate the effectiveness of the approach.

5.
In this paper we use recent advances in approximate dynamic programming to develop an approximate policy optimization procedure that uses Monte Carlo simulations for the numerical solution of dynamic optimization problems in economics. The procedure is applied to the classical problem of "learning by doing" in regression models, for which the value and extent of active experimentation are demonstrated in a variety of numerical studies.
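The core ingredient, Monte Carlo evaluation of a policy's expected return, can be sketched on a toy "learning by doing" flavour of problem: one action pays nothing now but improves the state (skill), paying off later. The MDP, the two policies, and all parameters here are hypothetical illustrations, not the paper's regression model.

```python
import random

def mc_policy_value(policy, step, start, horizon, n_rollouts, seed=0):
    """Estimate a policy's expected total reward by Monte Carlo rollouts."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rollouts):
        s, ret = start, 0.0
        for t in range(horizon):
            a = policy(s, t)
            s, r = step(s, a, rng)
            ret += r
        total += ret
    return total / n_rollouts

# Hypothetical environment: action 1 invests in skill, action 0 exploits it.
def step(s, a, rng):
    if a == 1:
        return s + 1, 0.0 + rng.gauss(0, 0.1)   # invest: raise skill s
    return s, float(s) + rng.gauss(0, 0.1)      # exploit: reward equals skill

invest_then_exploit = lambda s, t: 1 if t < 3 else 0
always_exploit = lambda s, t: 0

v_invest = mc_policy_value(invest_then_exploit, step, 0, 10, 200)
v_myopic = mc_policy_value(always_exploit, step, 0, 10, 200)
```

Comparing the two estimates shows the value of active experimentation: the investing policy forgoes early reward but dominates over the horizon, which is the qualitative effect the paper quantifies for learning in regression models.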

6.
When human beings converse, they alternate between talking and listening. Participating in such turn-taking behaviors is more difficult for machines that use speech recognition to listen and speech output to talk. This paper describes an algorithm for managing such turn-taking through the use of a sliding capture window. The device is specific to discrete speech recognition technologies that do not have access to echo cancellation. As such, it addresses the inexpensive applications that suffer most from turn-taking errors, providing a "speech button" that stabilizes the interface. Correcting for short-lived turn-taking errors can be thought of as "debouncing" the button. An informal study of ten subjects using a voice-dialing application illuminates the design.
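The "debouncing" analogy can be made concrete with a small sketch: brief drop-outs in the voiced/unvoiced signal inside the capture window are treated as continued speech, and only a sufficiently long silence releases the turn. The class, the tick-based signal, and the window length are illustrative assumptions, not the paper's algorithm.

```python
class SpeechButton:
    """Debounce turn-taking: short gaps in the 'user is speaking' signal
    within the capture window do not release the turn (sketch)."""

    def __init__(self, window=3):
        self.window = window        # ticks a drop-out may last
        self.silent_for = 0
        self.speaking = False

    def update(self, voiced):
        if voiced:
            self.silent_for = 0
            self.speaking = True
        else:
            self.silent_for += 1
            if self.silent_for >= self.window:
                self.speaking = False   # only a long silence ends the turn
        return self.speaking

btn = SpeechButton(window=3)
# A one-tick gap mid-utterance is smoothed over; a long silence releases.
states = [btn.update(v) for v in [1, 1, 0, 1, 0, 0, 0, 0]]
```

This is the same logic used to debounce a physical push-button: transient glitches shorter than the window never reach the rest of the system.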

7.
This paper deals with the problem of derivational redundancy in scientific explanation, i.e. the problem that there can be extremely many different explanatory derivations for a natural phenomenon, while students and experts mostly come up with one and the same derivation for a phenomenon (modulo the order of applying laws). Given this agreement among humans, we need an account of how to select, from the space of possible derivations of a phenomenon, the derivation that humans come up with. In this paper we argue that the problem of derivational redundancy can be solved by a new notion of "shortest derivation", by which we mean the derivation that can be constructed from the fewest (and therefore largest) partial derivations of previously derived phenomena that function as "exemplars". We show how the exemplar-based framework known as "Data-Oriented Parsing" or "DOP" can be employed to select the shortest derivation in scientific explanation. DOP's shortest derivation of a phenomenon maximizes what is called the "derivational similarity" between a phenomenon and a corpus of exemplars. A preliminary investigation with exemplars from classical and fluid mechanics shows that the shortest derivation closely corresponds to the derivations that humans construct. Our approach also proposes a concrete solution to Kuhn's problem of how we know on which exemplar a phenomenon can be modeled. We argue that humans model a phenomenon on the exemplar that is derivationally most similar to it, i.e. the exemplar from which the largest subtree(s) can be used to derive the phenomenon.

8.
We describe a mathematical and algorithmic study of the Lambertian "Shape-From-Shading" problem for orthographic and pinhole cameras. Our approach is based on the notion of viscosity solutions of Hamilton-Jacobi equations. This approach provides a mathematical framework in which we can show that the problem is well-posed (we prove the existence of a solution and we characterize all the solutions). Our contribution is threefold. First, we model the camera both as orthographic and as perspective (pinhole), whereas most authors assume an orthographic projection (see Horn and Brooks (1989) for a survey of the SFS problem up to 1989 and Zhang et al. (1999), Kozera (1998), and Durou et al. (2004) for more recent ones); thus we extend the applicability of shape-from-shading methods to more realistic acquisition models. In particular, this extends the work of Prados et al. (2002a) and Rouy and Tourin (1992). We provide novel mathematical formulations of the problem, yielding new partial differential equations, and obtain results about the existence and uniqueness of their solutions. Second, by introducing a "generic" Hamiltonian, we define a general framework that deals with both models (orthographic and perspective), thereby simplifying the formalization of the problem. Thanks to this unification, each algorithm we propose can compute numerical solutions corresponding to all the models. Third, our work yields two new generic algorithms for computing numerical approximations of the continuous solution of the "Shape-From-Shading" problem, together with a proof of their convergence toward that solution. Moreover, the two generic algorithms can deal with discontinuous images as well as images containing black shadows. First online version published in October 2005.
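The numerical flavour of such viscosity-solution schemes can be shown on the simplest Hamilton-Jacobi equation of this family, the 1D eikonal equation |u'(x)| = f with zero boundary data, solved by upwind Gauss-Seidel sweeps. This is only the 1D skeleton of the approach, not the paper's SFS Hamiltonians; the grid size and sweep count are illustrative.

```python
def solve_eikonal_1d(slope, n, sweeps=50):
    """Upwind scheme for |u'(x)| = slope on [0, 1], u(0) = u(1) = 0.

    The monotone update u[i] = min(u[i-1], u[i+1]) + h*slope converges to the
    viscosity solution, here the tent profile slope * min(x, 1 - x); the kink
    at the top is exactly where a classical solution fails to exist.
    """
    h = 1.0 / n
    u = [0.0] + [float("inf")] * (n - 1) + [0.0]
    for _ in range(sweeps):
        # alternating forward/backward Gauss-Seidel sweeps
        for i in list(range(1, n)) + list(range(n - 1, 0, -1)):
            u[i] = min(u[i], min(u[i - 1], u[i + 1]) + h * slope)
    return u

u = solve_eikonal_1d(1.0, 10)
```

The same monotone-update principle, generalized to the "generic" Hamiltonian, is what gives convergence of the 2D algorithms toward the continuous solution.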

9.
For the singularly perturbed parabolic problem, a regularized asymptotic expansion of the solution of the optimal control problem is constructed. The asymptotics involves parabolic boundary-layer functions expressed in terms of a special function, the "complementary probability integral."
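The "complementary probability integral" is, up to normalization, the complementary error function erfc, available in the Python standard library. The following sketch shows the kind of boundary-layer factor built from it; the particular scaling in the argument is an illustrative assumption, not the paper's expansion.

```python
import math

def boundary_layer(tau, eps):
    """Illustrative parabolic boundary-layer factor built from erfc.

    erfc(z) = (2/sqrt(pi)) * integral from z to infinity of exp(-t^2) dt
    decays super-exponentially, which is what confines the correction to a
    thin layer of width O(sqrt(eps)) near the boundary.
    """
    return 0.5 * math.erfc(tau / (2.0 * math.sqrt(eps)))
```

At the boundary (tau = 0) the factor is 1/2; one unit away, with eps = 0.01, it is already below 1e-6, illustrating the rapid decay that characterizes boundary-layer functions.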

10.
The vehicle routing problem with deliveries and pickups is a challenging extension of the vehicle routing problem that has lately attracted growing attention in the literature. This paper investigates the relationship between two versions of this problem, called "mixed" and "simultaneous". In particular, we wish to know whether a solution algorithm designed for the simultaneous case can solve the mixed case. To this end, we implement a metaheuristic based on reactive tabu search. The results suggest that this approach can yield good results.
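The tabu-search mechanics underlying such a metaheuristic can be sketched on a single-route relaxation with 2-opt moves. This is only an illustration of the tabu-list idea; the paper's algorithm is a *reactive* tabu search (self-tuning tenure) for full VRPs with pickups and deliveries, and the instance below is a made-up line-distance example.

```python
import random

def route_cost(route, dist):
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def tabu_search(route, dist, iters=50, tenure=5, seed=0):
    """Minimal tabu search with sampled 2-opt moves (sketch)."""
    rng = random.Random(seed)
    best = cur = route[:]
    tabu = {}
    for it in range(iters):
        moves = []
        for _ in range(20):                     # sample candidate 2-opt moves
            i, j = sorted(rng.sample(range(1, len(cur) - 1), 2))
            cand = cur[:i] + cur[i:j + 1][::-1] + cur[j + 1:]
            # allow the move if it is not tabu, or by aspiration if it beats
            # the incumbent best solution
            if tabu.get((i, j), -1) < it or route_cost(cand, dist) < route_cost(best, dist):
                moves.append((route_cost(cand, dist), (i, j), cand))
        if not moves:
            continue
        cost, move, cur = min(moves)            # best admissible move, even if worse
        tabu[move] = it + tenure                # forbid undoing it for a while
        if cost < route_cost(best, dist):
            best = cur
    return best

# Tiny symmetric instance: 5 points on a line, depot at 0.
dist = [[abs(a - b) for b in range(5)] for a in range(5)]
route = [0, 3, 1, 4, 2, 0]
improved = tabu_search(route, dist)
```

Accepting the best non-tabu move even when it worsens the route is what lets tabu search climb out of local optima that plain 2-opt descent would get stuck in.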

11.
We consider the one-dimensional bin packing problem under conditions of heterogeneity of the items put into bins and contiguity in choosing identical items for the next bin. A branch-and-bound method using the "next fit" principle and a "linear programming" method are proposed to solve it. The problem and its solution may be used to construct an improved lower bound for the two-dimensional packing problem.
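The two named ingredients can be sketched directly: the classical "next fit" rule keeps a single open bin and closes it when the current item does not fit, and the LP-relaxation bound is total item volume divided by bin capacity. This sketch ignores the paper's heterogeneity and contiguity conditions; the item list is illustrative.

```python
import math

def next_fit(items, capacity):
    """'Next fit': keep one open bin; start a new bin when the item won't fit."""
    bins, space = 0, 0
    for w in items:
        if w > space:
            bins += 1          # close the current bin, open a fresh one
            space = capacity
        space -= w
    return bins

def lp_lower_bound(items, capacity):
    # LP-relaxation style bound: total volume over bin capacity, rounded up.
    return math.ceil(sum(items) / capacity)

items = [4, 4, 3, 3, 2, 2]     # weights for capacity-10 bins
```

On this instance both quantities agree (2 bins), so next fit is provably optimal here; in general the gap between the heuristic and the LP bound is what the branch-and-bound search closes.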

12.
When non-unique values are used as the identifiers of entities, confusion can occur because of homonyms. In particular, when (parts of) the "names" of entities are used as their identifiers, the problem is often referred to as the name disambiguation problem, where the goal is to sort out the erroneous entities caused by name homonyms (e.g., if only the last name is used as the identifier, one cannot distinguish "Masao Obama" from "Norio Obama"). In this paper we study the scalability of the name disambiguation problem: when (1) a small number of entities with large contents or (2) a large number of entities become indistinguishable due to homonyms. First, we carefully examine two state-of-the-art solutions to the name disambiguation problem and point out their limitations with respect to scalability. Then, we propose two scalable graph partitioning algorithms, multi-level graph partitioning and multi-level graph partitioning and merging, to solve the large-scale name disambiguation problem. Our claim is validated empirically: our proposal shows orders-of-magnitude improvement in performance while maintaining equivalent or reasonable accuracy compared to competing solutions.
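The key step that makes multi-level partitioning scalable is coarsening: matched node pairs collapse into supernodes until the graph is small enough to partition directly, after which the cut is projected back up. Below is a sketch of one heavy-edge-matching coarsening step under simple assumptions (an undirected weighted adjacency dict); it is not the paper's specific algorithm.

```python
def coarsen(adj):
    """One heavy-edge-matching coarsening step (sketch).

    adj: {node: {neighbor: edge_weight}}. Each unmatched node is paired with
    the unmatched neighbor joined by the heaviest edge; pairs collapse into
    supernodes, and parallel edges between supernodes are summed.
    """
    matched, mapping = set(), {}
    for u in sorted(adj):
        if u in matched:
            continue
        candidates = [(w, v) for v, w in adj[u].items() if v not in matched]
        if candidates:
            _, v = max(candidates)          # heaviest incident edge
            matched |= {u, v}
            mapping[u] = mapping[v] = (u, v)
        else:
            matched.add(u)
            mapping[u] = (u,)
    coarse = {}
    for u, nbrs in adj.items():
        su = mapping[u]
        coarse.setdefault(su, {})
        for v, w in nbrs.items():
            sv = mapping[v]
            if su != sv:
                coarse[su][sv] = coarse[su].get(sv, 0) + w
    return coarse

coarse = coarsen({"a": {"b": 3, "c": 1}, "b": {"a": 3},
                  "c": {"a": 1, "d": 2}, "d": {"c": 2}})
```

Because heavy edges are hidden inside supernodes, a cut found on the coarse graph tends to avoid them, which is why partition quality survives the size reduction.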

13.
14.
We address the problem of detecting irregularities in visual data, e.g., detecting suspicious behaviors in video sequences or identifying salient patterns in images. The term "irregular" depends on the context in which "regular" or "valid" is defined. Yet it is not realistic to expect an explicit definition of all possible valid configurations for a given context. We pose the problem of determining the validity of visual data as a process of constructing a puzzle: we try to compose a new observed image region or a new video segment ("the query") using chunks of data ("pieces of the puzzle") extracted from previous visual examples ("the database"). Regions in the observed data that can be composed from large contiguous chunks of database data are considered very likely, whereas regions that cannot be composed from the database (or can be composed only from small fragmented pieces) are regarded as unlikely/suspicious. The problem is posed as an inference process in a probabilistic graphical model. We show applications of this approach to identifying saliency in images and video, detecting suspicious behaviors, and automatic visual inspection for quality assurance. Patent pending.
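The compose-from-chunks idea carries over to a 1D toy analogy that makes the scoring explicit: score each query position by the length of the largest contiguous query chunk covering it that also occurs in the database. This is a drastic simplification (strings instead of image patches, no graphical-model inference) intended only to illustrate the principle; the strings are made up.

```python
def support_scores(query, database):
    """Score each query position by the longest database-supported chunk
    covering it (1D sketch of the puzzle-composition idea).

    Positions with a low score cannot be composed from past examples and
    would be flagged as irregular/suspicious.
    """
    n = len(query)
    scores = [0] * n
    for i in range(n):
        for j in range(i + 1, n + 1):
            if query[i:j] in database:          # chunk occurs in the database
                for k in range(i, j):
                    scores[k] = max(scores[k], j - i)
    return scores

# 'x' never occurs in the database, so its position gets support 0.
scores = support_scores("abcxbc", "abcabcabc")
```

The real method replaces substring lookup with patch matching and fuses the per-region evidence through a probabilistic graphical model, but the intuition is the same: large contiguous support means "regular", fragmented or absent support means "suspicious".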

15.
A method for solving the mixed-integer knapsack problem is suggested. First, the number of integer-valued variables and the domain of feasible solutions (containing the optimal solution) are reduced. Then the resulting problem is solved by a "branch and bound" type method that uses narrow intervals for the objective functional and the variables at each branching. Numerous computational experiments have shown that the suggested method operates more rapidly than the known "branch and bound" method.
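For reference, the baseline "branch and bound" method for the pure 0/1 knapsack can be sketched as follows, with the classical fractional (LP-relaxation) upper bound used for pruning. This is the textbook baseline the paper compares against, not the paper's interval-narrowing method; the instance is a standard toy example.

```python
def knapsack_bnb(values, weights, capacity):
    """Branch and bound for 0/1 knapsack with a fractional LP upper bound."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best = 0

    def bound(k, cap, val):
        # LP relaxation: fill greedily by value density, allow one
        # fractional item at the end.
        for i in order[k:]:
            if weights[i] <= cap:
                cap -= weights[i]
                val += values[i]
            else:
                return val + values[i] * cap / weights[i]
        return val

    def branch(k, cap, val):
        nonlocal best
        if k == len(order):
            best = max(best, val)
            return
        if bound(k, cap, val) <= best:
            return                          # prune: bound cannot beat incumbent
        i = order[k]
        if weights[i] <= cap:
            branch(k + 1, cap - weights[i], val + values[i])   # take item i
        branch(k + 1, cap, val)                                # skip item i

    branch(0, capacity, 0)
    return best

opt = knapsack_bnb([60, 100, 120], [10, 20, 30], 50)
```

The tighter the bound, the more of the tree is pruned; the paper's speed-up comes from narrowing such intervals for both the functional and the variables at every branching.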

16.
To address the reliability-based multidisciplinary design optimization (RBMDO) problem under mixed aleatory and epistemic uncertainties, an RBMDO procedure is proposed in this paper based on combined probability and evidence theory. The existing deterministic multistage, multilevel multidisciplinary design optimization (MDO) procedure MDF-CSSO, which combines the multiple discipline feasible (MDF) procedure and the concurrent subspace optimization (CSSO) procedure to mimic the general conceptual design process, is used as the basic framework. In the first stage, the surrogate-based MDF is used to quickly identify promising reliable regions. In the second stage, the surrogate-based CSSO is used to organize disciplinary optimization and system coordination, which allows disciplinary specialists to investigate and optimize the design with the corresponding high-fidelity models independently and concurrently. In both stages, reliability-based optimization at the system and disciplinary levels is computationally expensive, as it entails nested optimization and uncertainty analysis. To alleviate the computational burden, the sequential optimization and mixed uncertainty analysis (SOMUA) method is used to decompose the traditional double-level reliability-based optimization problem into separate deterministic optimization and mixed uncertainty analysis sub-problems, which are solved sequentially and iteratively until convergence is achieved. By integrating SOMUA into MDF-CSSO, the mixed-uncertainty-based RBMDO procedure MUMDF-CSSO is developed. The effectiveness of the proposed procedure is demonstrated on one simple numerical example and one MDO benchmark test problem, followed by concluding remarks.

17.
A method for simulation-based multidisciplinary robust design optimization (MRDO) of problems affected by uncertainty is presented. The challenging aspects of simulation-based MRDO are both algorithmic and computational, since the solution of an MRDO problem typically requires simulation-based multidisciplinary analyses (MDA), uncertainty quantification (UQ), and optimization. Herein, the optimal design is identified by a variable-accuracy, metamodel-based optimization following a multidisciplinary feasible (MDF) architecture. The approach uses a variable (i) density of the design of experiments for metamodel training, (ii) sample size for the UQ analysis by quasi-Monte Carlo simulation, and (iii) tolerance for the multidisciplinary consistency in MDA. The focus is on a two-way steady fluid-structure interaction problem, assessed by partitioned solvers for the hydrodynamic and structural analyses. Two analytical test problems are shown, along with the design of a racing-sailboat keel fin subject to stochastic variation of the yaw angle. The method is validated against a standard MDF approach to MRDO, taken as a benchmark and solved by fully coupled MDA and fully converged UQ, without metamodels. The method is evaluated in terms of optimal design performance and the number of simulations required to reach the optimal solution. For the current application, the optimal configuration performs very close to the benchmark solution, and the convergence analysis shows a promising reduction in computational cost.

18.
The paper communicates simulation results (computations and visualizations) for the dynamics of ventricular fibrillation caused by irregular excitation, in the framework of the monodomain model with the Aliev–Panfilov action potential model on a human 3D geometry. The numerical solution of this challenging multiscale reaction–diffusion problem is attacked by algorithms that are fully adaptive in both space and time (code library KARDOS). The results clearly demonstrate an accurate resolution of the cardiac potential during the excitation and plateau phases (in the regular cycle) as well as after a reentrant excitation (in the irregular cycle). Supported by the DFG Research Center Matheon "Mathematics for key technologies" in Berlin.
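The reaction part of this model can be illustrated in isolation: the standard two-variable Aliev–Panfilov cell model integrated with explicit Euler. This sketch is only the 0D kinetics with commonly used dimensionless parameters; the paper's 3D simulations add diffusion, the human geometry, and space-time adaptivity (KARDOS), none of which is reproduced here.

```python
def aliev_panfilov(u0=0.2, v0=0.0, dt=0.01, steps=4000,
                   k=8.0, a=0.15, eps=0.002, mu1=0.2, mu2=0.3):
    """Single-cell Aliev-Panfilov dynamics, explicit Euler (sketch).

    u is the dimensionless transmembrane potential, v the recovery variable.
    A suprathreshold initial value u0 > a triggers an action potential:
    fast upstroke, plateau, then slow repolarization driven by v.
    """
    u, v = u0, v0
    us = []
    for _ in range(steps):
        du = -k * u * (u - a) * (u - 1.0) - u * v
        dv = (eps + mu1 * v / (mu2 + u)) * (-v - k * u * (u - a - 1.0))
        u += dt * du
        v += dt * dv
        us.append(u)
    return us

us = aliev_panfilov()
```

The stiffness visible even in this 0D trace (a fast upstroke followed by slow recovery) is precisely why the full 3D reaction-diffusion problem calls for the adaptive time stepping the paper employs.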

19.
In this paper we present an extension of logic programming (LP) that is suitable not only for the "rational" component of a single agent but also for the "reactive" component, and that can encompass multi-agent systems. We modify an earlier abductive proof procedure and embed it within an agent cycle. The proof procedure incorporates abduction, definitions, and integrity constraints within a dynamic environment, where changes can be observed as inputs. The definitions allow rational planning behaviour, and the integrity constraints allow reactive, condition-action type behaviour. The agent cycle provides a resource-bounded mechanism that allows the agent's thinking to be interrupted so that the agent can record and assimilate observations as input and execute actions as output, before resuming further thinking. We argue that these extensions of LP, accommodating multiple theories embedded in a shared environment, provide the necessary multi-agent functionality. We argue also that our work extends Shoham's Agent0 and the BDI architecture. This revised version was published online in June 2006 with corrections to the Cover Date.
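The observe-think-act cycle itself can be sketched without the logic machinery: reactive condition-action rules fire on fresh observations, while deliberate planning proceeds under a per-cycle resource bound. The class, rules, and planner below are hypothetical placeholders for the abductive proof procedure, which is not reproduced.

```python
class Agent:
    """Observe-think-act cycle with reactive condition-action rules and a
    resource bound on thinking per cycle (sketch of the architecture only)."""

    def __init__(self, reactive_rules, planner_steps):
        self.reactive_rules = reactive_rules   # [(condition, action)] pairs
        self.planner_steps = planner_steps     # iterator of deliberate actions
        self.beliefs = set()

    def cycle(self, observation, budget=1):
        self.beliefs.add(observation)          # record and assimilate input
        for cond, act in self.reactive_rules:  # reactive behaviour first
            if cond == observation:
                return act
        for _ in range(budget):                # resource-bounded thinking
            try:
                return next(self.planner_steps)
            except StopIteration:
                break
        return "idle"

agent = Agent([("fire_alarm", "evacuate")], iter(["plan_a", "plan_b"]))
actions = [agent.cycle(o) for o in ["ok", "fire_alarm", "ok", "ok"]]
```

The budget parameter is what makes the agent interruptible: however deep the planner's reasoning, each cycle returns after a bounded amount of thinking so observations are never starved.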

20.
Jacob L. Mey 《AI & Society》1992,6(2):180-185
Conclusion  The conclusion to be drawn from the preceding observations and theorizing is that we must be keenly aware of what has been called "technological functionalism" (Pieper, 1986:11). While functionalism as such is not bad, the moment it succumbs to mere structural technicality, the functions stop functioning: forced "adaptivity" takes the place of "adaptable" interaction. That this problem is not due to some primordial blame to be attached to the computer becomes clear when one compares the computerized environment to other surroundings, such as psychiatric treatment. In the psychiatric interview, as Davis (1986, 1988) has shown, the interest of the therapist is often limited to establishing a "contract" for treatment: for the therapist to function properly, there must be a therapy-defined (or therapy-definable) problem for him/her to attack, using the skills and experience of the profession of which he/she is a representative and for which he/she has been properly trained. This function, however, may not coincide with the patient's needs: it may well be that the problem which originally caused the patient to approach the therapist for treatment turns out, in the end, not to be the problem that both agree on as the objective of the therapeutic treatment. "Re-formulating" the patient's problem in terms suited to the available resources and techniques is thus typically a case of adapting the human to the system: again, we are faced with technological functionalism in the shape of what has been called "forced adaptivity" (see Mey, 1986). We need to think seriously about the way we handle our computerized structures and their functions, and in particular about the design of supportive systems such as large databases, computer conferencing, or even simple electronic mail services.
While adaptivity is always a case of manipulation, by which humans are tooled to the needs and strictures of the computer, adaptability, by contrast, allows such manipulation only inasmuch as those needs and strictures reflect, and are imposed by, the users' needs. The blind, mechanical force that makes us adapt ourselves to the machine should be replaced by the enlightened, humanizing force of the adaptable computer.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号