20 similar documents found; search took 15 ms
1.
This paper presents a gradient-based randomized algorithm for designing a guaranteed cost regulator for a plant with general parametric uncertainties. The algorithm either returns, with high confidence, a probabilistic solution that satisfies the design specification with high probability for a randomly sampled uncertainty, or declares that the feasible set of design parameters is too small to contain a ball of a given radius. In both cases, the number of iterations the algorithm executes is polynomial in the problem size and is independent of the dimension of the uncertainty.
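The sample-check-correct loop described above can be sketched generically. This is an illustrative reconstruction, not the paper's exact algorithm: `sample`, `violation_grad`, and the stopping parameters are assumptions.

```python
def probabilistic_design(theta, sample, violation_grad, step=0.1,
                         max_updates=1000, checks=100):
    """Sketch of a gradient-based randomized design loop.

    sample()                  -- draws one random uncertainty instance
    violation_grad(theta, d)  -- gradient of the violated constraint,
                                 or None if theta meets the spec for d
    Returns a candidate design that passed `checks` consecutive random
    samples, or None if the update budget is exhausted (feasible set
    likely too small).
    """
    updates = 0
    while updates <= max_updates:
        for _ in range(checks):
            delta = sample()
            g = violation_grad(theta, delta)
            if g is not None:
                theta -= step * g       # correction step on violation
                updates += 1
                break
        else:
            return theta                # probabilistic solution found
    return None
```

The number of correction steps is bounded up front, which mirrors the polynomial iteration bound claimed in the abstract.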
2.
The estimation of the manufacturing cost of a part in all phases of the design stage is crucial to concurrent engineering. To estimate the cost of a product well, data must be available from both engineering systems and business systems. This paper presents a cost estimation system being developed to support design-time cost estimation using the Federated Intelligent Product EnviRonment (FIPER), which is being developed as part of the National Institute of Standards and Technology (NIST) Advanced Technology Program (ATP). The FIPER research team is developing an architecture that interconnects design and analysis software tools in a peer-level architecture to support multidisciplinary design optimization (MDO), design for six sigma (DFSS), and robust design.
3.
Total cost is one of the most important factors in a heavy equipment purchase decision. However, the differing cost views and performance expectations of the stakeholders involved can cause customer-relation problems between manufacturers and customers. Beginning with the conventional manufacturers' cost view, this paper argues for the necessity and importance of expanding heavy equipment manufacturers' cost scope to include the post-manufacturing customer stage of their products. It then presents a general mathematical Post-Manufacturing Product Cost (PMPC) model for analyzing the total costs of heavy equipment in its utilization stage. A major emphasis of the PMPC model is placed on the strategies of manufacturers' product cost management and customers' purchasing-decision cost management, and on their interdependencies as they relate to the parties' different perspectives on product utilization patterns.
4.
Color-based tracking is prone to failure when visually similar targets move in close proximity or occlude each other. To deal with the resulting ambiguities in the visual information, we propose an additional color-independent visual model based on the target's local motion, calculated from the optical flow the target induces in consecutive images. By modifying a color-based particle filter to account for the target's local motion, we construct a combined color/local-motion tracker. We compare the combined tracker to a purely color-based tracker on a challenging dataset drawn from hand tracking, surveillance, and sports. The experiments show that the proposed local-motion model largely resolves situations in which the target is occluded by, or moves in front of, a visually similar object.
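The two-cue idea can be sketched in a few lines: each particle's weight is the product of a color likelihood and a color-independent motion likelihood, so an ambiguous color cue is disambiguated by motion. A minimal sketch; both likelihood functions below are stand-ins for real histogram and optical-flow models.

```python
import math
import random

def combined_weights(particles, color_lik, motion_lik):
    """Weight each particle by the product of the color likelihood and
    the color-independent local-motion likelihood (two-cue fusion)."""
    return [color_lik(p) * motion_lik(p) for p in particles]

def resample(particles, weights, rng=random):
    """Multinomial resampling proportional to the combined weights."""
    return rng.choices(particles, weights=weights, k=len(particles))
```

With two visually identical candidates, the color likelihood is flat, and the motion term alone decides which particle survives resampling.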
5.
This paper presents a novel revision of the framework of Hybrid Probabilistic Logic Programming, along with a complete semantic characterization, to enable the encoding of and reasoning about real-world applications. The language of the Hybrid Probabilistic Logic Programs framework is extended to allow non-monotonic negation, and two alternative semantic characterizations are defined: stable probabilistic model semantics and probabilistic well-founded semantics. These semantics generalize the stable model semantics and well-founded semantics of traditional normal logic programs, and they reduce to the semantics of Hybrid Probabilistic Logic programs for programs without negation. This is the first time that two different semantics for Hybrid Probabilistic Programs with non-monotonic negation, as well as their relationships, have been described. The proposal provides the foundational grounds for developing computational methods that implement the proposed semantics, and it clarifies how to characterize non-monotonic negation in probabilistic logic programming frameworks for commonsense reasoning. An erratum to this article is available.
6.
In this paper, we present a software tool, RTS (real-time simulator), that analyses the time-cost behaviour of parallel computations through simulation. RTS assumes that the computer system supporting the execution of parallel computations has a limited number of processors, that all processors have the same speed, and that they communicate with each other through a shared memory. In RTS, the time cost of a parallel computation is defined as a function of the input, the algorithm, the data structure, the processor speed, the number of processors, the processor power allocation, the communication, and the execution environment. The paper first discusses how RTS models the time cost. In the model, a locking technique governs access to the shared memory, processing power is allocated equally among all operations currently being performed in parallel, and the number of operations in the execution environment of a parallel computation changes over time. The paper also discusses how RTS works and how the simulation is used for time-cost analysis.
7.
Predicting the Life Cycle Cost (LCC) of a proposed new product during its concept development phase is required for two reasons. First, it is necessary to demonstrate to either a potential customer (e.g. government-financed programs) or to corporate management that the cost of owning the new product and its value to the owner justify further development. Second, LCC is the basis for trade studies between engineering alternatives that must be made early in the program to avoid wasteful research in nonproductive areas. The most significant portion of LCC is usually the Operating and Support (O&S) cost, and yet this is the most difficult cost to predict. Operating and support costs include all costs incurred by the owner between initial purchase and discard or salvage. These costs must be predicted by parametric methods and inflated and/or discounted to their applicable years by means of engineering economic analysis techniques. Separate models must be built for each engineering alternative and the costs converted to a common base (i.e. "now" dollars) for comparison. Martin Marietta, working under contract to the U.S. Navy's Advanced Antiair Warfare Working Group (AAWG), has developed a simple O&S cost model to solve this complex problem. The model consists of a three-dimensional matrix using LOTUS 1-2-3 software on an IBM PC or PC-compatible computer. The model is flexible and detailed enough to be useful in many diverse applications, yet simple enough to be exercised quickly and at minimum cost.
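The inflate-then-discount conversion to a common "now"-dollar base is a standard engineering-economics calculation and can be sketched as follows. The constant base-year cost stream and the rates are illustrative assumptions, not taken from the Martin Marietta model.

```python
def now_dollars(cost_base, years, inflation, discount):
    """Convert a constant base-year O&S cost stream to present
    ('now') dollars: inflate each year's cost to its applicable
    year, then discount it back to the present."""
    total = 0.0
    for t in range(1, years + 1):
        inflated = cost_base * (1 + inflation) ** t   # cost in year t
        total += inflated / (1 + discount) ** t       # back to 'now'
    return total
```

Running each engineering alternative through the same conversion puts their O&S streams on the common base the abstract describes.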
8.
This paper addresses the automatic image annotation problem and its application to multi-modal image retrieval. The contribution of our work is three-fold. (1) We propose a probabilistic semantic model in which the visual features and the textual words are connected via a hidden layer constituting the semantic concepts to be discovered, to explicitly exploit the synergy between the modalities. (2) The association of visual features and textual words is determined in a Bayesian framework, so that the confidence of the association can be provided. (3) Extensive evaluation on a large-scale, visually and semantically diverse image collection crawled from the Web is reported for the prototype system based on the model. In the proposed probabilistic model, a hidden concept layer connecting the visual-feature and word layers is discovered by fitting a generative model to the training images and annotation words through an Expectation-Maximization (EM) based iterative learning procedure. Evaluation of the prototype system on 17,000 images and 7,736 annotation words automatically extracted from crawled Web pages for multi-modal image retrieval indicates that the proposed semantic model and the developed Bayesian framework are superior to a state-of-the-art peer system in the literature.
9.
Research and practice in decision support systems have often been said to focus too much on individual decision-making, when decisions are actually made by groups. Another shortfall of current research is the absence of any established theory or framework on which to base it; a third is the lack of connections between theory and actual implementation in terms of information technology. The first two problems in particular are addressed in this study. Decision-making is considered to be a group activity rather than an individual one, in which, in effect, a contract between the decision-makers is established. Contracts incur transaction costs, which may or may not be covered by the extra value the contract creates. Transaction costs in the contract formulation phase should be eliminated, and information technology in its various forms is a principal means to this end. Different kinds of technologies support different kinds of contracts, which is why decision-makers should understand the nature of their decision-making situation and select the information technology tools best suited to it. The factors causing transaction costs in decision-making, that is, in contracting, are identified, and the means to eliminate them with information technology are presented. This study is based on a transaction cost perspective of organizations, in which information technology is seen as a primary means to lower transaction costs. The study thus provides the theoretical framework so often missing in information technology research. Its results stem from empirical research aimed at investigating and understanding information technology from the viewpoint of transaction costs.
10.
In this paper we present a cost model for analyzing the impact of Internet malware, in order to estimate the cost of incidents and the risk they cause. The model is useful in determining the parameters needed to estimate recovery efficiency, probabilistic risk distributions, and the cost of malware incidents. Many users tend to underestimate the cost of the curiosity that comes with stealth malware such as email attachments, freeware/shareware, spyware (including keyloggers, password thieves, phishing-ware, network sniffers, stealth backdoors, and rootkits), popups, and peer-to-peer file shares. We define two sets of functions to describe the evolution of attacks and the potential loss caused by malware: the evolution functions analyze infection patterns, while the loss functions provide risk-impact analysis of failed systems. Owing to their wide range of applications, such analyses have drawn the attention of many engineers and researchers. Analysis of malware propagation by itself contributes little unless it is tied to analysis of system performance, economic loss, and risk.
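As one concrete shape for the two function sets, a logistic curve is a common choice for an infection evolution function, with loss proportional to the infected population. The functional forms and per-host cost below are assumptions for illustration, not the paper's models.

```python
import math

def infections(t, n_hosts, rate, i0=1):
    """Logistic evolution function: infected hosts at time t, given
    n_hosts total hosts, infection rate, and i0 initial infections."""
    c = (n_hosts - i0) / i0
    return n_hosts / (1 + c * math.exp(-rate * t))

def expected_loss(t, n_hosts, rate, cost_per_host):
    """Loss function: infected population times per-host recovery cost."""
    return infections(t, n_hosts, rate) * cost_per_host
```

Tying the evolution curve to a loss function is what turns a propagation model into the risk-impact estimate the abstract argues for.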
11.
Recently, the study of combining probability theory and fuzzy logic has received much interest. To endow traditional fuzzy rule-based systems (FRBs) with probabilistic features for handling randomness, this paper presents a probabilistic fuzzy neural network (ProFNN) that introduces the probabilities of input linguistic terms and embeds linguistic meaning in the connectionist architecture. ProFNN integrates the probabilistic information of the fuzzy rules into the antecedent parts and quantifies the impact of the rules on the consequent parts using mutual subsethood, which works in conjunction with volume defuzzification in a gradient descent learning framework. Despite the increase in the number of parameters, ProFNN provides a promising solution for dealing with randomness and fuzziness in a single framework. To evaluate the performance and applicability of the proposed approach, ProFNN is tested on various benchmark problems and compared with existing models, performing better than most of them.
12.
One of the most popular business strategies today is global outsourcing, and many companies outsource their IT (Information Technology). In this paper, an IT outsourcing cost estimation model based on a Fuzzy Decision Tree (FDT) is presented. The model combines the inductive learning capability of the FDT with the expressive power of fuzzy sets to predict the relative error in the form of a fuzzy set and to analyze the source of error using decision tree rules. Finally, the IT outsourcing cost estimation model is validated with historical project data.
13.
This paper presents an informatics framework that applies feature-based engineering concepts to cost estimation, supported by data mining algorithms. The purpose of this research is to provide a practical procedure for more accurate cost estimation using the manufacturing process data commonly available in ERP systems. The proposed method combines linear regression and data-mining techniques, leverages the unique strengths of both, and creates a mechanism for discovering cost features. The final estimation function takes the user's confidence in each member technique into account, so that the method can be phased in gradually as data mining capability is built up. A case study demonstrates the proposed framework and compares the results of empirical cost prediction and data mining. The results indicate that the combined method is flexible and promising for determining the costs of the example welding features. In a comparison between the empirical prediction and five data mining algorithms, the ANN algorithm proves the most accurate for welding operations.
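The confidence-weighted combination of the two estimators can be sketched as a simple convex blend. This functional form is an assumption for illustration, not the paper's estimation function.

```python
def blended_estimate(reg_pred, dm_pred, confidence_dm):
    """Confidence-weighted cost estimate: blend a linear-regression
    prediction with a data-mining (e.g. ANN) prediction.
    confidence_dm in [0, 1] phases the data-mining model in
    gradually as its track record builds."""
    return (1 - confidence_dm) * reg_pred + confidence_dm * dm_pred
```

At confidence 0 the method falls back to the regression baseline; at confidence 1 it relies entirely on the data-mining model, matching the gradual phase-in the abstract describes.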
14.
Automated regression suites are essential for developing large applications while maintaining reasonable quality and timetables. The main argument against automating regression suites, beyond the cost of creation and maintenance, is the observation that running the same test many times becomes increasingly less likely to find bugs. To alleviate this problem, a new practice is becoming common: using random test generators to create regression suites on the fly. Instead of maintaining tests, we generate test suites on the fly by choosing several specifications and generating a number of tests from each.
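The on-the-fly practice can be sketched generically: rather than storing tests, store specifications and regenerate a fresh suite each run. `gen` below stands in for a real random test generator and is a hypothetical placeholder.

```python
import random

def generate_suite(specs, tests_per_spec, gen, seed=None):
    """Build a regression suite on the fly: for each chosen
    specification, generate `tests_per_spec` fresh random tests.
    A seed makes a particular run reproducible for debugging."""
    rng = random.Random(seed)
    suite = []
    for spec in specs:
        for _ in range(tests_per_spec):
            suite.append(gen(spec, rng))
    return suite
```

Because each run draws new tests from the same specifications, repeated runs keep probing new inputs instead of re-executing identical tests.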
15.
In a manufacturing environment with complex consumption relationships and quality influences, the application of the traditional activity-based costing (ABC) method is limited. In this paper, a new improved process-based model for cost estimation and pricing is presented. Using the input–output analysis method, the complex indirect consumption relationships (such as reciprocal relationships) of a manufacturing system are expressed; by solving these relationships, the consumption characteristics of all production activities (mainly represented by the activity rates) are extracted. Then, from the consumption characteristics, the quality characteristics, and the usage amounts of these activities, the cost prices of products are estimated for pricing. A case study based on the compressor products of a manufacturing company shows the model's effectiveness. Because the cost influences of complex consumption relationships and quality factors are fully considered, the proposed approach achieves higher estimation accuracy than the traditional ABC method.
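The input-output step, solving the reciprocal consumption relationships for total activity costs, reduces to solving x = d + Ax, i.e. x = (I - A)^-1 d, where d holds direct costs and A the fractions each activity consumes of the others. A two-activity sketch with assumed numbers:

```python
def total_activity_costs(direct, consume):
    """Solve x = direct + A x for two reciprocal activities via the
    2x2 closed form of (I - A) x = direct. `direct` holds each
    activity's direct cost; consume[i][j] is the fraction of
    activity j's total cost consumed by activity i."""
    d1, d2 = direct
    a11, a12 = consume[0]
    a21, a22 = consume[1]
    m11, m12 = 1 - a11, -a12
    m21, m22 = -a21, 1 - a22
    det = m11 * m22 - m12 * m21          # Cramer's rule for (I - A)
    x1 = (d1 * m22 - m12 * d2) / det
    x2 = (m11 * d2 - m21 * d1) / det
    return x1, x2
```

With the reciprocal terms set to zero the result collapses to the direct costs, which is exactly the case traditional ABC handles.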
16.
CRISP-DM is the standard process model for developing Data Mining projects. It proposes the processes and tasks that must be carried out in a Data Mining project, one of which is estimating the project's cost.
17.
In this paper, we introduce a Bayesian approach, inspired by probabilistic principal component analysis (PPCA) (Tipping and Bishop in J Royal Stat Soc Ser B 61(3):611–622, 1999), to detect objects in complex scenes using appearance-based models. The originality of the proposed framework is that it explicitly takes into account general forms of the underlying distributions, both for the in-eigenspace distribution and for the observation model. The approach combines linear data reduction techniques (to preserve computational efficiency), non-linear constraints on the in-eigenspace distribution (to model complex variabilities), and non-linear, robust observation models (to cope with clutter, outliers, and occlusions). The resulting statistical representation generalizes most existing PCA-based models (Tipping and Bishop in J Royal Stat Soc Ser B 61(3):611–622, 1999; Black and Jepson in Int J Comput Vis 26(1):63–84, 1998; Moghaddam and Pentland in IEEE Trans Pattern Anal Machine Intell 19(7):696–710, 1997) and leads to the definition of a new family of non-linear probabilistic detectors. The performance of the approach is assessed using receiver operating characteristic (ROC) analysis on several representative databases, showing a major improvement in detection performance with respect to the standard reference methods. This revised version was published online in November 2004 with corrections to the section numbers.
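In the simplest Gaussian special case (which the paper generalizes with non-linear distributions and robust observation models), a PPCA-style detection score combines an in-eigenspace Mahalanobis term with an out-of-subspace residual term. A sketch of that baseline, assuming unit-norm, orthogonal components:

```python
def ppca_score(x, mean, components, variances, noise_var):
    """Gaussian eigenspace detection score (lower = more object-like):
    Mahalanobis distance inside the eigenspace plus the reconstruction
    residual outside it, scaled by the isotropic noise variance."""
    centered = [xi - mi for xi, mi in zip(x, mean)]
    # Project onto each (unit-norm) principal component.
    coeffs = [sum(ci * wi for ci, wi in zip(centered, comp))
              for comp in components]
    in_space = sum(c * c / v for c, v in zip(coeffs, variances))
    # Reconstruct from the subspace and measure what is left over.
    recon = [sum(c * comp[i] for c, comp in zip(coeffs, components))
             for i in range(len(x))]
    residual = sum((ci - ri) ** 2 for ci, ri in zip(centered, recon))
    return in_space + residual / noise_var
```

A deviation along a high-variance component is penalized far less than an equal deviation off the subspace, which is what makes the score appearance-aware.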
18.
We propose a probabilistic variant of the pi-calculus as a framework to specify randomized security protocols and their intended properties. In order to express and verify the correctness of the protocols, we develop a probabilistic version of the testing semantics. We then illustrate these concepts on an extended example: the Partial Secret Exchange, a protocol which uses a randomized primitive, the Oblivious Transfer, to achieve fairness of information exchange between two parties.
19.
This paper describes an empirical study of the quantitative aspects of requirements elaboration, the transformation of high-level goals into low-level requirements. Prior knowledge of the magnitude of requirements elaboration is instrumental in developing early estimates of a project's cost and schedule. The study examines data on two different types of goals and requirements, capability and level of service (LOS), from 20 real-client, graduate-student team projects done at USC. Metrics for data collection and analysis are described, along with the utility of the results they produce. Besides revealing a marked difference between the elaboration of capability goals and the elaboration of LOS goals, the results provide some initial relationships between the nature of the projects and their ratios of elaboration of capability goals into capability or functional requirements.
20.
We set out a modal logic for reasoning about the multilevel security of probabilistic systems. This logic contains expressions for time, probability, and knowledge. Using the Halpern-Tuttle framework for reasoning about knowledge and probability, we give a semantics for our logic and prove that it is sound. We give two syntactic definitions of perfect multilevel security and show that their semantic interpretations are equivalent to earlier, independently motivated characterizations. We also discuss the relations between these characterizations of security and their usefulness in security analysis.