Similar Literature
20 similar documents found.
1.
The Attappady Black goat is a native goat breed of Kerala in India, known mainly for its valuable meat and skin. In this work, a comparative study of a connectionist network [also known as an artificial neural network (ANN)] and multiple regression is made to predict body weight from body measurements in Attappady Black goats. A multilayer feed-forward network with backpropagation of error was used to predict body weight. Data collected from 824 Attappady Black goats in the age group of 0–12 months, consisting of 370 males and 454 females, were used for the study. The whole data set was partitioned into two sets: a training set comprising 75 per cent of the data (277 and 340 records for males and females, respectively) to build the neural network model, and a test set comprising 25 per cent (93 and 114 records for males and females, respectively) to test the model. Three morphometric measurements, viz. chest girth, body length and height at withers, were used as input variables, and body weight was the output variable. Multiple regression analysis (MRA) was also done using the same training and testing data sets. The prediction efficiency of both models was compared using the R² value and root mean square error (RMSE). The correlation coefficients between actual and predicted body weights for the ANN were positive, highly significant, and ranged from 90.27 to 93.69%. The lower RMSE and higher R² of the connectionist network (RMSE: male 1.9005, female 1.8434; R²: male 87.34, female 85.70) compared with the MRA model (RMSE: male 2.0798, female 2.0836; R²: male 84.84, female 81.74) show that the connectionist network is a better tool for predicting body weight in goats than MRA.
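The modelling pipeline can be sketched with standard tools. A minimal sketch in Python, assuming scikit-learn and synthetic stand-in data (the Attappady measurements are not reproduced here); the single hidden layer and all hyperparameters are illustrative choices, not the authors' published configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
n = 824
# Synthetic stand-ins for chest girth, body length, height at withers (cm).
X = rng.normal(loc=[55, 50, 52], scale=8, size=(n, 3))
# Hypothetical nonlinear relation plus noise, standing in for body weight (kg).
y = 0.004 * X[:, 0] ** 2 + 0.15 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 1.5, n)

# 75/25 split, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X_tr, y_tr)
mra = LinearRegression().fit(X_tr, y_tr)

for name, model in [("ANN", ann), ("MRA", mra)]:
    pred = model.predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: RMSE={rmse:.3f}  R2={r2_score(y_te, pred):.3f}")
```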

2.
3.
Model checking is becoming an accepted technique for debugging hardware and software systems. Debugging is based on the "Check/Analyze/Fix" loop: check the system against a desired property, producing a counterexample when the property fails to hold; analyze the generated counterexample to locate the source of the error; and fix the flawed artifact, whether the property or the model. The success of model-checking non-trivial systems critically depends on making this Check/Analyze/Fix loop as tight as possible. In this paper, we concentrate on the Analyze part of the debugging loop. To this end, we present a framework for generating, structuring and exploring counterexamples, implemented in a tool called KEGVis. The framework is based on the idea that the most general type of evidence for why a property holds or fails to hold is a proof. Such proofs can be presented to the user in the form of proof-like counterexamples, without sacrificing any of the intuitiveness and close relation to the model that users have learned to expect from model checkers. Proof generation is flexible and can be controlled by strategies, whether built into the tool or specified by the user, enabling generation of the most "interesting" counterexample and its interactive exploration. Proofs can also be used to generate and display all relevant evidence together, a technique referred to as abstract counterexamples. Overall, our framework can be used for explaining why a property failed or succeeded, for determining whether the property was correct ("specification debugging"), and for general model exploration.

4.
The goal of this article is to compare some optimised implementations on current high-performance platforms in order to highlight architectural trends in the field of embedded architectures and to estimate what the components of a next-generation vision system should be. We present implementations of robust motion detection algorithms on three architectures: a general-purpose RISC processor (the PowerPC G4), a parallel artificial retina dedicated to low-level image processing (Pvlsar34), and the Associative Mesh, a specialized architecture based on associative nets. To address the different aspects and constraints of embedded systems, the execution time and power consumption of these architectures are compared.
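As a concrete reference point for this class of algorithms, here is a minimal NumPy sketch of Sigma-Delta background estimation, a robust motion detector commonly used in this line of embedded-vision work; the exact algorithm variants and parameters benchmarked in the article may differ:

```python
import numpy as np

def sigma_delta_step(frame, M, V, N=4, Vmin=2, Vmax=255):
    """One Sigma-Delta update; returns motion mask and updated state."""
    M = M + np.sign(frame - M)                       # background tracks the scene by +/-1 per frame
    O = np.abs(frame - M)                            # per-pixel difference
    V = np.clip(V + np.sign(N * O - V), Vmin, Vmax)  # adaptive variance estimate
    return (O > V), M, V

# Usage on a synthetic sequence: a bright square moving over a static background.
rng = np.random.default_rng(0)
M = np.full((64, 64), 128.0)
V = np.full((64, 64), 2.0)
for t in range(20):
    frame = 128 + rng.normal(0, 1, (64, 64))
    frame[10 + t:20 + t, 10:20] += 80                # moving object
    mask, M, V = sigma_delta_step(frame, M, V)
print("motion pixels detected:", int(mask.sum()))
```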

5.
POP: Patchwork of Parts Models for Object Recognition
We formulate a deformable template model for objects with an efficient mechanism for computation and parameter estimation. The data consist of binary oriented edge features, robust to photometric variation and small local deformations. The template is defined in terms of probability arrays for each edge type. A primary contribution of this paper is the definition of the instantiation of an object in terms of shifts of a moderate number of local submodels (parts), which are subsequently recombined using a patchwork operation to define a coherent statistical model of the data. Object classes are modeled as mixtures of patchwork of parts (POP) models that are discovered sequentially as more class data is observed. We define the notion of the support associated with an instantiation, and use this to formulate statistical models for multi-object configurations, including possible occlusions. All decisions on the labeling of the objects in the image are based on comparing likelihoods. The combination of a deformable model with an efficient estimation procedure yields competitive results in a variety of applications with very small training sets, without the need to train decision boundaries: only data from the class being trained is used. Experiments are presented on the MNIST database, reading zip codes, and face detection.
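The input features can be sketched directly. A minimal NumPy rendering of binary oriented edge features (gradient orientation quantized into bins, thresholded on magnitude); the paper's actual edge operator, bin count and threshold are assumptions here:

```python
import numpy as np

def oriented_edge_maps(img, n_orient=8, thresh=10.0):
    """Quantize gradient direction into n_orient bins; one binary map per bin.
    A generic stand-in for the paper's binary oriented edge features."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                        # in (-pi, pi]
    bins = ((ang + np.pi) / (2 * np.pi) * n_orient).astype(int) % n_orient
    maps = np.zeros((n_orient,) + img.shape, dtype=bool)
    for k in range(n_orient):
        maps[k] = (bins == k) & (mag > thresh)
    return maps

img = np.zeros((32, 32)); img[:, 16:] = 255.0       # vertical step edge
maps = oriented_edge_maps(img)
print("edges per orientation bin:", [int(m.sum()) for m in maps])
```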

6.
Visual Knowledge Representation and Intelligent Image Segmentation
Automatic medical image analysis shows that image segmentation is a crucial task for any practical AI system in this field. On the basis of an evaluation of existing segmentation methods, a new image segmentation method is presented. To seek the perfect solution to knowledge representation in low-level machine vision, a new knowledge representation approach, the "Notebook" approach, is proposed, and the processing of visual knowledge is discussed at all levels. To integrate computer vision theory with Gestalt psychology and knowledge engineering, a new integrated method for intelligent image segmentation of sonographs, "generalized-pattern guided segmentation", is proposed. With the methods and techniques mentioned above, a medical diagnosis expert system for sonographs can be built. The work on the preliminary experiments is also introduced.

7.
When model checking reports that a property holds on a model, vacuity detection increases user confidence in this result by checking that the property is satisfied in the intended way. While vacuity detection is effective, it is a relatively expensive technique requiring many additional model-checking runs. We address the problem of efficient vacuity detection for Bounded Model Checking (BMC) of linear temporal logic properties, presenting three partial vacuity detection methods based on efficient analysis of the resolution proof produced by a successful BMC run. In particular, we define a characteristic of resolution proofs, peripherality, and prove that if a variable is a source of vacuity, then there exists a resolution proof in which this variable is peripheral. Our vacuity detection tool, VaqTree, uses these methods to detect vacuous variables, decreasing the total number of model-checking runs required to detect all sources of vacuity.
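For contrast with the paper's proof-based approach, here is a toy sketch of naive vacuity detection, which re-runs the check with each atom replaced by a constant; the miniature LTL-on-a-trace checker below is a stand-in for a real BMC engine:

```python
from dataclasses import dataclass

@dataclass
class Const:
    val: bool

@dataclass
class Atom:
    name: str

@dataclass
class Implies:
    lhs: object
    rhs: object

@dataclass
class G:
    sub: object

def holds(f, trace, i=0):
    """Evaluate a tiny G/implies/atom fragment on one finite trace."""
    if isinstance(f, Const):
        return f.val
    if isinstance(f, Atom):
        return trace[i][f.name]
    if isinstance(f, Implies):
        return not holds(f.lhs, trace, i) or holds(f.rhs, trace, i)
    return all(holds(f.sub, trace, j) for j in range(i, len(trace)))  # G

def substitute(f, name, val):
    if isinstance(f, Atom):
        return Const(val) if f.name == name else f
    if isinstance(f, Implies):
        return Implies(substitute(f.lhs, name, val), substitute(f.rhs, name, val))
    if isinstance(f, G):
        return G(substitute(f.sub, name, val))
    return f

def vacuous_atoms(f, trace, atoms):
    """Naive detection: an atom is vacuous if the property keeps holding when
    the atom is forced to either constant.  Each constant costs one extra
    checking run, which is exactly what the paper's proof analysis avoids."""
    assert holds(f, trace)
    return [a for a in atoms
            if holds(substitute(f, a, True), trace)
            and holds(substitute(f, a, False), trace)]

# G(req -> grant) on a trace with no requests holds, but vacuously:
trace = [{"req": False, "grant": False}] * 4
prop = G(Implies(Atom("req"), Atom("grant")))
print(vacuous_atoms(prop, trace, ["req", "grant"]))  # both atoms never matter
```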

8.
Intelligent agents can play a pivotal role in providing both software systems and augmented interfaces that let individual users from all walks of life utilise the Internet 24 hours a day, 7 days a week (24×7), including interaction with other users, over both wireless and broadband infrastructures. However, traditional approaches to user modelling are not adequate for this purpose, as they mainly account for a generic, approximate, idealised user. New user models are therefore required that are adaptable to each individual and flexible enough to represent the diversity of all users of information technology. Such models should be able to cover all aspects of an individual's life, particularly those of most interest to the individual user. This paper describes a novel intelligent agent architecture and methodology, both called ShadowBoard, based on a complex user model drawn from analytical psychology. An equally novel software tool called the DigitalFriend, based on ShadowBoard, is also introduced. This paper illustrates how aspects of user cognition can be outsourced, using, for example, an internationalised book price quoting agent. The Locales Framework from Computer Supported Cooperative Work is then used to understand the problematic aspects of interaction in complex social spaces, to identify specific needs for technology intervention in such spaces, and to understand how interactions amongst mobile users with different abilities might be technically assisted. In this context, the single-user-centred multi-agent technology demonstrated in the DigitalFriend is adapted to a multi-user system dubbed ShadowPlaces. The aim of ShadowPlaces is to outsource some of the interaction necessary for a group of mobile individuals with different abilities to interact cooperatively and effectively in a social world, supported by wireless networks and backed by broadband Internet services. An overview of the user model, the architecture and methodology (ShadowBoard) and the resulting software tool (the DigitalFriend) is presented, and progress on ShadowPlaces, the multi-user version, is outlined.

9.
When an image is viewed at varying resolutions, it is known to create discrete perceptual jumps or transitions amid the continuous intensity changes. In this paper, we study a perceptual scale-space theory which differs from the traditional image scale-space theory in two aspects. (i) In representation, the perceptual scale-space adopts a full generative model. From a Gaussian pyramid it computes a sketch pyramid where each layer is a primal sketch representation (Guo et al. in Comput. Vis. Image Underst. 106(1):5–19, 2007), an attribute graph whose elements are image primitives for the image structures. Each primal sketch graph generates the image in the Gaussian pyramid, and the changes between the primal sketch graphs in adjacent layers are represented by a set of basic and composite graph operators to account for the perceptual transitions. (ii) In computation, the sketch pyramid and graph operators are inferred, as hidden variables, from the images through Bayesian inference by a stochastic algorithm, in contrast to the deterministic transforms or feature extraction, such as computing zero-crossings, extremal points, and inflection points, in the image scale-space. Studying the perceptual transitions under the Bayesian framework makes it convenient to use statistical modeling and learning tools for (a) modeling the Gestalt properties of the sketch graph, such as continuity and parallelism; (b) learning the most frequent graph operators, i.e. perceptual transitions, in image scaling; and (c) learning the prior probabilities of the graph operators conditioned on their local neighboring sketch graph structures. In experiments, we learn the parameters and decision thresholds through human experiments, and we show that the sketch pyramid is a more parsimonious representation than a multi-resolution Gaussian/wavelet pyramid. We also demonstrate an application to adaptive image display: showing a large image on a small screen (say, a PDA) through a selective tour of its image pyramid. In this application, the sketch pyramid provides a means for calculating the information gain of zooming in on different areas of an image by counting the number of operators expanding the primal sketches, such that the maximum information is displayed in a given number of frames. A short version was published in ICCV05 (Wang et al. 2005).
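The substrate of the representation is easy to sketch. A minimal Gaussian pyramid in Python (using SciPy); the primal-sketch graphs, graph operators and Bayesian inference that the paper builds on top of it are omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(img, levels=4, sigma=1.0):
    """Blur-and-subsample pyramid; the substrate on which the paper's
    sketch pyramid (one primal sketch per level) would be computed."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyr[-1], sigma)
        pyr.append(blurred[::2, ::2])               # downsample by 2
    return pyr

img = np.random.default_rng(0).random((128, 128))
for k, level in enumerate(gaussian_pyramid(img)):
    print(f"level {k}: {level.shape}")
```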

10.
This paper presents the main features of an extension to Prolog toward modularity and concurrency, called Communicating Prolog Units (CPU), whose main aim is to allow logic programming to be used as an effective tool for system programming and prototyping. While Prolog supports only a single set of clauses and sequential computations, CPU allows programmers to define different theories (P-units) and parallel processes interacting via P-units, according to a model very similar to Linda's generative communication. The possibility of expressing meta-rules to specify where and how object-level (sub)goals have to be proved not only enhances modularity, but also increases the expressive power and flexibility of CPU systems.
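The communication model is easy to illustrate outside Prolog. A minimal Linda-style tuple space in Python, sketching the generative communication through which P-units would interact; `out`, `in_` and `rd` mirror Linda's classic primitives, and the blocking semantics is the essential point:

```python
import threading

class TupleSpace:
    """Minimal Linda-style blackboard: out() deposits a tuple, in_() withdraws
    one matching a pattern (blocking), rd() reads without removing."""
    def __init__(self):
        self._tuples, self._cv = [], threading.Condition()

    def out(self, tup):
        with self._cv:
            self._tuples.append(tup)
            self._cv.notify_all()

    def _match(self, pattern):                       # None acts as a wildcard
        for t in self._tuples:
            if len(t) == len(pattern) and all(p is None or p == v
                                              for p, v in zip(pattern, t)):
                return t
        return None

    def in_(self, pattern):
        with self._cv:
            while (t := self._match(pattern)) is None:
                self._cv.wait()
            self._tuples.remove(t)
            return t

    def rd(self, pattern):
        with self._cv:
            while (t := self._match(pattern)) is None:
                self._cv.wait()
            return t

ts = TupleSpace()
threading.Thread(target=lambda: ts.out(("goal", "solved"))).start()
print(ts.in_(("goal", None)))                        # blocks until the producer deposits
```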

11.
Moments constitute a well-known tool in the field of image analysis and pattern recognition, but they suffer from the drawback of high computational cost. Efforts to reduce the required computational complexity have been reported, mainly focused on binary images, but recently some approaches for gray images have also been presented. In this study, we propose a simple but effective approach for the computation of gray image moments. The gray image is decomposed into a set of binary images. Some of these binary images are substituted by an ideal image, called the "half-intensity" image. The remaining binary images are represented using the image block representation concept, and their moments are computed fast using block techniques. The proposed method computes approximate moment values with an error of 2–3% from the exact values and operates in real time (i.e., at video rate). The procedure is parameterized by the number m of "half-intensity" images used, which controls the approximation error and the speed gain of the method. The computational complexity is O(kL²), where k is the number of blocks and L is the moment order.
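The core decomposition idea can be sketched with bit planes, one natural binary decomposition of a gray image; the paper's block representation and "half-intensity" substitution, which provide the actual speed-up and the 2–3% approximation, are omitted here:

```python
import numpy as np

def moment(img, p, q):
    """Exact geometric moment m_pq = sum_x sum_y x^p y^q f(x, y)."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]  # y = row, x = column index
    return np.sum((x ** p) * (y ** q) * img)

def moment_bitplanes(img, p, q):
    """Same moment via binary decomposition: a gray image is a weighted sum of
    its bit planes, so m_pq(f) = sum_b 2^b * m_pq(plane_b), by linearity."""
    img = img.astype(np.uint8)
    return sum((1 << b) * moment((img >> b) & 1, p, q) for b in range(8))

img = np.random.default_rng(0).integers(0, 256, (64, 64))
print(moment(img, 1, 1), moment_bitplanes(img, 1, 1))  # identical values
```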

12.
An oscillatory network model with controllable coupling and self-organized, synchronization-based performance was developed for image processing. The model demonstrates the following capabilities: (a) brightness segmentation of real grey-level images; (b) colored image segmentation; (c) selective image segmentation, i.e., extraction of the subset of image fragments with brightness values contained in an arbitrarily given interval. An additional capability, successive selection of spatially separated fragments of a visual scene, has been achieved via further model extension. The fragment selection (under minor natural restrictions on mutual fragment locations) is based on in-phase internal synchronization of the oscillator ensembles corresponding to the fragments, and on distinct phase shifts between different ensembles.
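A heavily simplified, Kuramoto-style sketch of synchronization-based brightness segmentation: pixels are phase oscillators coupled only to neighbours of similar brightness, so each homogeneous region drifts into a common phase. The paper's network (controllable coupling, selective mode, inter-ensemble phase shifts) is considerably richer; everything below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.zeros((12, 12)); img[:, 6:] = 1.0           # two brightness regions
# Narrow initial phase spread avoids winding defects in this tiny demo.
phase = rng.uniform(0.0, np.pi, img.shape)

def step(phase, img, K=1.0, dt=0.1):
    """One Euler step of brightness-gated Kuramoto coupling on a 4-neighbourhood."""
    upd = np.zeros_like(phase)
    for dy, dx in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
        nb_phase = np.roll(phase, (dy, dx), axis=(0, 1))
        nb_img = np.roll(img, (dy, dx), axis=(0, 1))
        w = np.exp(-10.0 * (img - nb_img) ** 2)      # couple only similar pixels
        upd += K * w * np.sin(nb_phase - phase)
    return (phase + dt * upd) % (2 * np.pi)

for _ in range(500):
    phase = step(phase, img)

def coherence(ph):
    return float(np.abs(np.exp(1j * ph).mean()))     # 1.0 means fully in phase
print("coherence left/right:", round(coherence(phase[:, :6]), 3),
      round(coherence(phase[:, 6:]), 3))
```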

13.
Linux malware can pose a significant threat, since its penetration is increasing exponentially while little is known or understood about Linux OS vulnerabilities. We believe that now is the right time to devise non-signature-based detection strategies for zero-day (previously unknown) malware, before Linux intruders take us by surprise. Therefore, in this paper, we first perform a forensic analysis of Linux executable and linkable format (ELF) files. Our forensic analysis provides insight into the different features that have the potential to discriminate malicious executables from benign ones. As a result, we select a set of 383 features extracted from ELF headers. We quantify the classification potential of the features using information gain and then remove redundant features by employing preprocessing filters. Finally, we perform an extensive evaluation among classical rule-based machine learning classifiers (RIPPER, PART, C4.5 Rules, and the decision tree J48) and bio-inspired classifiers (cAnt Miner, UCS, XCS, and GAssist) to select the best classifier for our system. We have evaluated our approach on an available collection of 709 Linux malware samples from VX Heavens and Offensive Computing. Our experiments show that ELF-Miner provides more than 99% detection accuracy with less than 0.1% false alarm rate.
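The feature-ranking step can be sketched directly. A minimal information-gain computation on toy features; real inputs would be the 383 ELF-header features, which are not reproduced here:

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """IG(class; feature) = H(class) - H(class | feature): the criterion used
    to rank candidate features before classification."""
    h = entropy(labels)
    for v in np.unique(feature):
        mask = feature == v
        h -= mask.mean() * entropy(labels[mask])
    return h

# Toy data: f0 is correlated with the class, f1 is noise.  A real f0 might be
# a hypothetical ELF-header field such as a suspicious section count.
rng = np.random.default_rng(0)
labels = np.array([0] * 50 + [1] * 50)               # 0 = benign, 1 = malware
f0 = labels ^ (rng.random(100) < 0.1)                # class label with 10% flips
f1 = rng.integers(0, 2, 100)                         # irrelevant feature
print({"f0": information_gain(f0, labels), "f1": information_gain(f1, labels)})
```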

14.
The present study deals with multiscale simulation of fluid flows in nano/mesoscale channels. A hybrid molecular dynamics (MD)-continuum simulation, with the principle of crude constrained Lagrangian dynamics for data exchange between the continuum and MD regions, is performed to resolve Couette and Poiseuille flows. Unlike the smaller channel heights, H < 50σ (σ is the molecular length scale, σ ≈ 0.34 nm for liquid Ar), considered in previous works, this study deals with nano/mesoscale channels with heights in the range 44σ ≤ H ≤ 400σ, i.e., O(10)–O(10²) nm. The major concerns are: (1) to alleviate statistical fluctuations so as to improve the convergence characteristics of the hybrid simulation; a novel treatment for evaluating the force exerted on an individual particle is proposed and its effectiveness demonstrated; (2) to explore the appropriate sizes of the pure MD region and the overlap region for hybrid MD-continuum simulations; the results disclosed that a pure MD region of at least 12σ and an overlap region of height 10σ have to be used in this class of hybrid MD-continuum simulations; and (3) to investigate the influence of channel height on the predictions of the flow field and the slip length; a slip length correlation is formulated and the effects of channel size on the flow field and the slip length are discussed.
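Concern (3) hinges on extracting a slip length from a computed profile. A minimal sketch of the standard extraction by linear extrapolation under the Navier slip condition; the profile below is synthetic, not the paper's hybrid MD-continuum solution:

```python
import numpy as np

# Navier slip condition: u_wall = b * du/dy at the wall, so for a linear
# Couette profile u(y) = m * (y + b) the slip length is intercept/slope.
y = np.linspace(0.5, 10.0, 20)                       # distance from wall, in sigma
b_true = 2.0                                         # slip length used to fabricate data
u = (y + b_true) * 0.05 + np.random.default_rng(0).normal(0, 1e-3, y.size)

slope, intercept = np.polyfit(y, u, 1)               # linear fit of the near-wall profile
b_est = intercept / slope
print(f"estimated slip length: {b_est:.2f} sigma (true {b_true})")
```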

15.
We consider summation of consecutive values φ(v), φ(v + 1), ..., φ(w) of a meromorphic function φ(z), where v, w ∈ ℤ. We assume that φ(z) satisfies a linear difference equation L(y) = 0 with polynomial coefficients, and that a summing operator for L exists (such an operator can be found, if it exists, by the Accurate Summation algorithm or, alternatively, by Gosper's algorithm when ord L = 1). The notion of bottom summation, which covers the case where φ(z) has poles in ℤ, is introduced.
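For the order-1 case, the summing operator is exactly what Gosper's algorithm produces, and SymPy ships an implementation; a minimal example (Accurate Summation for higher-order L is not available in standard libraries):

```python
import sympy as sp
from sympy.concrete.gosper import gosper_sum

# Gosper's algorithm finds an antidifference of a hypergeometric term,
# so the sum of consecutive values telescopes in closed form.
k, n = sp.symbols('k n', integer=True, positive=True)
print(gosper_sum(k * sp.factorial(k), (k, 1, n)))    # equals (n + 1)! - 1
```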

16.
This paper concerns automatically verifying safety properties of concurrent programs. In our work, the safety property of interest is to check for multi-location data races in concurrent Java programs, where a multi-location data race arises when a program is supposed to maintain an invariant over multiple data locations, but accesses/updates are not protected correctly by locks. The main technical challenge that we address is how to generate a program model that retains (at least some of) the synchronization operations of the concrete program when the concrete program uses dynamic memory allocation. Static analysis of programs typically begins with an abstraction step that generates an abstract program operating on a finite set of abstract objects. In the presence of dynamic memory allocation, the finite number of abstract objects of the abstract program must represent the unbounded number of concrete objects that the concrete program may allocate, and thus, by the pigeonhole principle, some of the abstract objects must be summary objects: they represent more than one concrete object. Because abstract summary objects represent multiple concrete objects, the program analyzer typically must perform weak updates on the abstract state of a summary object, where a weak update accumulates information. Because weak updates accumulate rather than overwrite, the analyzer is only able to determine weak judgements on the abstract state, i.e., that some property possibly holds, not that it definitely holds. The problem with weak judgements is that determining whether an interleaved execution respects program synchronization requires strong judgements, i.e., that some lock is definitely held, and thus the analyzer needs to be able to perform strong updates (overwrites of the abstract state) to enable strong judgements. We present the random-isolation abstraction as a new principle for enabling strong updates of special abstract objects. The idea is to associate with a program allocation site two abstract objects, r♯ and o♯, where r♯ is a non-summary object and o♯ is a summary object. Abstract object r♯ models a distinguished concrete object that is chosen at random in each program execution. Because r♯ is a non-summary object, i.e., it models only one concrete object, strong updates can be performed on its abstract state. Because the concrete object that r♯ models is chosen randomly, a proof that a safety property holds for r♯ generalizes to all objects modeled by o♯. We implemented the random-isolation abstraction in a tool called Empire, which verifies atomic-set serializability of concurrent Java programs (atomic-set serializability is one notion of multi-location data-race freedom). Random isolation allows Empire to track lock states in ways that would not otherwise have been possible with conventional approaches.

17.
We describe Gauss–Newton-type methods for fitting implicitly defined curves and surfaces to given unorganized data points. The methods are suitable not only for least-squares approximation; they can also deal with general error functions, such as approximations to the ℓ1 or ℓ∞ norm of the vector of residuals. Two different definitions of the residuals will be discussed, which lead to two different classes of methods: direct methods and data-based ones. In addition, we discuss the continuous versions of the methods, which furnish geometric interpretations as evolution processes. It is shown that the data-based methods, which are less costly as they work without the computation of closest points, can efficiently deal with error functions adapted to noisy and uncertain data. In addition, we observe that the interpretation as an evolution process makes it possible to deal with the issues of regularization and additional constraints.
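The direct variant is compact enough to sketch. A Gauss–Newton fit of an implicit circle using the implicit-function values as residuals (plain least squares only; the general error functions and data-based residuals discussed in the paper are omitted):

```python
import numpy as np

def gauss_newton_circle(pts, a, b, r, iters=30):
    """Direct Gauss-Newton fit of f(x, y) = (x-a)^2 + (y-b)^2 - r^2 = 0:
    the residuals are the implicit-function values at the data points."""
    for _ in range(iters):
        dx, dy = pts[:, 0] - a, pts[:, 1] - b
        res = dx ** 2 + dy ** 2 - r ** 2
        # Jacobian of the residuals w.r.t. (a, b, r).
        J = np.column_stack([-2 * dx, -2 * dy, np.full(len(pts), -2 * r)])
        step, *_ = np.linalg.lstsq(J, -res, rcond=None)
        a, b, r = a + step[0], b + step[1], r + step[2]
    return a, b, r

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 100)
pts = np.column_stack([3 + 2 * np.cos(t), -1 + 2 * np.sin(t)])
pts += rng.normal(0, 0.02, (100, 2))                 # noisy circle samples
print(gauss_newton_circle(pts, a=2.0, b=0.0, r=1.0))  # ~ (3, -1, 2)
```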

18.
With the increasing use of wireless communication devices and the ability to track people and objects cheaply and easily, the amount of spatio-temporal data is growing substantially. Many of these applications cannot easily locate the exact position of objects, but they can determine the region in which each object is contained. Furthermore, the regions are fixed and may vary greatly in size. Examples include mobile/cell phone networks, RFID tag readers and satellite tracking. This demands techniques to mine such data, techniques that must also correct for the bias produced by different-sized regions. We provide a comprehensive definition of Spatio-Temporal Association Rules (STARs) that describe how objects move between regions over time. We also present other patterns that are useful for mobility data: stationary regions and high-traffic regions. The latter consist of sources, sinks and thoroughfares. These patterns describe important temporal characteristics of regions, and we show that they can be considered special STARs. We define spatial support to deal effectively with the problem of different-sized regions. We provide an efficient algorithm, STAR-Miner, to find these patterns by exploiting several pruning properties.
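The bias-correction idea can be sketched on toy trajectories. Counting region-to-region transitions and damping them by region area is one plausible rendering of spatial support; the paper's precise definition may differ:

```python
from collections import Counter

# Coarse trajectories: object id -> region occupied at each tick.
trajectories = {
    1: ["A", "A", "B", "C"],
    2: ["A", "B", "B", "C"],
    3: ["D", "A", "B", "C"],
}
area = {"A": 4.0, "B": 1.0, "C": 2.0, "D": 8.0}      # regions vary greatly in size

transitions = Counter()
for regions in trajectories.values():
    for src, dst in zip(regions, regions[1:]):
        if src != dst:
            transitions[(src, dst)] += 1

for (src, dst), count in transitions.most_common():
    # Damp rules between large regions, which attract many objects by size alone.
    support = count / (area[src] * area[dst])
    print(f"{src} -> {dst}: count={count}, area-normalized support={support:.3f}")
```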

19.
Sun, Algorithmica (2008) 36(1): 89–111
We show that the SUM-INDEX function can be computed by a 3-party simultaneous protocol in which one player sends only O(n^ε) bits and the other sends O(n^(1−C(ε))) bits, where 0 < C(ε) < 1. This implies that, in the Valiant–Nisan–Wigderson approach for proving circuit lower bounds, the SUM-INDEX function is not suitable as a target function.

20.
We study the feasibility and cost of implementing Ω, a fundamental failure detector at the core of many algorithms, in systems with weak reliability and synchrony assumptions. Intuitively, Ω allows processes to eventually elect a common leader. We first give an algorithm that implements Ω in a weak system S where (a) except for some unknown timely process s, all processes may be arbitrarily slow or may crash, and (b) only the output links of s are eventually timely (all other links can be arbitrarily slow and lossy). Previously known algorithms for Ω worked only in systems that are strictly stronger than S in terms of reliability or synchrony assumptions. We next show that algorithms implementing Ω in system S are necessarily expensive in terms of communication complexity: all correct processes (except possibly one) must send messages forever; moreover, a quadratic number of links must carry messages forever. This result holds even for algorithms that tolerate at most one crash. Finally, we show that with a small additional assumption on system S, namely the existence of some unknown correct process whose links can be arbitrarily slow and lossy but fair, there is a communication-efficient algorithm for Ω such that eventually only one process (the elected leader) sends messages. Recent experimental results indicate that two of the algorithms for Ω described in this paper can be used in dynamically changing systems and work well in practice [Schiper, Toueg in Proceedings of the 38th International Conference on Dependable Systems and Networks, pp. 207–216 (2008)].
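The guarantee Ω provides is easy to simulate. A toy round-based sketch of the classical heartbeat construction, where each process eventually trusts the smallest identifier it still hears from; this is not the paper's communication-efficient algorithm, in which eventually only the leader sends:

```python
import random

# Every alive process broadcasts a heartbeat each round over lossy links;
# each process elects the smallest id heard from within TIMEOUT rounds.
# Process 0 is crashed, so all correct processes eventually agree on 1.
random.seed(0)
PROCS, TIMEOUT, crashed = [0, 1, 2, 3], 3, {0}
last_heard = {p: {q: 0 for q in PROCS} for p in PROCS}

for rnd in range(1, 12):
    for sender in PROCS:
        if sender in crashed:
            continue
        for receiver in PROCS:
            if receiver not in crashed and random.random() > 0.1:  # 10% message loss
                last_heard[receiver][sender] = rnd
    leaders = {p: min(q for q in PROCS
                      if rnd - last_heard[p][q] < TIMEOUT or q == p)
               for p in PROCS if p not in crashed}
    print(f"round {rnd}: leaders = {leaders}")
```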

