Similar documents (20 results)
1.
The current paper details results from the Girls and ICT survey phase of a three-year study investigating factors associated with low participation rates by females in education pathways leading to professional-level information and communications technology (ICT) careers. The study is funded through the Australian Research Council’s (ARC) Linkage Grants Scheme. It involves a research partnership between Education Queensland (EQ), industry partner Technology One and academic researchers at (affiliation removed for review purposes). Respondents to the survey were 1453 senior high school girls. Comparisons were drawn between Takers (n = 131) and Non-Takers (n = 1322) of advanced-level computing subjects. Significant differences between the groups were found on four questions: “The subjects are interesting”; “I am very interested in computers”; “The subject will be helpful to me in my chosen career path after school”; and “It suited my timetable”. The research has demonstrated that senior high school girls tend to perceive advanced computing subjects as boring and to express a strong aversion to computers.

2.
We present an algorithm for learning from unlabeled text, based on the Vector Space Model (VSM) of information retrieval, that can solve verbal analogy questions of the kind found in the SAT college entrance exam. A verbal analogy has the form A:B::C:D, meaning “A is to B as C is to D”; for example, mason:stone::carpenter:wood. SAT analogy questions provide a word pair, A:B, and the problem is to select the most analogous word pair, C:D, from a set of five choices. The VSM algorithm correctly answers 47% of a collection of 374 college-level analogy questions (random guessing would yield 20% correct; the average college-bound senior high school student answers about 57% correctly). We motivate this research by applying it to a difficult problem in natural language processing, determining semantic relations in noun-modifier pairs. The problem is to classify a noun-modifier pair, such as “laser printer”, according to the semantic relation between the noun (printer) and the modifier (laser). We use a supervised nearest-neighbour algorithm that assigns a class to a given noun-modifier pair by finding the most analogous noun-modifier pair in the training data. With 30 classes of semantic relations, on a collection of 600 labeled noun-modifier pairs, the learning algorithm attains an F value of 26.5% (random guessing: 3.3%). With 5 classes of semantic relations, the F value is 43.2% (random: 20%). The performance is state-of-the-art for both verbal analogies and noun-modifier relations.
Editors: Dan Roth and Pascale Fung
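The selection step of such an analogy solver can be sketched as follows. This is a simplification: the paper's VSM builds a context vector for the pair A:B from corpus pattern frequencies, whereas here the relation is crudely modeled as a difference of hand-made toy word vectors (all vectors below are hypothetical, not corpus-derived).

```python
import numpy as np

def cosine(u, v):
    # Cosine of the angle between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def best_analogy(stem, choices, vec):
    """Pick the choice pair C:D whose relation vector is most similar
    to the stem pair A:B; the relation is modeled (crudely) as the
    difference of the word vectors."""
    a, b = stem
    stem_rel = vec[b] - vec[a]
    scored = [(cosine(stem_rel, vec[d] - vec[c]), (c, d)) for c, d in choices]
    return max(scored)[1]

# Toy vectors standing in for corpus-derived context vectors (hypothetical).
vec = {
    "mason":     np.array([1.0, 0.1, 0.0]),
    "stone":     np.array([1.0, 1.0, 0.0]),
    "carpenter": np.array([0.9, 0.1, 0.3]),
    "wood":      np.array([0.9, 1.1, 0.3]),
    "teacher":   np.array([0.0, 0.2, 1.0]),
    "chalk":     np.array([0.4, 0.3, 0.2]),
}

answer = best_analogy(("mason", "stone"),
                      [("carpenter", "wood"), ("teacher", "chalk")], vec)
print(answer)  # → ('carpenter', 'wood')
```

With real corpus statistics, the cosine comparison stays the same; only the construction of the relation vectors changes.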

3.
In this paper, we approach the problem of automatically designing fuzzy diagnosis rules for rotating machinery that appropriately evaluate the vibration data measured on the target machines. In particular, we describe the implementation and analyze the advantages and drawbacks of two soft computing techniques: knowledge-based networks (KBN) and genetic algorithms (GA). Both techniques are evaluated on the same case study, with special emphasis on their performance in terms of classification success and computation time.
A reduced version of this paper first appeared under the title “A comparative assessment on the application of knowledge-based networks and genetic algorithms to the design of fuzzy diagnosis systems for rotating machinery”, published in the book “Soft Computing in Industry—Recent Applications” (Springer Engineering).

4.
This paper describes a practical application of transformation-based analysis and code generation. An overview is given of an approach for automatically constructing Java stress tests whose execution exercises all “interesting” class initialization sequence possibilities for a given class hierarchy.

5.
In traditional distributed power control (DPC) algorithms, every user in the system is treated in the same way, i.e., the same power control algorithm is applied to every user. In this paper, we divide the users into different groups depending on their channel conditions and apply a different DPC algorithm to each group. Our motivation comes from the fact that each DPC algorithm has its own advantages and drawbacks; our aim is to “combine” the advantages of different DPC algorithms, using soft computing techniques to do so. In the simulation results, we choose the Foschini and Miljanic algorithm [3], which has relatively fast convergence but is not robust against time-varying link gain changes and CIR estimation errors, and the fixed-step algorithm of Kim [3], which is robust but converges slowly. By “combining” these two algorithms using soft computing techniques, the resulting algorithm achieves fast convergence and robustness.
Acknowledgments: This work was supported in part by GETA (Finnish Academy Graduate School on Electronics, Telecommunications and Automation), Finland.
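A minimal sketch of the two update rules and their combination, under simplifying assumptions: the Foschini–Miljanic update scales power by the ratio of target to measured CIR, the fixed-step update moves power by a constant factor, and the blending weight `w` (hand-set here) stands in for the fuzzy membership value that the paper derives via soft computing.

```python
def fm_update(p, gamma, gamma_target):
    # Foschini–Miljanic step: fast convergence, sensitive to CIR errors.
    return p * gamma_target / gamma

def fixed_step_update(p, gamma, gamma_target, delta=1.25):
    # Fixed-step (Kim-style) step: robust but slow; fixed multiplicative move.
    return p * delta if gamma < gamma_target else p / delta

def blended_update(p, gamma, gamma_target, w):
    # w in [0, 1] plays the role of a fuzzy weight: closer to 1 trusts the
    # fast FM step, closer to 0 the robust fixed step (w is illustrative).
    return (w * fm_update(p, gamma, gamma_target)
            + (1 - w) * fixed_step_update(p, gamma, gamma_target))

# One iteration for a user whose measured CIR is half the target.
p_next = blended_update(p=1.0, gamma=0.5, gamma_target=1.0, w=0.7)
print(round(p_next, 3))  # → 1.775
```

In the paper the weight would depend on the user's channel-condition group rather than being fixed.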

6.
This paper explores several variants of the Chandy-Misra null message algorithm for distributed simulation. The Chandy-Misra algorithm is one of a class of “conservative” algorithms that maintain the correct order of simulation throughout the execution of the model by means of constraints on simulation-time advance. The algorithms developed in this paper incorporate an “event-oriented” view of the physical process and message-passing. The effect of the computational workload per event on the speedup attained over an equivalent sequential simulation is examined. The effects of network topology are investigated, and performance is evaluated for the variants on transmission of null messages. The performance analysis is supported with empirical results based on an implementation of the algorithm on an Intel iPSC 32-node hypercube multiprocessor. Results show that speedups over sequential simulation of greater than N, using N processors, can be achieved in some circumstances.
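The conservative time-advance and null-message mechanism can be sketched for a single logical process as follows. The class name, structure, and lookahead handling are illustrative, not the paper's implementation: the key ideas are that events are safe to process only up to the minimum input-channel clock, and that a null message carries that bound plus the lookahead so downstream processes can advance (avoiding deadlock).

```python
import heapq

class LogicalProcess:
    """Minimal sketch of conservative (Chandy-Misra) time advance for one
    logical process with several input channels."""
    def __init__(self, lookahead):
        self.lookahead = lookahead
        self.channel_clock = {}   # last timestamp seen on each input channel
        self.pending = []         # (timestamp, event) min-heap

    def receive(self, channel, timestamp, event=None):
        # A null message updates the channel clock but carries no event.
        self.channel_clock[channel] = timestamp
        if event is not None:
            heapq.heappush(self.pending, (timestamp, event))

    def safe_time(self):
        # Events up to the minimum input-channel clock can be processed
        # without risk of a causality violation.
        return min(self.channel_clock.values())

    def null_message_timestamp(self):
        # Promise: no future event earlier than safe_time + lookahead.
        return self.safe_time() + self.lookahead

lp = LogicalProcess(lookahead=2.0)
lp.receive("ch0", 5.0, event="arrival")
lp.receive("ch1", 3.0)                    # null message on ch1
print(lp.safe_time(), lp.null_message_timestamp())  # → 3.0 5.0
```

The variants studied in the paper differ mainly in when and how often such null messages are transmitted.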

7.
An investigation of a fault diagnostic technique for internal combustion engines using the discrete wavelet transform (DWT) and a neural network is presented in this paper. Generally, the sound emission signal serves as a promising alternative for condition monitoring and fault diagnosis in rotating machinery when the vibration signal is not available. Most conventional fault diagnosis techniques using sound emission and vibration signals are based on analyzing the signal amplitude in the time or frequency domain. The continuous wavelet transform (CWT) technique was developed to obtain both time-domain and frequency-domain information, but it often requires a long computing time. In the present study, a DWT technique combined with feature selection based on the energy spectrum and fault classification using a neural network is proposed to overcome this shortcoming without losing the original properties. The features of the sound emission signal at different resolution levels are extracted by multi-resolution analysis and Parseval’s theorem [Gaing, Z. L. (2004). Wavelet-based neural network for power disturbance recognition and classification. IEEE Transactions on Power Delivery, 19, 1560–1568]. The algorithm follows previous work by Daubechies [Daubechies, I. (1988). Orthonormal bases of compactly supported wavelets. Communications on Pure and Applied Mathematics, 41, 909–996], and the “db4”, “db8” and “db20” wavelet functions are adopted to perform the proposed DWT technique. These features are then used for fault recognition by a neural network. The experimental results indicate that the proposed system using the sound emission signal is effective and can be used for fault diagnosis under various engine operating conditions.
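The energy-spectrum feature extraction can be sketched as below. To keep the sketch dependency-free it uses the Haar wavelet rather than the paper's db4/db8/db20 filters, but the structure is the same: decompose recursively, take the energy of each detail band plus the final approximation, and normalize (by Parseval's theorem the band energies sum to the signal energy).

```python
import numpy as np

def haar_dwt(signal):
    # One level of the orthonormal Haar DWT (signal length must be even).
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return approx, detail

def energy_features(signal, levels):
    """Relative energy of each detail band plus the final approximation --
    the kind of feature vector fed to the neural-network classifier."""
    feats, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        feats.append(np.sum(detail ** 2))
    feats.append(np.sum(approx ** 2))
    total = sum(feats)            # equals the signal energy (Parseval)
    return [e / total for e in feats]

sig = np.sin(np.linspace(0, 8 * np.pi, 64)) + 0.3 * np.random.randn(64)
f = energy_features(sig, levels=3)
print(len(f), round(sum(f), 6))  # → 4 1.0
```

For a real diagnosis system the Haar filter pair would be replaced by the longer Daubechies filters the paper uses.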

8.
By collecting statistics over runtime executions of a program we can answer complex queries, such as “what is the average number of packet retransmissions” in a communication protocol, or “how often does process P1 enter the critical section while process P2 waits” in a mutual exclusion algorithm. We present an extension to linear-time temporal logic that combines the temporal specification with the collection of statistical data. By translating formulas of this language to alternating automata we obtain a simple and efficient query evaluation algorithm. We illustrate our approach with examples and experimental results.
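The flavor of such queries can be illustrated over a finite execution trace. This is only a propositional special case: the paper's language handles full temporal formulas compiled to alternating automata, whereas here the two example queries from the abstract are evaluated state-by-state with hand-written predicates.

```python
def count_while(trace, event, condition):
    # "How often does `event` hold while `condition` also holds?"
    return sum(1 for state in trace if event(state) and condition(state))

def average(trace, measure):
    # "What is the average value of `measure` over the execution?"
    values = [measure(s) for s in trace]
    return sum(values) / len(values)

# A toy execution trace: per-step process states and retransmission counts.
trace = [
    {"p1": "cs",   "p2": "wait", "retx": 0},
    {"p1": "idle", "p2": "wait", "retx": 2},
    {"p1": "cs",   "p2": "idle", "retx": 1},
    {"p1": "cs",   "p2": "wait", "retx": 1},
]
print(count_while(trace,
                  lambda s: s["p1"] == "cs",
                  lambda s: s["p2"] == "wait"))   # → 2
print(average(trace, lambda s: s["retx"]))        # → 1.0
```

The automaton-based evaluation in the paper computes such statistics online, without storing the whole trace.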

9.
Genetic programming (GP) can learn complex concepts by searching for the target concept through evolution of a population of candidate hypothesis programs. However, unlike some learning techniques, such as Artificial Neural Networks (ANNs), GP does not have a principled procedure for changing parts of a learned structure based on that structure's performance on the training data. GP is missing a clear, locally optimal update procedure, the equivalent of gradient-descent backpropagation for ANNs. This article introduces a new algorithm, “internal reinforcement”, for defining and using performance feedback on program evolution. This principled internal-reinforcement mechanism is developed within a new connectionist representation for evolving parameterized programs, namely “neural programming”. We present the algorithms for credit and blame assignment in the process of learning programs using neural programming and internal reinforcement. The article includes a comprehensive overview of genetic programming and empirical experiments that demonstrate the increased learning rate obtained by using our principled program-evolution approach.

10.
Sonia. Automatica, 2009, 45(9): 2010–2017
We present a class of modified circumcenter algorithms that allow a group of agents to achieve “practical rendezvous” when they are only able to take noisy measurements of their neighbors. Assuming a uniform detection probability in a disk of radius σ about each neighbor’s true position, we show how initially connected agents converge to a practical stability ball. More precisely, a deterministic analysis allows us to guarantee convergence to such a ball under r-disk graph connectivity in 1D under the condition that r/σ be sufficiently large. A stochastic analysis leads to a similar convergence result in probability, but for any r/σ>1, and under a sequence of switching graphs that contains a connected graph within bounded time intervals. We include several simulations to discuss the performance of the proposed algorithms.
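A bare-bones 1D simulation conveys the idea: in 1D the circumcenter of a point set is the midpoint of its extremes, and each agent repeatedly moves to the circumcenter of the noisily measured positions of its r-disk neighbors. This sketch omits the connectivity-maintenance constraints of the paper's modified algorithms and uses bounded uniform noise as a stand-in for the paper's noise model; with r much larger than the noise level, the agents contract to a small "practical stability ball" rather than an exact rendezvous point.

```python
import random

def circumcenter_1d(points):
    # In 1D the circumcenter of a set is the midpoint of its extremes.
    return (min(points) + max(points)) / 2.0

def noisy_step(positions, r, sigma, rng):
    """One synchronous round: each agent moves to the circumcenter of the
    noisy measured positions of itself and its r-disk neighbors."""
    new = []
    for x in positions:
        neighbors = [y for y in positions if abs(y - x) <= r]
        measured = [y + rng.uniform(-sigma, sigma) for y in neighbors]
        new.append(circumcenter_1d(measured))
    return new

rng = random.Random(0)
pos = [0.0, 1.0, 2.0, 3.0]      # initially connected chain under r = 1.5
for _ in range(20):
    pos = noisy_step(pos, r=1.5, sigma=0.1, rng=rng)
spread = max(pos) - min(pos)
print(spread < 0.5)  # → True: practical (not exact) rendezvous
```

With sigma = 0 the same loop achieves exact rendezvous; the residual spread here is the price of the noisy measurements.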

11.
Some computationally hard problems (e.g., deduction in logical knowledge bases) are such that part of an instance is known well before the rest of it, and remains the same for several subsequent instances of the problem. In these cases, it is useful to preprocess this known part off-line so as to simplify the remaining on-line problem. In this paper we investigate such a technique in the context of intractable, i.e., NP-hard, problems. Recent results in the literature show that not all NP-hard problems behave in the same way: for some of them, preprocessing yields polynomial-time on-line simplified problems (we call them compilable), while for others, compilability implies consequences that are considered unlikely. Our primary goal is to provide a sound methodology that can be used to either prove or disprove that a problem is compilable. To this end, we define new models of computation, complexity classes, and reductions. We find complete problems for such classes, “completeness” meaning they are “the least likely to be compilable.” We also investigate preprocessing that does not yield polynomial-time on-line algorithms but generically “decreases” complexity. This leads us to define “hierarchies of compilability” that are the analog of the polynomial hierarchy. A detailed comparison of our framework to the idea of “parameterized tractability” shows the differences between the two approaches.

12.
The 21st century is seeing technological advances that make it possible to build more robust and sophisticated decision support systems than ever before. But the effectiveness of these systems may be limited if we do not consider more eclectic (or romantic) options. This paper exemplifies the potential that lies in the novel application and combination of methods, in this case to evaluating stock market purchasing opportunities using the “technical analysis” school of stock market prediction. Members of the technical analysis school predict market prices and movements based on the dynamics of market price and volume, rather than on economic fundamentals such as earnings and market share. The results of this paper support the effectiveness of the technical analysis approach through use of the “bull flag” price and volume pattern heuristic. The romantic approach to decision support exemplified in this paper is made possible by the recent development of: (1) high-performance desktop computing, (2) the methods and techniques of machine learning and soft computing, including neural networks and genetic algorithms, and (3) approaches recently developed that combine diverse classification and forecasting systems. The contribution of this paper lies in the novel application and combination of the decision-making methods and in the nature and superior quality of the results achieved.
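A template-grid scoring of the "bull flag" pattern, in the spirit of such heuristics, can be sketched as follows. Everything here is illustrative rather than the paper's method: the 4x4 weight grid is hand-made (published bull-flag templates are larger and include volume), and the scoring simply buckets the normalized price series into time columns and rewards or penalizes the band each column falls in.

```python
def bull_flag_score(prices, template):
    """Score a price window against a (hypothetical) bull-flag template grid:
    columns are time buckets, rows are normalized price bands, and each cell
    weight rewards a consolidation-then-breakout shape."""
    rows, cols = len(template), len(template[0])
    lo, hi = min(prices), max(prices)
    span = (hi - lo) or 1.0
    per_col = len(prices) / cols
    score = 0.0
    for j in range(cols):
        window = prices[int(j * per_col):int((j + 1) * per_col)]
        mean = sum(window) / len(window)
        band = min(rows - 1, int((mean - lo) / span * rows))
        score += template[rows - 1 - band][j]   # row 0 = top price band
    return score / cols

# Tiny 4x4 template (illustrative weights, not the published grid):
template = [
    [-1.0, -1.0, -0.5,  1.0],   # top band: reward only the final breakout
    [-0.5, -0.5,  0.5,  0.5],
    [ 0.5,  0.5,  0.5,  0.0],
    [ 1.0,  1.0,  0.5, -1.0],   # bottom band: reward early consolidation
]
flagging = [1.0, 1.1, 1.05, 1.1, 1.0, 1.08, 1.6, 2.0]   # consolidate, break out
declining = [2.0, 1.9, 1.8, 1.7, 1.4, 1.3, 1.1, 1.0]    # steady downtrend
print(bull_flag_score(flagging, template),
      bull_flag_score(declining, template))  # → 0.875 -0.625
```

In the paper such pattern scores feed the combined neural-network/genetic-algorithm forecasters rather than being used directly as trading signals.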

13.
The article formulates, analyses and suggests solutions to some optimal covering problems that often arise in association with geographical coordinates, although their mathematical form may be detached from this background. It shows that under a general formulation no favorable mathematical properties can be deduced beyond the “center of gravity” property, and thus that computational algorithms are seemingly the best resort for resolving the problem. An algorithm based on the steepest descent approach that exploits the “center of gravity” property is devised, and is shown to detect local minimum points. A release from the “locality trap” is provided by a stochastic algorithm.
Every part of the paper is accompanied by illustrative examples, either of an analytical nature on a segment of the real line or of a numerical nature produced by computer programs.

14.
A novel technique for maximum “a posteriori” (MAP) adaptation of maximum entropy (MaxEnt) and maximum entropy Markov models (MEMM) is presented. The technique is applied to the problem of automatically capitalizing uniformly cased text. Automatic capitalization is a practically relevant problem: speech recognition output needs to be capitalized; also, modern word processors perform capitalization among other text proofing algorithms such as spelling correction and grammar checking. Capitalization can also be used as a preprocessing step in named entity extraction or machine translation. A “background” capitalizer trained on 20 M words of Wall Street Journal (WSJ) text from 1987 is adapted to two Broadcast News (BN) test sets from 1996 (one containing ABC Primetime Live text, the other NPR Morning News/CNN Morning Edition text). The “in-domain” performance of the WSJ capitalizer is 45% better relative to the 1-gram baseline, when evaluated on a test set drawn from WSJ 1994. When evaluating on the mismatched “out-of-domain” test data, the 1-gram baseline is outperformed by 60% relative; the improvement brought by the adaptation technique using a very small amount of matched BN data (25–70k words) is about 20–25% relative. Overall, an automatic capitalization error rate of 1.4% is achieved on BN data. The performance gain obtained by employing our adaptation technique using a tiny amount of matched training data on top of the background data is striking: as little as 0.14 M words of in-domain data brings more improvement than using 10 times more background training data (from 2 M words to 20 M words).

15.
Towards an algebraic theory of information integration
Information integration systems provide uniform interfaces to varieties of heterogeneous information sources. For query answering in such systems, the current generation of query answering algorithms in local-as-view (source-centric) information integration systems all produce what has been thought of as “the best obtainable” answer, given that the source-centric approach introduces incomplete information into the virtual global relations. However, this “best obtainable” answer does not include all information that can be extracted from the sources, because it does not allow partial information. Neither does the “best obtainable” answer allow for composition of queries, meaning that querying the result of a previous query will not be equivalent to the composition of the two queries. In this paper, we provide a foundation for information integration based on the algebraic theory of incomplete information. Our framework allows us to define the semantics of partial facts and introduce the notion of the exact answer, that is, the answer that includes partial facts. We show that querying under the exact-answer semantics is compositional. We also present two methods for actually computing the exact answer. The first method is tableau-based, and is a generalization of the “inverse-rules” approach. The second, much more efficient method is a generalization of the rewriting approach, and is based on partial containment mappings introduced in the paper.

16.
Shortest distance and reliability of probabilistic networks
When the “length” of a link is not deterministic and is governed by a stochastic process, the “shortest” path between two points in the network is not necessarily always composed of the same links and depends on the state of the network. For example, in communication and transportation networks, the travel time on a link is not deterministic, and the fastest path between two points is not fixed. This paper presents an algorithm to compute the expected shortest travel time between two nodes in the network when the travel time on each link has a given independent discrete probability distribution. The algorithm assumes knowledge of all the paths between the two nodes; methods to determine the paths are referenced.
In reliability computations (i.e., the probability that two given points are connected by a path), associated with each link is a probability of “failure” and a probability of “success”. Since “failure” implies infinite travel time, the algorithm simultaneously computes reliability. The paper also discusses the algorithm's capability to simultaneously compute some other performance measures which are useful in the analysis of emergency services operating on a network.
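For small networks the quantities described above can be computed by brute-force enumeration of the joint link states, which makes the definitions concrete (the paper's algorithm is more refined than this exhaustive sketch). Failure is encoded as infinite travel time, so reliability falls out of the same loop; the expected time reported is conditional on the two nodes being connected.

```python
from itertools import product

def expected_shortest_time(paths, link_dist):
    """Expected shortest s-t travel time and terminal reliability, given the
    set of s-t paths (assumed known, as in the paper) and an independent
    discrete (time, probability) distribution per link; time=inf = failure."""
    links = sorted(link_dist)
    exp_time, reliability = 0.0, 0.0
    for combo in product(*(link_dist[l] for l in links)):
        prob, time = 1.0, {}
        for l, (t, p) in zip(links, combo):
            prob *= p
            time[l] = t
        best = min(sum(time[l] for l in path) for path in paths)
        if best != float("inf"):          # s and t are connected
            reliability += prob
            exp_time += prob * best       # accumulate conditional expectation
    return exp_time / reliability, reliability

inf = float("inf")
# Two parallel s-t paths: path A uses link "a"; path B uses links "b", "c".
paths = [["a"], ["b", "c"]]
link_dist = {
    "a": [(2.0, 0.8), (inf, 0.2)],        # link a fails with probability 0.2
    "b": [(1.0, 1.0)],
    "c": [(1.5, 0.5), (3.0, 0.5)],
}
t, rel = expected_shortest_time(paths, link_dist)
print(round(rel, 6), round(t, 3))  # → 1.0 2.25
```

Here path B never fails, so reliability is 1; the expected shortest time averages over which path happens to be fastest in each network state.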

17.
Borodin, Linial, and Saks introduced a general model for online systems called metrical task systems (1992, J. Assoc. Comput. Mach. 39(4), 745–763). In this paper, the unfair two-state problem, a natural generalization of the two-state metrical task system problem, is studied. A randomized algorithm for this problem is presented, and it is shown that this algorithm is optimal. Using the analysis of the unfair two-state problem, a proof of a decomposition theorem similar to that of Blum, Karloff, Rabani, and Saks (1992, “Proc. 33rd Symposium on Foundations of Computer Science,” pp. 197–207) is presented. This theorem allows one to design divide-and-conquer algorithms for specific metrical task systems. Our theorem gives the same bounds asymptotically, but it has less restrictive boundary conditions.

18.
This paper concerns two fundamental but somewhat neglected issues, both related to the design and analysis of randomized on-line algorithms. Motivated by early results in game theory, we define several types of randomized on-line algorithms, discuss known conditions for their equivalence, and give a natural example distinguishing between two kinds of randomization. In particular, we show that mixed randomized memoryless paging algorithms can achieve strictly better competitive performance than behavioral randomized algorithms. Next we summarize known, and derive new, “Yao principle” theorems for lower bounding competitive ratios of randomized on-line algorithms. This leads to four different theorems for bounded/unbounded and minimization/maximization problems.

19.
In this paper, air traffic flow control is approached as a constrained optimization problem on a multicommodity network. The proposed dynamic mathematical model is changed, by means of a suitable time discretization, into a “static” one, in order to use static network flow algorithms while taking into account the “unsteady” nature of air traffic congestion problems.
The complexity of the model requires some preliminary effort, such as the identification of some characteristic parameters of the system.
In this paper, network theory is applied to evaluate the influence of the time discretization interval on the model's significance with respect to the actual traffic situation. In particular, a computational example concerning the Rome air traffic control region is presented and the results are discussed.

20.
We address the novel problem of jointly evaluating multiple speech patterns for automatic speech recognition and training. We propose solutions based on both the non-parametric dynamic time warping (DTW) algorithm and the parametric hidden Markov model (HMM). We show that a hybrid approach is quite effective for noisy speech recognition. We extend the concept to HMM training wherein some patterns may be noisy or distorted. Utilizing the concept of the “virtual pattern” developed for joint evaluation, we propose selective iterative training of HMMs. When these algorithms are evaluated on burst/transient noisy speech and isolated word recognition, significant improvement in recognition accuracy is obtained over algorithms which do not utilize the joint evaluation strategy.
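For reference, the pairwise DTW recursion that the joint multi-pattern evaluation generalizes can be sketched as follows; the paper's contribution is evaluating several patterns jointly (via the "virtual pattern"), whereas this is only the classic two-pattern alignment.

```python
import math

def dtw_distance(x, y):
    """Classic dynamic time warping between two 1-D patterns using an
    absolute-difference local cost and the standard three-way recursion."""
    n, m = len(x), len(y)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch x
                                 D[i][j - 1],      # stretch y
                                 D[i - 1][j - 1])  # match step
    return D[n][m]

a = [0.0, 1.0, 2.0, 1.0, 0.0]
b = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]   # same shape, stretched at the start
print(dtw_distance(a, b))  # → 0.0: warping absorbs the stretch
```

In speech applications the scalars would be frame-level feature vectors and the local cost a vector distance, but the recursion is unchanged.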
