1.
Modeling capabilities for longitudinal data have progressed considerably, but questions remain on the extent to which method bias may negatively affect the validity of longitudinal survey data. The current study addresses the stability of individual response styles. We set up a longitudinal data collection in which the same respondents filled out 2 online questionnaires with nonoverlapping sets of heterogeneous items. Between data collections, there was a 1-year time gap. We simultaneously modeled 4 response styles that capture the major directional biases in questionnaire responses: acquiescence, disacquiescence, midpoint, and extreme response style. Drawing from latent state–trait theory, we specified a 2nd-order factor model with time-invariant and time-specific response style factors and a specifically designed covariance structure for the residual terms. The results indicate that response styles have an important stable component, a small part of which can be explained by demographics. The meaning and implications of these findings are discussed. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
2.
The BP-SOM architecture and learning rule
For some problems, the back-propagation learning rule often used for training multilayer feedforward networks appears to have serious limitations. In this paper we describe BP-SOM, an alternative training procedure. In BP-SOM the traditional back-propagation learning rule is combined with unsupervised learning in self-organizing maps. While the multilayer feedforward network is trained, the hidden-unit activations of the feedforward network are used as training material for the accompanying self-organizing maps. After a few training cycles, the maps develop, to a certain extent, self-organization. The information in the maps is used in updating the connection weights of the feedforward network. The effect is that during BP-SOM learning, hidden-unit activations of patterns associated with the same class become more similar to each other. Results on two hard-to-learn classification tasks show that the BP-SOM architecture and learning rule offer a strong alternative for training multilayer feedforward networks with back-propagation.
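The interplay described above — backprop on the feedforward network, plus a pull of hidden activations toward class-consistent SOM prototypes — can be illustrated with a minimal sketch. This is an illustrative toy reconstruction, not the authors' implementation: the network size, learning rates, SOM labelling scheme, and the `som_influence` weighting are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_bp_som(X, y, n_hidden=4, som_units=9, epochs=500, lr=0.5, som_influence=0.1):
    """Toy BP-SOM sketch: a one-hidden-layer net trained with backprop,
    plus a SOM over hidden activations whose class-labelled winning
    units pull same-class hidden activations together."""
    W1 = rng.normal(0, 0.5, (X.shape[1], n_hidden))
    W2 = rng.normal(0, 0.5, (n_hidden, 1))
    som = rng.normal(0, 0.5, (som_units, n_hidden))  # SOM codebook vectors
    som_labels = np.zeros(som_units)                 # running class vote per unit
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1)          # hidden-unit activations
        out = sig(h @ W2)
        # SOM side: move each winning unit toward the hidden activation,
        # and track the dominant class of each unit by majority vote.
        for i, hv in enumerate(h):
            w = np.argmin(((som - hv) ** 2).sum(1))  # best-matching unit
            som[w] += 0.1 * (hv - som[w])
            som_labels[w] += 1 if y[i] else -1
        # BP side: standard squared-error gradients, plus a pull of each
        # hidden vector toward its class-consistent best-matching SOM unit.
        d_out = (out - y.reshape(-1, 1)) * out * (1 - out)
        d_hid = d_out @ W2.T * h * (1 - h)
        for i, hv in enumerate(h):
            same = np.where(np.sign(som_labels) == (1 if y[i] else -1))[0]
            if same.size:
                w = same[np.argmin(((som[same] - hv) ** 2).sum(1))]
                d_hid[i] += som_influence * (hv - som[w]) * h[i] * (1 - h[i])
        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_hid
    return lambda Xq: (sig(sig(Xq @ W1) @ W2) > 0.5).astype(int).ravel()

# Tiny toy task: logical OR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 1], float)
predict = train_bp_som(X, y)
```

The key design point the abstract describes is the extra term added to `d_hid`: it moves hidden representations of same-class patterns toward a shared prototype, which is what makes them "become more similar to each other."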
3.
Advances in technology and the wide use of the World Wide Web have made it possible for new product development organizations to access multiple sources of data related to customer complaints. However, the number of customer complaints about highly innovative consumer electronic products is still increasing; that is, product quality and reliability are at risk. This article aims to understand why existing solutions, from the literature as well as from industry, for dealing with these increasingly complex multiple data sources are not able to manage product quality and reliability. Three case studies in industry are discussed. On the basis of the case study results, this article also identifies a new research agenda that is needed to improve product quality and reliability under these circumstances. Copyright © 2011 John Wiley & Sons, Ltd.
4.
A recent trend in technological innovation is towards the development of increasingly multifunctional and complex products to be used within rich socio‐cultural contexts such as the high‐end office, the digital home, and professional or personal healthcare. One important consequence of the development of strongly innovative products is a growing market uncertainty regarding ‘if’, ‘how’, and ‘when’ users can and will adopt such products. Often, it is not even clear to what extent these products are understood and interacted with in the intended manner. The mentioned problems have already become an evident concern in the field, where there is a significant rise in the number of seemingly sound products being complained about, signaling a lack of soft reliability. In this paper, we position soft reliability as a growing and critical industrial problem, whose solution requires new academic expertise from various disciplines. We illustrate potential root causes for soft reliability problems, such as the discrepancy between the perceptions of users and designers. We discuss the approach necessary to effectively capture subjective feedback data from actual users, e.g. when they contact call centers. Furthermore, we present a novel observation and analysis approach that enables insight into actual product usage, and outline opportunities for combining such objective data with the subjective feedback provided by users. Copyright © 2008 John Wiley & Sons, Ltd.
5.

Artificial Neural Networks (ANNs) are able, in general and in principle, to learn complex tasks. Interpretation of models induced by ANNs, however, is often extremely difficult due to the non-linear and non-symbolic nature of the models. To enable better interpretation of the way knowledge is represented in ANNs, we present BP-SOM, a neural network architecture and learning algorithm. BP-SOM is a combination of a multi-layered feed-forward network (MFN) trained with the back-propagation learning rule (BP), and Kohonen’s self-organising maps (SOMs). The involvement of the SOM in learning leads to highly structured knowledge representations both at the hidden layer and on the SOMs. We focus on a particular phenomenon within trained BP-SOM networks, viz. that the SOM part acts as an organiser of the learning material into instance subsets that tend to be homogeneous with respect to both class labelling and subsets of attribute values. We show that the structured knowledge representation can either be exploited directly for rule extraction, or be used to explain a generic type of checksum solution found by the network for learning M-of-N tasks.

6.
Effective information systems require the existence of explicit process models. A completely specified process design needs to be developed in order to enact a given business process. This development is time consuming and often subjective and incomplete. We propose a method that constructs the process model from process log data by determining the relations between process tasks. To predict these relations, we employ machine learning techniques to induce rule sets. These rule sets are induced from simulated process log data generated by varying process characteristics such as noise and log size. Tests reveal that the induced rule sets have a high predictive accuracy on new data. The effects of noise and imbalance of execution priorities during the discovery of the relations between process tasks are also discussed. Knowing the causal, exclusive, and parallel relations, a process model expressed in the Petri net formalism can be built. We illustrate our approach with real-world data in a case study.
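The raw material for inducing such rule sets is statistics computed over the log, e.g. how often one task directly follows another. The feature extraction below is a hypothetical illustration of that first step, not the paper's actual setup; the function name and the choice of a single direct-succession metric are assumptions.

```python
from collections import Counter
from itertools import product

def succession_metrics(log):
    """For every ordered task pair (a, b), the fraction of all
    direct successions in the log in which a is immediately
    followed by b. Counts like these are the kind of log-derived
    features from which rules predicting causal/exclusive/parallel
    relations between tasks could be induced."""
    direct = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            direct[(a, b)] += 1
    tasks = sorted({t for trace in log for t in trace})
    total = sum(direct.values())
    return {(a, b): direct[(a, b)] / total for a, b in product(tasks, repeat=2)}

# Three example traces; b and c swap order, hinting at parallelism.
log = [list("abcd"), list("acbd"), list("abcd")]
m = succession_metrics(log)
```

On this toy log, `m[('a', 'b')]` is 2/9 (two of the nine observed direct successions), while never-adjacent pairs such as `('d', 'a')` get 0 — exactly the kind of signal a rule learner can threshold on.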
7.
We describe the IGTree learning algorithm, which compresses an instance base into a tree structure. The concept of information gain is used as a heuristic function for performing this compression. IGTree produces trees that, compared to other lazy learning approaches, reduce storage requirements and the time required to compute classifications. Furthermore, we obtained similar or better generalization accuracy with IGTree when trained on two complex linguistic tasks, viz. letter–phoneme transliteration and part-of-speech tagging, when compared to alternative lazy learning and decision tree approaches (viz., IB1, information-gain-weighted IB1, and C4.5). A third experiment, with the task of word hyphenation, demonstrates that when the mutual differences in information gain of features are too small, IGTree as well as information-gain-weighted IB1 perform worse than IB1. These results indicate that IGTree is a useful algorithm for problems characterized by the availability of a large number of training instances described by symbolic features with sufficiently differing information gain values.
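The core of the compression the abstract describes — expand features in a fixed order of decreasing information gain, store a default class at every node, and stop expanding once a node is class-homogeneous — can be sketched as follows. This is a minimal reconstruction of the idea, not the published implementation; the dict-based tree representation is an assumption.

```python
import math
from collections import Counter, defaultdict

def info_gain(instances, feat):
    """Information gain of feature `feat` over (features, label) pairs."""
    def entropy(labels):
        n = len(labels)
        return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())
    split = defaultdict(list)
    for feats, lab in instances:
        split[feats[feat]].append(lab)
    remainder = sum(len(g) / len(instances) * entropy(g) for g in split.values())
    return entropy([lab for _, lab in instances]) - remainder

def build_igtree(instances, feats):
    """IGTree-style compression: `feats` is the fixed feature order
    (by decreasing information gain); every node stores a default
    class, and expansion stops on class-homogeneous nodes."""
    node = {"default": Counter(lab for _, lab in instances).most_common(1)[0][0],
            "children": {}}
    if not feats or len({lab for _, lab in instances}) == 1:
        return node   # homogeneous (or exhausted): store default only
    f, rest = feats[0], feats[1:]
    split = defaultdict(list)
    for inst in instances:
        split[inst[0][f]].append(inst)
    node["feature"] = f
    for value, sub in split.items():
        node["children"][value] = build_igtree(sub, rest)
    return node

def classify(tree, feats):
    """Follow matching arcs; on a missing arc, fall back to the
    deepest default class reached so far."""
    while "feature" in tree and feats[tree["feature"]] in tree["children"]:
        tree = tree["children"][feats[tree["feature"]]]
    return tree["default"]

# Toy instance base: XOR over two symbolic features.
data = [((v, w), v ^ w) for v in (0, 1) for w in (0, 1)]
order = sorted(range(2), key=lambda f: -info_gain(data, f))
tree = build_igtree(data, order)
```

The defaults are what make the tree a *compression* of the instance base rather than an index over it: unseen feature values are resolved by the best guess available at the deepest matching node, mimicking the behaviour of the nearest stored instances.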
8.
Workflow mining: discovering process models from event logs
Contemporary workflow management systems are driven by explicit process models, i.e., a completely specified workflow design is required in order to enact a given workflow process. Creating a workflow design is a complicated time-consuming process and, typically, there are discrepancies between the actual workflow processes and the processes as perceived by the management. Therefore, we have developed techniques for discovering workflow models. The starting point for such techniques is a so-called "workflow log" containing information about the workflow process as it is actually being executed. We present a new algorithm to extract a process model from such a log and represent it in terms of a Petri net. However, we also demonstrate that it is not possible to discover arbitrary workflow processes. We explore a class of workflow processes that can be discovered. We show that the α-algorithm can successfully mine any workflow represented by a so-called SWF-net.
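The first step of the α-algorithm is to derive ordering relations between tasks from the directly-follows pairs in the log: causal (a → b when a directly precedes b but never the reverse), parallel (a ∥ b when both orders occur), and unrelated (a # b when neither occurs). The sketch below illustrates only this footprint step, not the subsequent Petri-net construction, and is a reconstruction rather than the authors' code.

```python
from itertools import product

def alpha_footprint(log):
    """Footprint table of the α-algorithm: classify every ordered
    task pair from the log's directly-follows pairs as causal (->),
    reverse-causal (<-), parallel (||), or unrelated (#)."""
    follows = {(a, b) for trace in log for a, b in zip(trace, trace[1:])}
    tasks = sorted({t for trace in log for t in trace})
    rel = {}
    for a, b in product(tasks, repeat=2):
        ab, ba = (a, b) in follows, (b, a) in follows
        if ab and not ba:
            rel[(a, b)] = "->"
        elif ba and not ab:
            rel[(a, b)] = "<-"
        elif ab and ba:
            rel[(a, b)] = "||"
        else:
            rel[(a, b)] = "#"
    return rel

# Log of a process where b and c run in parallel between a and d.
log = [list("abcd"), list("acbd")]
rel = alpha_footprint(log)
```

On this log the footprint recovers a → b, a → c, b ∥ c, and a # d, which is exactly the information the α-algorithm then turns into places and transitions of a Petri net.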
9.
A critical problem in software development is the monitoring, control, and improvement of the processes of software developers. Software processes are often not explicitly modeled, and manuals to support the development work contain abstract guidelines and procedures. Consequently, there are huge differences between ‘actual’ and ‘official’ processes: “the actual process is what you do, with all its omissions, mistakes, and oversights. The official process is what the book, i.e., a quality manual, says you are supposed to do” (Humphrey in A discipline for software engineering. Addison-Wesley, New York, 1995). Software developers lack support to identify, analyze and better understand their processes. Consequently, process improvements are often not based on an in-depth understanding of the ‘actual’ processes, but on organization-wide improvement programs or ad hoc initiatives of individual developers. In this paper, we show that, based on particular data from software development projects, the underlying software development processes can be extracted and that more realistic process models can be constructed automatically. This is called software process mining (Rubin et al. in Process mining framework for software processes. Software process dynamics and agility. Springer Berlin, Heidelberg, 2007). The goal of process mining is to better understand the development processes, to compare constructed process models with the ‘official’ guidelines and procedures in quality manuals and, subsequently, to improve development processes. This paper reports on process mining case studies in a large industrial company in The Netherlands. The subject of the process mining is a particular process: the change control board (CCB) process. The results of process mining are fed back to practice in order to subsequently improve the CCB process.
10.
Genetic process mining: an experimental evaluation
One of the aims of process mining is to retrieve a process model from an event log. The discovered models can be used as objective starting points during the deployment of process-aware information systems (Dumas et al., eds., Process-Aware Information Systems: Bridging People and Software Through Process Technology. Wiley, New York, 2005) and/or as a feedback mechanism to check prescribed models against enacted ones. However, current techniques have problems when mining processes that contain non-trivial constructs and/or when dealing with the presence of noise in the logs. Most of the problems happen because many current techniques are based on local information in the event log. To overcome these problems, we try to use genetic algorithms to mine process models. The main motivation is to benefit from the global search performed by this kind of algorithm. The non-trivial constructs are tackled by choosing an internal representation that supports them. The problem of noise is naturally tackled by the genetic algorithm because, by definition, these algorithms are robust to noise. The main challenge in a genetic approach is the definition of a good fitness measure because it guides the global search performed by the genetic algorithm. This paper explains how the genetic algorithm works. Experiments with synthetic and real-life logs show that the fitness measure indeed leads to the mining of process models that are complete (can reproduce all the behavior in the log) and precise (do not allow for extra behavior that cannot be derived from the event log). The genetic algorithm is implemented as a plug-in in the ProM framework.
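The completeness/precision trade-off that such a fitness measure balances can be made concrete with a toy sketch. This is an illustrative reconstruction, not the paper's actual measure: representing a candidate model by the set of traces it can replay, and the 0.8/0.2 weighting, are both assumptions.

```python
def fitness(model_behaviour, log, extra_behaviour):
    """Toy fitness for genetic process mining, balancing
    completeness (fraction of log traces the candidate model can
    replay) against precision (penalising traces the model allows
    beyond the log)."""
    replayed = sum(1 for trace in log if trace in model_behaviour)
    completeness = replayed / len(log)             # reproduce the log
    precision = 1.0 / (1 + len(extra_behaviour))   # punish extra behaviour
    return 0.8 * completeness + 0.2 * precision

log = [('a', 'b', 'd'), ('a', 'c', 'd')]
exact = fitness({('a', 'b', 'd'), ('a', 'c', 'd')}, log, set())
overgeneral = fitness({('a', 'b', 'd'), ('a', 'c', 'd'), ('a', 'd')},
                      log, {('a', 'd')})
```

A model that replays the whole log with no extra behaviour scores 1.0, while an over-general "flower-like" model is penalised through the precision term — giving the genetic search a gradient toward models that are both complete and precise, as the abstract describes.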