Modeling capabilities for longitudinal data have progressed considerably, but questions remain on the extent to which method bias may negatively affect the validity of longitudinal survey data. The current study addresses the stability of individual response styles. We set up a longitudinal data collection in which the same respondents filled out 2 online questionnaires with nonoverlapping sets of heterogeneous items. Between data collections, there was a 1-year time gap. We simultaneously modeled 4 response styles that capture the major directional biases in questionnaire responses: acquiescence, disacquiescence, midpoint, and extreme response style. Drawing from latent state–trait theory, we specified a 2nd-order factor model with time-invariant and time-specific response style factors and a specifically designed covariance structure for the residual terms. The results indicate that response styles have an important stable component, a small part of which can be explained by demographics. The meaning and implications of these findings are discussed.
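The four response styles above are often operationalised in simpler work as raw proportions of answer categories. As an illustrative sketch only (the study itself models the styles as latent factors in a 2nd-order factor model; the helper below is a hypothetical count-based alternative, not the authors' measurement model):

```python
def style_indices(responses, scale_max=5):
    """Count-based indices for the four response styles on a Likert scale.

    Hypothetical illustration only: the study models the styles as latent
    factors, not as raw proportions of answer categories.
    """
    n = len(responses)
    mid = (scale_max + 1) / 2
    return {
        'acquiescence':    sum(r > mid for r in responses) / n,   # agree side
        'disacquiescence': sum(r < mid for r in responses) / n,   # disagree side
        'midpoint':        sum(r == mid for r in responses) / n,  # neutral
        'extreme':         sum(r in (1, scale_max) for r in responses) / n,
    }

idx = style_indices([5, 5, 4, 3, 1])
```

Such proportion scores conflate style with content, which is exactly why the study instead separates time-invariant and time-specific style factors over heterogeneous items.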
For some problems, the back-propagation learning rule often used for training multilayer feedforward networks appears to have
serious limitations. In this paper we describe BP-SOM, an alternative training procedure. In BP-SOM the traditional back-propagation
learning rule is combined with unsupervised learning in self-organizing maps. While the multilayer feedforward network is
trained, the hidden-unit activations of the feedforward network are used as training material for the accompanying self-organizing
maps. After a few training cycles, the maps develop, to a certain extent, self-organization. The information in the maps is
used in updating the connection weights of the feedforward network. The effect is that during BP-SOM learning, hidden-unit
activations of patterns associated with the same class become more similar to each other. Results on two hard-to-learn classification tasks show that the BP-SOM architecture and learning rule offer
a strong alternative for training multilayer feedforward networks with back-propagation.
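The training loop described above can be sketched in a toy reconstruction. This is not the authors' implementation: the network sizes, learning rates, the influence weight `alpha`, the class-vote bookkeeping for SOM cells, and the BMU-only SOM update (no neighbourhood function) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class XOR-like task
X = rng.uniform(-1, 1, (200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

n_hidden, grid = 6, 4                              # hidden units, SOM side
W1 = rng.normal(0, 0.5, (2, n_hidden))
W2 = rng.normal(0, 0.5, (n_hidden, 1))
som = rng.normal(0, 0.5, (grid * grid, n_hidden))  # SOM cell vectors
votes = np.zeros((grid * grid, 2))                 # class votes per cell

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, som_lr, alpha = 0.5, 0.1, 0.2  # backprop rate, SOM rate, SOM influence

for epoch in range(200):
    for x, t in zip(X, y):
        h = sigmoid(x @ W1)                       # hidden-unit activations
        o = sigmoid(h @ W2)[0]                    # network output
        bmu = np.argmin(((som - h) ** 2).sum(1))  # best-matching SOM cell
        votes[bmu, t] += 1
        # SOM error term: pull h toward the BMU vector only when the cell's
        # majority class agrees with the pattern's class
        som_err = np.zeros(n_hidden)
        if np.argmax(votes[bmu]) == t:
            som_err = alpha * (som[bmu] - h)
        # Combined back-propagation + SOM-based weight update
        d_o = (o - t) * o * (1 - o)
        d_h = (d_o * W2[:, 0] - som_err) * h * (1 - h)
        W2 -= lr * np.outer(h, [d_o])
        W1 -= lr * np.outer(x, d_h)
        som[bmu] += som_lr * (h - som[bmu])       # Kohonen update, BMU only

acc = np.mean((sigmoid(sigmoid(X @ W1) @ W2)[:, 0] > 0.5) == y)
```

The extra error term is what makes hidden-unit activations of same-class patterns cluster: patterns of one class are attracted to the same SOM cells.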
Artificial neural networks (ANNs) are able, in general and in principle, to learn complex tasks. Interpretation of models induced by ANNs, however, is often extremely difficult due to the non-linear and non-symbolic nature of the models. To enable better interpretation of the way knowledge is represented in ANNs, we present BP-SOM, a neural network architecture and learning algorithm. BP-SOM is a combination of a multi-layered feed-forward network (MFN) trained with the back-propagation learning rule (BP), and Kohonen’s self-organising maps (SOMs). The involvement of the SOM in learning leads to highly structured knowledge representations both at the hidden layer and on the SOMs. We focus on a particular phenomenon within trained BP-SOM networks, viz. that the SOM part acts as an organiser of the learning material into instance subsets that tend to be homogeneous with respect to both class labelling and subsets of attribute values. We show that the structured knowledge representation can either be exploited directly for rule extraction, or be used to explain a generic type of checksum solution found by the network for learning M-of-N tasks.
Effective information systems require the existence of explicit process models. A completely specified process design needs
to be developed in order to enact a given business process. This development is time consuming and often subjective and incomplete.
We propose a method that constructs the process model from process log data, by determining the relations between process
tasks. To predict these relations, we employ machine learning techniques to induce rule sets. These rule sets are induced from
simulated process log data generated by varying process characteristics such as noise and log size. Tests reveal that the
induced rule sets have a high predictive accuracy on new data. The effects of noise and imbalance of execution priorities
during the discovery of the relations between process tasks are also discussed. Knowing the causal, exclusive, and parallel
relations, a process model expressed in the Petri net formalism can be built. We illustrate our approach with real-world data
in a case study.
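A minimal illustration of the first step, extracting directly-follows relations from a noisy log. The paper induces rule sets with machine learning to handle noise; the frequency threshold below is just an assumed, illustrative noise filter, not the induced rules.

```python
from collections import Counter

# Toy event log: ten clean traces plus one noisy trace with a spurious 'd'
log = [['a', 'b', 'c']] * 10 + [['a', 'd', 'b', 'c']]

# Count how often each task pair directly follows one another
counts = Counter((x, y) for trace in log for x, y in zip(trace, trace[1:]))
total = sum(counts.values())

threshold = 0.05  # assumed cut-off: drop pairs seen in < 5% of transitions
follows = {pair for pair, c in counts.items() if c / total >= threshold}
```

With the threshold in place, the spurious pairs involving `'d'` are filtered out while the genuine `a → b → c` ordering survives, which is the intuition behind making relation discovery robust to noise and log size.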
We describe the IGTree learning algorithm, which compresses an instance base into a tree structure. The concept of information gain is used as a heuristic function for performing this compression. IGTree produces trees that, compared to other lazy learning approaches, reduce storage requirements and the time required to compute classifications. Furthermore, we obtained similar or better generalization accuracy with IGTree when trained on two complex linguistic tasks, viz. letter–phoneme transliteration and part-of-speech tagging, when compared to alternative lazy learning and decision tree approaches (viz., IB1, information-gain-weighted IB1, and C4.5). A third experiment, with the task of word hyphenation, demonstrates that when the mutual differences in information gain of features are too small, IGTree as well as information-gain-weighted IB1 perform worse than IB1. These results indicate that IGTree is a useful algorithm for problems characterized by the availability of a large number of training instances described by symbolic features with sufficiently differing information gain values.
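The core idea can be sketched in a few lines: order features by information gain, build a trie over feature values in that order, store a default (majority) class at every node, and stop expanding as soon as a node is class-homogeneous; classification follows the path and backs off to the deepest matching default. This is a toy reconstruction under those assumptions, not the released implementation.

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(data, f):
    labels = [c for _, c in data]
    by_value = Counter(x[f] for x, _ in data)
    remainder = sum(cnt / len(data) * entropy([c for x, c in data if x[f] == v])
                    for v, cnt in by_value.items())
    return entropy(labels) - remainder

def build(data, order):
    # Every node stores a default (majority) class; expansion stops when the
    # node is class-homogeneous, which is where the compression comes from.
    node = {'default': Counter(c for _, c in data).most_common(1)[0][0],
            'kids': {}}
    if order and len({c for _, c in data}) > 1:
        f = order[0]
        for v in {x[f] for x, _ in data}:
            node['kids'][v] = build([(x, c) for x, c in data if x[f] == v],
                                    order[1:])
    return node

def classify(tree, x, order):
    node = tree
    for f in order:
        if x[f] in node['kids']:
            node = node['kids'][x[f]]
        else:
            break          # back off to the deepest matching node's default
    return node['default']

data = [({'f1': 'a', 'f2': 'x'}, 0), ({'f1': 'a', 'f2': 'y'}, 0),
        ({'f1': 'b', 'f2': 'x'}, 1), ({'f1': 'b', 'f2': 'y'}, 1)]
order = sorted(['f1', 'f2'], key=lambda f: -info_gain(data, f))
tree = build(data, order)
```

Because the most informative feature is tested first, most paths terminate early, which is why storage and classification time drop relative to plain lazy learning; and when gains are nearly equal, the fixed feature order becomes arbitrary, matching the hyphenation result above.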
Contemporary workflow management systems are driven by explicit process models, i.e., a completely specified workflow design is required in order to enact a given workflow process. Creating a workflow design is a complicated time-consuming process and, typically, there are discrepancies between the actual workflow processes and the processes as perceived by the management. Therefore, we have developed techniques for discovering workflow models. The starting point for such techniques is a so-called "workflow log" containing information about the workflow process as it is actually being executed. We present a new algorithm to extract a process model from such a log and represent it in terms of a Petri net. However, we also demonstrate that it is not possible to discover arbitrary workflow processes. We explore a class of workflow processes that can be discovered. We show that the α-algorithm can successfully mine any workflow represented by a so-called SWF-net.
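The ordering relations the α-algorithm starts from can be read off a log directly. A toy sketch of this footprint step follows; the full algorithm then derives the places of the Petri net from these relations, which is omitted here.

```python
from itertools import product

# Classic toy log: b and c can occur in either order; e is an alternative
log = [['a', 'b', 'c', 'd'], ['a', 'c', 'b', 'd'], ['a', 'e', 'd']]

tasks = sorted({t for trace in log for t in trace})
follows = {(x, y) for trace in log for x, y in zip(trace, trace[1:])}

def relation(x, y):
    if (x, y) in follows and (y, x) not in follows:
        return '->'   # causal dependency
    if (x, y) in follows and (y, x) in follows:
        return '||'   # parallel: observed in both orders
    if (y, x) in follows:
        return '<-'   # causal dependency, reversed
    return '#'        # exclusive: never directly follow each other

footprint = {(x, y): relation(x, y) for x, y in product(tasks, tasks)}
```

The footprint classifies `b` and `c` as parallel and `b` and `e` as exclusive, exactly the causal/parallel/exclusive distinctions the mined Petri net has to express.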
A critical problem in software development is the monitoring, control, and improvement of the processes of software developers.
Software processes are often not explicitly modeled, and manuals to support the development work contain abstract guidelines
and procedures. Consequently, there are huge differences between ‘actual’ and ‘official’ processes: “the actual process is
what you do, with all its omissions, mistakes, and oversights. The official process is what the book, i.e., a quality manual,
says you are supposed to do” (Humphrey in A discipline for software engineering. Addison-Wesley, New York, 1995). Software developers lack support to identify, analyze and better understand their processes. Consequently, process improvements
are often not based on an in-depth understanding of the ‘actual’ processes, but on organization-wide improvement programs
or ad hoc initiatives of individual developers. In this paper, we show that, based on particular data from software development
projects, the underlying software development processes can be extracted and that more realistic process models
can be constructed automatically. This is called software process mining (Rubin et al. in Process mining framework for software processes.
Software process dynamics and agility. Springer Berlin, Heidelberg, 2007). The goal of process mining is to better understand the development processes, to compare constructed process models with
the ‘official’ guidelines and procedures in quality manuals and, subsequently, to improve development processes. This paper
reports on process mining case studies in a large industrial company in The Netherlands. The subject of the process mining
is a particular process: the change control board (CCB) process. The results of process mining are fed back to practice in
order to subsequently improve the CCB process.
One of the aims of process mining is to retrieve a process model from an event log. The discovered models can be used as objective starting points during the deployment of process-aware information systems (Dumas et al., eds., Process-Aware Information
Systems: Bridging People and Software Through Process Technology. Wiley, New York, 2005) and/or as a feedback mechanism to
check prescribed models against enacted ones. However, current techniques have problems when mining processes that contain
non-trivial constructs and/or when dealing with the presence of noise in the logs. Most of the problems happen because many
current techniques are based on local information in the event log. To overcome these problems, we try to use genetic algorithms to mine process models. The main
motivation is to benefit from the global search performed by this kind of algorithm. The non-trivial constructs are tackled by choosing an internal representation that
supports them. The problem of noise is naturally tackled by the genetic algorithm because, by definition, these algorithms
are robust to noise. The main challenge in a genetic approach is the definition of a good fitness measure because it guides
the global search performed by the genetic algorithm. This paper explains how the genetic algorithm works. Experiments with
synthetic and real-life logs show that the fitness measure indeed leads to the mining of process models that are complete (can reproduce all the behavior in the log) and precise (do not allow for extra behavior that cannot be derived from the event log). The genetic algorithm is implemented as a plug-in
in the ProM framework.
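A drastically simplified picture of such a fitness measure: candidate models are reduced here to sets of allowed directly-follows pairs rather than the internal representation the genetic miner actually evolves, and the equal weighting of the two terms is an illustrative assumption.

```python
# Toy log: two traces the mined model should be able to replay
log = [['a', 'b', 'd'], ['a', 'c', 'd']]
observed = {(x, y) for trace in log for x, y in zip(trace, trace[1:])}

def fitness(allowed):
    """Reward replaying the log (completeness) and penalise extra behavior
    (precision). The 0.5/0.5 weighting is an illustrative assumption."""
    if not allowed:
        return 0.0
    completeness = len(observed & allowed) / len(observed)
    precision = len(observed & allowed) / len(allowed)
    return 0.5 * completeness + 0.5 * precision

fitting = {('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd')}
loose = fitting | {('b', 'c'), ('d', 'a')}  # allows behavior never seen in the log
```

The fitting candidate replays everything in the log and allows nothing extra, so it scores higher than the loose one; this is the sense in which the fitness measure steers the global search toward models that are both complete and precise.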