Subscription full text: 34 articles; free: 0 articles.
By subject: chemical industry (9), mechanical engineering and instrumentation (3), building science (1), energy and power (1), light industry (2), water resources engineering (1), radio and electronics (4), general industrial technology (6), automation technology (7).
By year: 2023 (2), 2022 (4), 2020 (1), 2019 (3), 2018 (1), 2015 (2), 2013 (2), 2012 (2), 2011 (4), 2010 (1), 2009 (2), 2008 (1), 2006 (2), 2004 (2), 2002 (1), 2001 (1), 1991 (1), 1990 (1), 1986 (1).
A total of 34 matching records were retrieved.
1.
Workflow Management Systems (WFMS) coordinate the execution of logically related tasks in an organization. Each workflow executed on such a system is an instance of some workflow schema. A workflow schema is defined by a set of tasks that are coordinated through dependencies. Workflows generated from the same schema may differ in the tasks they execute. An important issue that must be addressed while designing a workflow is deciding which tasks are needed for the workflow to complete; we refer to this set of tasks as the completion set. Since different tasks are executed in different workflow instances, a workflow schema may be associated with multiple completion sets. Incorrect specification of completion sets may prevent some workflows from completing. This, in turn, causes the workflow to hold on to its resources and raises availability problems. Manually generating these sets for large workflow schemas can be an error-prone and tedious process. Our goal is to automate this process. We investigate the factors that affect the completion of a workflow. Specifically, we study the impact of control-flow dependencies on completion sets and show how this knowledge can be used to generate these sets automatically. We provide an algorithm that application developers can use to generate the completion sets associated with a workflow schema. Generating all possible completion sets for a large workflow is computationally intensive. Toward this end, we show how to approximately estimate the number of completion sets; if this number exceeds a user-specified threshold, we do not generate all completion sets.
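The sketch below is a minimal illustration of the idea, not the algorithm from the paper: a tiny workflow schema is modeled with AND-splits (every branch runs) and XOR-splits (exactly one branch runs), and all completion sets are enumerated by recursion. The task names and the schema structure are hypothetical.

```python
# A minimal sketch (not the paper's algorithm): enumerate the completion sets
# of a tiny workflow schema whose control flow is given by AND-splits (all
# branches run) and XOR-splits (exactly one branch runs).

from itertools import product

# successor spec per task: ("and", [...]) runs every branch, ("xor", [...])
# runs exactly one branch, ("seq", [t]) runs the single next task, None = end.
SCHEMA = {
    "start":      ("xor", ["review", "fast_track"]),
    "review":     ("seq", ["approve"]),
    "fast_track": ("seq", ["approve"]),
    "approve":    ("and", ["notify", "archive"]),
    "notify":     None,
    "archive":    None,
}

def completion_sets(task="start"):
    """Return the completion sets (as frozensets) reachable from `task`."""
    spec = SCHEMA[task]
    if spec is None:
        return {frozenset([task])}
    kind, succs = spec
    if kind == "xor":                      # exactly one branch executes
        results = set()
        for s in succs:
            results |= {fs | {task} for fs in completion_sets(s)}
        return results
    # "and" / "seq": every listed successor executes; combine their sets
    results = set()
    for combo in product(*(completion_sets(s) for s in succs)):
        results.add(frozenset().union(*combo) | {task})
    return results

if __name__ == "__main__":
    for cs in sorted(completion_sets(), key=len):
        print(sorted(cs))
```

For this schema the enumeration yields two completion sets, one per XOR branch; the combinatorial blow-up the abstract mentions comes from nesting many such choices.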
2.
Many kinds of information are hidden in email data, such as the information being exchanged, the time of exchange, and the user IDs participating in the exchange. Analyzing email data can reveal valuable information about the social networks of a single user or of multiple users, the topics being discussed, and so on. In this paper, we describe a novel approach for temporally analyzing the communication patterns embedded in email data based on time series segmentation. The approach computes egocentric communication patterns of a single user, as well as sociocentric communication patterns involving multiple users. Time series segmentation is used to uncover patterns that may span multiple time points and to study how these patterns change over time. To find egocentric patterns, the email communication of a user is represented as an item-set time series. An optimal segmentation of the item-set time series is constructed, from which patterns are extracted. To find sociocentric patterns, the email data is represented as an item-set-group time series. Patterns involving multiple users are then extracted from an optimal segmentation of the item-set-group time series. The proposed approach is applied to the Enron email data set and produces very promising results.
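As a small illustration of the egocentric representation described above, the sketch below builds an item-set time series from an email log: each time point (here, a day) is mapped to the set of user IDs the focal user exchanged mail with. The log entries and field layout are hypothetical.

```python
# A minimal sketch (hypothetical data): egocentric item-set time series
# construction from an email log.

from collections import defaultdict
from datetime import date

# (day, sender, recipients) tuples -- hypothetical log entries
EMAILS = [
    (date(2001, 5, 1), "alice", ["bob", "carol"]),
    (date(2001, 5, 1), "dave",  ["alice"]),
    (date(2001, 5, 2), "alice", ["bob"]),
    (date(2001, 5, 4), "erin",  ["alice", "bob"]),
]

def itemset_series(log, user):
    """Map each day to the set of counterparts `user` communicated with."""
    series = defaultdict(set)
    for day, sender, recipients in log:
        if sender == user:
            series[day] |= set(recipients)
        elif user in recipients:
            series[day].add(sender)
    return dict(sorted(series.items()))

if __name__ == "__main__":
    for day, items in itemset_series(EMAILS, "alice").items():
        print(day, sorted(items))
```

A sociocentric item-set-group time series would follow the same pattern, except that each time point would hold one item set per user rather than a single set.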
3.
Identifying time periods with a burst of activities related to a topic is an important problem in analyzing time-stamped documents. In this paper, we propose an approach to extract a hot spot of a given topic in a time-stamped document set. Topics can be basic, consisting of a simple list of keywords, or complex; logical relationships such as and, or, and not are used to build complex topics from basic topics. A presence measure of a topic, based on fuzzy set theory, is introduced to compute the amount of information related to the topic in the document set. Each interval in the time period of the document set is associated with a numeric value that we call the discrepancy score. A high discrepancy score indicates that the documents in the time interval are more focused on the topic than those outside of the time interval. A hot spot of a given topic is defined as a time interval with the highest discrepancy score. We first describe a naive implementation for extracting hot spots. We then construct an algorithm called EHE (Efficient Hot Spot Extraction) that uses several strategies to improve performance. We also introduce the notion of a topic DAG to facilitate efficient computation of presence measures of complex topics. The proposed approach is illustrated by several experiments on a subset of the TDT-Pilot Corpus and the DBLP conference data set. The experiments show that the proposed EHE algorithm significantly outperforms the naive one and that the extracted hot spots of given topics are meaningful.
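To make the notion of a discrepancy-based hot spot concrete, the sketch below scores every candidate interval by a simplified criterion (inside-interval average presence minus outside-interval average presence) and returns the best interval. This corresponds to the naive enumeration described above; the exact discrepancy score and the EHE optimizations in the paper may differ, and the presence values used here are hypothetical.

```python
# A minimal sketch (simplified scoring, not necessarily the paper's exact
# discrepancy score): find the interval whose documents are most focused on
# the topic relative to the rest of the timeline.

# presence measure of the topic at each time point (e.g., per week)
presence = [0.1, 0.2, 0.1, 0.7, 0.9, 0.8, 0.2, 0.1]

def hot_spot(values):
    """Return (start, end, score) of the interval maximizing the inside-outside gap."""
    n, total = len(values), sum(values)
    best = (0, 0, float("-inf"))
    for i in range(n):
        for j in range(i, n):
            inside = values[i:j + 1]
            outside_n = n - len(inside)
            in_avg = sum(inside) / len(inside)
            out_avg = (total - sum(inside)) / outside_n if outside_n else 0.0
            score = in_avg - out_avg
            if score > best[2]:
                best = (i, j, score)
    return best

if __name__ == "__main__":
    start, end, score = hot_spot(presence)
    print(f"hot spot: time points {start}..{end}, score {score:.3f}")
```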
4.
We propose a special type of time series, which we call an item-set time series, to facilitate the temporal analysis of software version histories, email logs, stock market data, and similar sources. In an item-set time series, each observed data value is a set of discrete items. We formalize the concept of an item-set time series and present efficient algorithms for segmenting a given item-set time series. Segmentation partitions the time series into a sequence of segments, where each segment is constructed by combining consecutive time points of the time series. Each segment is associated with an item set that is computed from the item sets of the time points in that segment using a function we call a measure function. We then define a concept called the segment difference, which measures the difference between the item set of a segment and the item sets of the time points in that segment. The segment difference values are required to construct an optimal segmentation of the time series. We describe novel and efficient algorithms to compute segment difference values for each of the measure functions described in the paper. We outline a dynamic-programming-based scheme to construct an optimal segmentation of the given item-set time series. We use these segmentation techniques to analyze the temporal content of three data sets: Enron email, stock market data, and a synthetic data set. The experimental results show that an optimal segmentation of item-set time series data captures much more temporal content than a segmentation constructed solely from the number of time points in each segment, without examining the item-set data at those time points, and can be used to analyze different types of temporal data.
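The sketch below shows the general shape of a dynamic-programming segmentation of an item-set time series. It is illustrative rather than the paper's algorithms: the measure function chosen here takes the union of the item sets in a segment, and the segment difference counts the items by which each time point falls short of that union. The item sets are hypothetical.

```python
# A minimal sketch (illustrative measure function and DP, not the paper's
# exact algorithms): split an item-set time series into k segments while
# minimizing the total segment difference.

series = [{"a"}, {"a", "b"}, {"a", "b"}, {"c"}, {"c", "d"}, {"d"}]

def seg_cost(points):
    """Segment difference under the union measure function (illustrative)."""
    seg_set = set().union(*points)
    return sum(len(seg_set - p) for p in points)

def optimal_segmentation(points, k):
    """Return (total cost, segment start indices after 0) for a k-segment split."""
    n = len(points)
    INF = float("inf")
    # cost[i][j] = segment difference of points[i..j]
    cost = [[seg_cost(points[i:j + 1]) for j in range(n)] for i in range(n)]
    # dp[m][j] = best cost of covering points[0..j] with m segments
    dp = [[INF] * n for _ in range(k + 1)]
    cut = [[-1] * n for _ in range(k + 1)]
    dp[1] = cost[0][:]
    for m in range(2, k + 1):
        for j in range(n):
            for i in range(m - 1, j + 1):
                c = dp[m - 1][i - 1] + cost[i][j]
                if c < dp[m][j]:
                    dp[m][j], cut[m][j] = c, i
    # recover segment boundaries
    bounds, j = [], n - 1
    for m in range(k, 1, -1):
        i = cut[m][j]
        bounds.append(i)
        j = i - 1
    return dp[k][n - 1], sorted(bounds)

if __name__ == "__main__":
    total, boundaries = optimal_segmentation(series, k=2)
    print("total segment difference:", total, "segments start at:", [0] + boundaries)
```

For this toy series the optimal 2-segment split separates the {a, b}-dominated prefix from the {c, d}-dominated suffix, which is the kind of content-aware boundary a length-based segmentation would miss.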
5.
Extraction of sequences of events from news and other documents, based on the publication times of these documents, has been shown to be extremely effective in tracking past events. This paper addresses the issue of constructing an optimal information-preserving decomposition of the time period associated with a given document set, i.e., a decomposition with the smallest number of subintervals, subject to no loss of information. We introduce the notion of the compressed interval decomposition, in which each subinterval consists of consecutive time points having identical information content. We define optimality and show that any optimal information-preserving decomposition of the time period is a refinement of the compressed interval decomposition. We define several special classes of measure functions (functions that measure the prevalence of keywords in the document set and assign them numeric values), based on their effect on the information computed as document sets are combined. We give algorithms, appropriate for different classes of measure functions, for computing an optimal information-preserving decomposition of a given document set. We studied the effectiveness of these algorithms by computing several compressed interval and information-preserving decompositions for a subset of the Reuters-21578 document set. The experiments support the obvious conclusion that the temporal information gleaned from a document set is strongly dependent on the measure function used and on other user-defined parameters.
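A compressed interval decomposition as defined above can be sketched very simply: merge consecutive time points whose information content is identical. In the toy example below the information content of a time point is taken to be the set of keywords with a non-zero measure; the measure function and keyword counts are hypothetical, and the paper's more general measure-function classes are not modeled.

```python
# A minimal sketch (illustrative): compressed interval decomposition by
# merging consecutive time points with identical information content.

from itertools import groupby

# per-time-point keyword measures, e.g. counts of documents mentioning each keyword
timeline = [
    {"merger": 2, "oil": 0},
    {"merger": 3, "oil": 0},
    {"merger": 0, "oil": 1},
    {"merger": 0, "oil": 4},
    {"merger": 1, "oil": 1},
]

def info_content(point):
    """Information content of a time point: keywords with a non-zero measure."""
    return frozenset(k for k, v in point.items() if v > 0)

def compressed_intervals(points):
    """Group consecutive time points with identical information content."""
    intervals, start = [], 0
    for key, group in groupby(points, key=info_content):
        length = len(list(group))
        intervals.append((start, start + length - 1, sorted(key)))
        start += length
    return intervals

if __name__ == "__main__":
    for lo, hi, kws in compressed_intervals(timeline):
        print(f"time points {lo}..{hi}: {kws}")
```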
6.
Sintering studies were conducted using kaolin, metakaolin, zeolite 4A, and various synthetic mixtures of Al2O3 and SiO2 in the presence of Li2CO3 and LiCl as fluxing agents. Various compositions of the above were prepared, and conventional sintering studies were conducted at temperatures of 900°–1450°C with soaking periods of 1–3 h. Kaolin, metakaolin, and amorphized kaolin in the presence of Li2CO3 showed nucleation centers of β-spodumene as pink specks, whereas synthetic mixtures of Al2O3 and SiO2 did not behave in the same manner. To determine whether the pink specks formed were color centers or F centers, the samples were subjected to UV, IR, and X-ray irradiation; however, the samples showed no tenebrescence properties. External addition of iron as an impurity in a nonlayered system also resulted in pink-speck formation. This observation indicated that impurities present in the natural kaolin were the cause of this phenomenon. Moreover, the LiCl-based samples did not produce pink specks, even though the kaolinitic samples contained iron as an impurity. Therefore, although β-spodumene formed in aluminosilicates in the presence of both Li2CO3 and LiCl, the pink variety of β-spodumene (kunzite) formed only in lithium-rich aluminosilicates and in the presence of iron as an impurity. The phase identification and microstructure were explained based on XRD, DTA, and SEM studies.
7.
The proposed model considers a supply chain with a single vendor and a single buyer for a single product, taking into account the effects of deterioration and credit-period incentives. We also consider the situation in which the vendor and the buyer decide on an investment in ordering cost reduction and coordinate their inventory policies to minimize their joint average annual cost. We study and analyze the benefits of ordering cost reduction and credit-period incentives in a coordinated supply chain. Numerical examples with an exponential ordering cost function are used to evaluate the benefit of the proposed coordinated strategy.
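To illustrate the flavor of such a joint-cost model, the sketch below evaluates a generic single-vendor, single-buyer annual cost in which an investment reduces the buyer's ordering cost exponentially. The cost structure, the exponential form A(I) = A0·exp(-I/k), and all parameter values are assumptions for illustration only; they are not the paper's model, which additionally handles deterioration and credit-period terms.

```python
# A minimal sketch (assumed cost structure and parameters, not the paper's
# model): joint average annual cost with exponential ordering-cost reduction.

import math

D  = 1000.0   # annual demand (units/year)
A0 = 100.0    # buyer ordering cost per order before any investment
k  = 50.0     # effectiveness of investment in the exponential reduction
S  = 400.0    # vendor setup cost per production run
h_b, h_v = 5.0, 3.0   # buyer / vendor holding cost per unit per year

def joint_cost(Q, I, n=2):
    """Joint average annual cost for lot size Q, investment I, n shipments per run."""
    A = A0 * math.exp(-I / k)                 # reduced ordering cost
    ordering = D / Q * A                      # buyer ordering cost
    setup    = D / (n * Q) * S                # vendor setup cost
    holding  = Q / 2 * (h_b + (n - 1) * h_v)  # simplified holding cost
    return I + ordering + setup + holding

if __name__ == "__main__":
    # crude grid search over lot size and investment level
    best = min(
        ((joint_cost(Q, I), Q, I) for Q in range(10, 401, 10)
                                  for I in range(0, 201, 10)),
        key=lambda t: t[0],
    )
    print(f"cost {best[0]:.2f} at Q = {best[1]}, investment = {best[2]}")
```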
8.
Lead selenide (PbSe) thin films with thicknesses ranging from 50 to 200 nm were prepared on glass substrates by the vacuum evaporation technique. Structural studies revealed that the prepared films are strongly oriented along the (2 0 0) plane with a rock-salt crystal structure. Various structural parameters such as grain size (D), lattice constant (a), micro strain (ε), and dislocation density (δ) were calculated, and the surface morphology of the films was analyzed. The optical absorption of the films begins in the visible region, and the obtained energy gap lies between 1.5 and 1.9 eV. The room-temperature photoluminescence spectrum shows an emission peak in the visible region (380-405 nm), and a blue shift was observed with decreasing film thickness. The electrical mobility, resistivity, carrier concentration, and mean free path (L) of the free carriers were studied for all the samples and compared.
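The structural parameters named above are commonly derived from XRD peak data using standard relations (Bragg's law for the lattice constant, the Scherrer equation for grain size, strain from peak broadening, and dislocation density as 1/D²). The sketch below applies these textbook formulas to a hypothetical (2 0 0) peak; the peak position and width are not the paper's data.

```python
# A minimal sketch using standard XRD relations (hypothetical peak values):
# grain size D, cubic lattice constant a, micro strain, dislocation density.

import math

WAVELENGTH = 1.5406e-10   # Cu K-alpha wavelength in metres
K = 0.9                   # Scherrer shape factor

def structural_parameters(two_theta_deg, fwhm_deg, hkl=(2, 0, 0)):
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)                    # peak broadening in rad
    d = WAVELENGTH / (2 * math.sin(theta))           # Bragg spacing
    a = d * math.sqrt(sum(i * i for i in hkl))       # cubic lattice constant
    D = K * WAVELENGTH / (beta * math.cos(theta))    # Scherrer grain size
    strain = beta * math.cos(theta) / 4              # micro strain
    delta = 1 / D ** 2                               # dislocation density
    return {"a (nm)": a * 1e9, "D (nm)": D * 1e9,
            "strain": strain, "delta (m^-2)": delta}

if __name__ == "__main__":
    # hypothetical (2 0 0) peak: 2-theta = 29.1 deg, FWHM = 0.35 deg
    for name, value in structural_parameters(29.1, 0.35).items():
        print(f"{name}: {value:.4g}")
```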
9.

Face authentication is a challenging task when validating a user under uncontrolled conditions such as variations in expression, pose, illumination, and occlusion. To address these issues, the proposed work considers all of these factors in inter- and intra-personal face authentication. During enrollment, the facial region of a still image of the authorized user is detected and features are extracted using the local tetra pattern (LTrP) technique. These features are used to train a neural network, the fuzzy adaptive learning control network (FALCON), for classification. During authentication, a test image that may vary in expression, pose, illumination, and occlusion is processed with LTrP and FALCON, and the resulting features are compared with the stored feature set using the proposed multi-factor face authentication algorithm to authenticate the person. The work is evaluated on 1150 face images collected from the JAFFE, Yale, ORL, and AR datasets; 1106 of the 1150 constrained images are authenticated correctly. The second phase of the research achieves the highest recognition rate, 96%, among the conventional methods compared.
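The enrollment/authentication flow described above can be sketched as below. The real system uses LTrP features and a FALCON network; here those stages are replaced by simple placeholders (a grayscale histogram and a distance threshold), and all function names and the threshold value are hypothetical.

```python
# A minimal sketch of the enrollment/authentication pipeline only; LTrP and
# FALCON are replaced by placeholder components for illustration.

import numpy as np

def extract_features(image):
    """Placeholder for LTrP feature extraction: a normalized grayscale histogram."""
    hist, _ = np.histogram(image, bins=32, range=(0, 256))
    return hist / max(hist.sum(), 1)

class Authenticator:
    def __init__(self, threshold=0.15):
        self.gallery = {}          # user id -> enrolled feature vector
        self.threshold = threshold # maximum allowed feature distance

    def enroll(self, user_id, image):
        """Enrollment: face detection is omitted here; store the features."""
        self.gallery[user_id] = extract_features(image)

    def authenticate(self, user_id, image):
        """Authentication: compare test features with the enrolled ones."""
        if user_id not in self.gallery:
            return False
        dist = np.linalg.norm(self.gallery[user_id] - extract_features(image))
        return dist <= self.threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    enrolled = rng.integers(0, 256, size=(64, 64))
    probe = np.clip(enrolled + rng.integers(-10, 10, size=(64, 64)), 0, 255)
    auth = Authenticator()
    auth.enroll("user1", enrolled)
    print("genuine claim accepted:", auth.authenticate("user1", probe))
    print("unknown claim rejected:", not auth.authenticate("user2", probe))
```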
10.
Petroleum refineries around the world have adopted different technological options, including physical, chemical, and biological treatment methods, to manage the solid wastes generated during the refining process and the stocking of crude oil. In this investigation, bacterially mediated oil separation was effected. Two strains of Bacillus were isolated from petroleum-contaminated soils and inoculated into slurries of sludge and of sludge-sand combinations. The bacteria separated the oil into a floating scum within 48 h, with an efficiency of 97% at sludge levels of ≤5% in the sludge-sand mixture. The activity was traced to the production of biosurfactants by the bacteria.