21.
Atrial fibrillation (AF) is the most common cardiac arrhythmia and entails an increased risk of thromboembolic events. Predicting the termination of an AF episode with noninvasive techniques can benefit patients, doctors and health systems. The method described in this paper is based on two-lead surface electrocardiograms (ECGs): 1-min ECG recordings of AF episodes of N-type (not terminating within an hour after the end of the record), S-type (terminating 1 min after the end of the record) and T-type (terminating immediately after the end of the record). These records are organised into three learning sets (N, S and T) and two test sets (A and B). Starting from these ECGs, the atrial and ventricular activities were separated using beat classification and class-averaged beat subtraction, followed by the evaluation of seven parameters representing atrial or ventricular activity. Stepwise discriminant analysis selected the set comprising the dominant atrial frequency (DAF, an index of atrial activity) and the average heart rate (HRmean, an index of ventricular activity) as optimal for discriminating N-type from T-type episodes. The linear classifier, estimated on the 20 cases of the N and T learning sets, achieved 90% accuracy on the 30 cases of a test set for N/T-type discrimination. The same classifier correctly classified 89% of the 46 cases for N/S-type discrimination. The method has shown good results and seems suitable for clinical application, although a larger dataset would be very useful for improving and validating the algorithms and for developing an earlier predictor of the spontaneous termination time of paroxysmal AF.
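As a rough illustration of the two-feature linear classifier this abstract describes, the sketch below trains a linear discriminant on the two selected indices, DAF and HRmean. All feature values here are synthetic placeholders, not data from the study:

```python
# Minimal sketch of a two-feature linear classifier discriminating
# N-type vs T-type AF episodes from the dominant atrial frequency (DAF)
# and mean heart rate (HRmean). Feature values are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical training features: columns are [DAF (Hz), HRmean (bpm)].
X_n = np.column_stack([rng.normal(6.5, 0.5, 10), rng.normal(95, 8, 10)])  # N-type
X_t = np.column_stack([rng.normal(5.0, 0.5, 10), rng.normal(80, 8, 10)])  # T-type
X = np.vstack([X_n, X_t])
y = np.array(["N"] * 10 + ["T"] * 10)

clf = LinearDiscriminantAnalysis().fit(X, y)

# Classify a new 1-min episode from its two indices.
episode = np.array([[5.2, 82.0]])   # DAF = 5.2 Hz, HRmean = 82 bpm
print(clf.predict(episode))         # e.g. ['T']
```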
22.
Ansari L, Fagas G, Colinge JP, Greer JC. Nano Letters 2012, 12(5): 2222–2227.
Energy bandgaps are observed to increase with decreasing diameter due to quantum confinement in quasi-one-dimensional semiconductor nanostructures, or nanowires. A similar effect is observed in semimetal nanowires for sufficiently small wire diameters: a bandgap is induced, and the semimetal nanowire becomes a semiconductor. We demonstrate that, on the length scale on which the semimetal-semiconductor transition occurs, this enables the use of bandgap engineering to form a field-effect transistor near atomic dimensions and eliminates the need for doping in the transistor's source, channel, or drain. By removing the requirement to supply free carriers through dopant impurities, quantum confinement allows materials engineering to overcome the primary obstacle to fabricating sub-5 nm transistors, enabling aggressive scaling to near-atomic limits.
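The gap-opening mechanism can be illustrated with a textbook effective-mass estimate: the confinement energy scales as 1/d², so below some diameter it exceeds the semimetal's band overlap and a gap appears. The sketch below shows only this scaling argument with illustrative parameter values (m_eff, overlap_ev); it is not the atomistic method used in the paper:

```python
# Back-of-the-envelope illustration of confinement-induced gap opening
# in a semimetal wire: confinement energy grows as 1/d^2 and eventually
# exceeds the band overlap. Parameter values are illustrative only.
import numpy as np

HBAR = 1.054571817e-34   # J*s
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J

def confinement_gap_ev(d_nm, m_eff=0.2, overlap_ev=0.04):
    """Estimated gap (eV) for diameter d_nm; negative = still semimetallic.
    m_eff and overlap_ev are hypothetical placeholder values."""
    d = d_nm * 1e-9
    e_conf = (HBAR * np.pi) ** 2 / (2 * m_eff * M_E * (d / 2) ** 2) / EV
    return e_conf - overlap_ev

for d in [20, 10, 5, 3]:
    print(f"d = {d:2d} nm -> gap ~ {confinement_gap_ev(d):+.3f} eV")
```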
23.
We study the flows induced by different rework loops in serial manufacturing systems with inspection stations. Average values of these flows, together with queuing-network formulas, are used for performance evaluation and optimisation of production lines. An application is presented for jointly solving the problems of inventory control and inspection-station allocation in a CONWIP production line.
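A minimal sketch of the average-flow bookkeeping involved: a station whose output fails inspection with probability p sees an effective arrival rate inflated by the geometric number of passes, and standard M/M/1 formulas then give utilisation and work-in-process. The rates and rework probabilities below are illustrative assumptions, not values from the paper:

```python
# Sketch of average flows in a serial line with rework loops: a part
# failing inspection at station i returns with probability p_i, so the
# station sees demand_rate / (1 - p_i) on average.

def line_metrics(demand_rate, service_rates, rework_probs):
    """Per-station utilisation and mean number in system (M/M/1)."""
    metrics = []
    for mu, p in zip(service_rates, rework_probs):
        lam_eff = demand_rate / (1.0 - p)   # geometric number of passes
        rho = lam_eff / mu                  # utilisation
        assert rho < 1.0, "station is unstable"
        l_sys = rho / (1.0 - rho)           # mean WIP at the station
        metrics.append((rho, l_sys))
    return metrics

for i, (rho, l) in enumerate(line_metrics(0.8, [1.2, 1.5, 1.1], [0.1, 0.05, 0.2]), 1):
    print(f"station {i}: utilisation {rho:.2f}, mean WIP {l:.2f}")
```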
24.
A two-dimensional model of methane thermal decomposition reactors is developed which accounts for coupled radiative heat transfer and polydisperse carbon-particle nucleation, growth, and transport. The model uses the Navier–Stokes equations for the fluid dynamics, the radiative transfer equation for radiation absorption by methane and the particle species, the advection–diffusion equation for gas- and particle-species transport, and a sectional method for particle nucleation, heterogeneous growth, and coagulation. The model is applied to a tubular laminar-flow reactor. The simulation results indicate the development of a reaction boundary layer inside the reactor, which results in significant variation of the local particle size distribution across the reactor. © 2011 American Institute of Chemical Engineers AIChE J, 58: 2545–2556, 2012
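To make the sectional idea concrete, the sketch below implements a deliberately stripped-down 0-D sectional population balance: particle-volume space is discretised into geometrically spaced sections, nucleation feeds the smallest one, and (only equal-size) coagulation moves mass upward. The kernel and rate constants are arbitrary placeholders, far simpler than the reactor model described above:

```python
# Toy 0-D sectional method: sections double in volume, nucleation feeds
# section 0, and equal-size coagulation promotes particles one section up.
import numpy as np

N_SEC = 12
v = 1.0 * 2.0 ** np.arange(N_SEC)   # section volumes: v[k+1] = 2 * v[k]
n = np.zeros(N_SEC)                 # number density per section
BETA = 1e-3                         # constant coagulation kernel (illustrative)
J_NUC = 1.0                         # nucleation rate into the smallest section
DT, STEPS = 0.1, 1000

for _ in range(STEPS):
    dn = np.zeros(N_SEC)
    dn[0] += J_NUC
    # Two section-k particles of volume v[k] merge into one particle of
    # volume 2*v[k], i.e. exactly one section-(k+1) particle.
    for k in range(N_SEC - 1):
        rate = 0.5 * BETA * n[k] ** 2
        dn[k] -= 2.0 * rate
        dn[k + 1] += rate
    n += DT * dn

print("number densities:", np.round(n, 2))
# Coagulation conserves volume, so total volume ~ J_NUC * DT * STEPS * v[0].
print("total particle volume:", round(float((n * v).sum()), 2))
```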
25.
Super-resolution in respiratory synchronized positron emission tomography
Respiratory motion is a major source of reduced quality in positron emission tomography (PET). In order to minimize its effects, the use of respiratory synchronized acquisitions, leading to gated frames, has been suggested. Such frames, however, have a low signal-to-noise ratio (SNR) as they contain reduced statistics. Super-resolution (SR) techniques make use of the motion in a sequence of images in order to improve their quality: they aim at enhancing a low-resolution image belonging to a sequence of images representing different views of the same scene. In this work, a maximum a posteriori (MAP) super-resolution algorithm was implemented and applied to respiratory gated PET images for motion compensation. An edge-preserving Huber regularization term was used to ensure convergence. Motion fields were recovered using a B-spline based elastic registration algorithm. The performance of the SR algorithm was evaluated on both simulated and clinical datasets by assessing image SNR as well as the contrast, position and extent of the different lesions. Results were compared to summing the registered synchronized frames on both simulated and clinical datasets. The super-resolution image had higher SNR (by a factor of over 4 on average) and lesion contrast (by a factor of 2) than a single respiratory synchronized frame using the same reconstruction matrix size. In comparison to the motion-corrected or motion-free images, a similar SNR was obtained, while improvements of up to 20% in the recovered lesion size and contrast were measured. Finally, the recovered lesion locations on the SR images were systematically closer to the true simulated lesion positions. These observations concerning SNR, lesion contrast and size were confirmed on the two clinical datasets included in the study. In conclusion, applying SR techniques to respiratory motion synchronized images leads to motion compensation combined with improved image SNR and contrast, without any increase in the overall acquisition times.
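A minimal sketch of the MAP-with-Huber-prior idea, reduced to 1-D signals with known integer shifts in place of the B-spline elastic motion fields used in the paper. The shift amounts, noise level and regularisation weight are illustrative assumptions:

```python
# Toy MAP super-resolution: recover a high-resolution signal x from
# several low-resolution, motion-shifted frames y_k by gradient descent
# on a least-squares data term plus an edge-preserving Huber prior.
import numpy as np

def downsample(x, f=2):
    return x.reshape(-1, f).mean(axis=1)

def huber_grad(t, delta=0.1):
    # Derivative of the Huber penalty: quadratic near 0, linear beyond delta.
    return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

rng = np.random.default_rng(1)
truth = np.zeros(64)
truth[20:28] = 1.0                                 # hypothetical "lesion"
shifts = [0, 1, 2, 3]                              # known "respiratory" motion
frames = [downsample(np.roll(truth, s)) + 0.05 * rng.standard_normal(32)
          for s in shifts]                         # low-resolution gated frames

x = np.repeat(frames[0], 2)                        # initial high-resolution guess
lam, step = 0.5, 0.2
for _ in range(200):
    g = np.zeros_like(x)
    for s, y in zip(shifts, frames):
        r = downsample(np.roll(x, s)) - y          # per-frame data residual
        g += np.roll(np.repeat(r, 2) / 2.0, -s)    # adjoint of shift + downsample
    hg = huber_grad(np.diff(x, append=x[-1]))      # Huber prior on finite differences
    g += lam * (np.roll(hg, 1) - hg)
    x -= step * g

print("RMSE vs truth:", round(float(np.sqrt(np.mean((x - truth) ** 2))), 4))
```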
26.
Cross-Validation (CV), and out-of-sample performance-estimation protocols in general, are often employed both for (a) selecting the optimal combination of algorithms and values of hyper-parameters (called a configuration) for producing the final predictive model, and (b) estimating the predictive performance of the final model. However, the cross-validated performance of the best configuration is optimistically biased. We present an efficient bootstrap method that corrects for this bias, called Bootstrap Bias Corrected CV (BBC-CV). BBC-CV's main idea is to bootstrap the whole process of selecting the best-performing configuration on the out-of-sample predictions of each configuration, without additional training of models. In comparison to the alternatives, namely nested cross-validation (Varma and Simon, BMC Bioinform 7(1):91, 2006) and the method of Tibshirani and Tibshirani (Ann Appl Stat 822–829, 2009), BBC-CV is computationally more efficient, has smaller variance and bias, and is applicable to any metric of performance (accuracy, AUC, concordance index, mean squared error). Subsequently, we again employ the idea of bootstrapping the out-of-sample predictions to speed up the CV process itself: using a bootstrap-based statistical criterion, we stop training models on new folds for configurations that are inferior with high probability. We name this method Bootstrap Bias Corrected with Dropping CV (BBCD-CV); it is both efficient and provides accurate performance estimates.
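The core BBC-CV move, bootstrapping configuration selection over the pooled out-of-sample predictions without retraining any model, fits in a few lines. The sketch below uses synthetic predictions from pure-noise configurations to show the optimistic bias of the naive estimate and its correction; names and values are illustrative, not the authors' code:

```python
# Bootstrap bias correction for configuration selection (BBC-CV idea):
# select the winner on bootstrapped rows of the out-of-sample prediction
# matrix, then score it on the out-of-bag rows; average over bootstraps.
import numpy as np

def bbc_cv(pred, y, metric, n_boot=500, seed=0):
    """pred: (n_samples, n_configs) pooled out-of-sample predictions."""
    rng = np.random.default_rng(seed)
    n, n_conf = pred.shape
    scores = []
    for _ in range(n_boot):
        in_bag = rng.integers(0, n, n)            # bootstrap sample of rows
        oob = np.setdiff1d(np.arange(n), in_bag)  # out-of-bag rows
        if oob.size == 0:
            continue
        # Select the winner on in-bag rows only...
        best = np.argmax([metric(y[in_bag], pred[in_bag, j])
                          for j in range(n_conf)])
        # ...and score it on rows not used for the selection.
        scores.append(metric(y[oob], pred[oob, best]))
    return float(np.mean(scores))

accuracy = lambda t, p: float(np.mean(t == p))
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
pred = rng.integers(0, 2, (200, 30))   # 30 pure-noise configs: true accuracy 0.5
naive = max(accuracy(y, pred[:, j]) for j in range(30))
print(f"naive best-config estimate: {naive:.3f}")                      # > 0.5
print(f"BBC-CV estimate:            {bbc_cv(pred, y, accuracy):.3f}")  # ~ 0.5
```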
27.
Provenance information for the digital objects maintained by digital libraries and archives is crucial for authenticity assessment, reproducibility and accountability. Such information is commonly stored as metadata placed in various Metadata Repositories (MRs) or Knowledge Bases (KBs). Nevertheless, in various settings it is prohibitive to store the provenance of each digital object, due to the high storage space required for complete provenance. In this paper, we introduce provenance-based inference rules as a means to complete the provenance information, to reduce the amount of provenance information that has to be stored, and to ease quality control (e.g., corrections). Roughly, we show how provenance information can be propagated by identifying a number of basic inference rules over a core conceptual model for representing provenance. The propagation of provenance concerns fundamental modelling concepts such as actors, activities, events, devices and information objects, and their associations. However, since a MR/KB is not static but changes over time due to several factors, the question that arises is how we can satisfy update requests while still supporting the aforementioned inference rules. Towards this end, we elaborate on the specification of the required add/delete operations, consider two different semantics for deletion of information, and provide the corresponding update algorithms. Finally, we report extensive comparative results for different repository policies regarding the derivation of new knowledge, in datasets containing up to one million RDF triples. The results allow us to understand the trade-offs related to the use of inference rules on storage space and on the performance of queries and updates.
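As a toy illustration of provenance propagation, the sketch below forward-chains two made-up rules over a handful of triples: parts inherit the creation event of the whole, and the actor of a creation event is attributed to whatever that event created. The predicate names and rules are hypothetical stand-ins for the paper's inference rules:

```python
# Forward chaining of provenance-propagation rules over RDF-like triples.
triples = {
    ("img42", "isPartOf", "collection7"),
    ("collection7", "wasCreatedBy", "digitisationEvent1"),
    ("digitisationEvent1", "carriedOutBy", "scannerLab"),
}

def infer(kb):
    changed = True
    while changed:
        changed = False
        new = set()
        for s, p, o in kb:
            if p == "isPartOf":       # R1: part inherits the whole's creation event
                for s2, p2, o2 in kb:
                    if s2 == o and p2 == "wasCreatedBy":
                        new.add((s, "wasCreatedBy", o2))
            if p == "wasCreatedBy":   # R2: propagate the event's actor
                for s2, p2, o2 in kb:
                    if s2 == o and p2 == "carriedOutBy":
                        new.add((s, "wasAttributedTo", o2))
        if not new <= kb:
            kb |= new
            changed = True
    return kb

for t in sorted(infer(set(triples))):
    print(t)   # img42 ends up both created by the event and attributed to scannerLab
```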
28.
29.
We reconsider the well-studied Selfish Routing game with affine latency functions. The Price of Anarchy for this class of games takes maximum value 4/3; this maximum is attained already for a simple network of two parallel links, known as Pigou's network. We improve upon the value 4/3 by means of Coordination Mechanisms. We increase the latency functions of the edges in the network, i.e., if $\ell_e(x)$ is the latency function of an edge e, we replace it by $\hat{\ell}_e(x)$ with $\ell_e(x) \le \hat{\ell}_e(x)$ for all x. Then an adversary fixes a demand rate as input. The engineered Price of Anarchy of the mechanism is defined as the worst-case ratio of the Nash social cost in the modified network over the optimal social cost in the original network. Formally, if $\hat{C}_N(r)$ denotes the cost of the worst Nash flow in the modified network for rate r and $C_{\mathit{opt}}(r)$ denotes the cost of the optimal flow in the original network for the same rate, then $$\mathit{ePoA} = \max_{r \ge 0} \frac{\hat{C}_N(r)}{C_{\mathit{opt}}(r)}.$$ We first exhibit a simple coordination mechanism that achieves, for any network of parallel links, an engineered Price of Anarchy strictly less than 4/3. For the case of two parallel links our basic mechanism gives 5/4 = 1.25. Then, for the case of two parallel links, we describe an optimal mechanism; its engineered Price of Anarchy lies between 1.191 and 1.192.
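For reference, a small computation reproducing the 4/3 baseline on Pigou's network (latencies $\ell_1(x) = x$ and $\ell_2(x) = 1$): at each demand rate the Nash flow overloads the variable link while the optimum splits traffic, and the ratio peaks at 4/3 at r = 1. This illustrates the unmodified game only, not the coordination mechanisms of the paper:

```python
# Price of Anarchy on Pigou's network: links l1(x) = x and l2(x) = 1.
import numpy as np

def nash_cost(r):
    # At equilibrium, link 1 attracts flow until its latency matches
    # link 2's constant latency of 1; any excess spills onto link 2.
    x1 = min(r, 1.0)
    return x1 * x1 + (r - x1) * 1.0

def opt_cost(r):
    # Minimise x^2 + (r - x): interior optimum at x = 1/2.
    x1 = min(r, 0.5)
    return x1 * x1 + (r - x1) * 1.0

rates = np.linspace(0.01, 2.0, 400)
ratios = [nash_cost(r) / opt_cost(r) for r in rates]
i = int(np.argmax(ratios))
print(f"worst-case ratio {ratios[i]:.4f} at demand r = {rates[i]:.3f}")  # ~4/3 at r = 1
```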
30.
An optimized artificial immune network-based classification model, namely OPTINC, was developed for remote sensing-based land use/land cover (LULC) classification. Major improvements of OPTINC over a typical immune network-based classification model (aiNet) include (1) preservation of the best antibodies of each land cover class from the antibody population suppression, which ensures that each land cover class is represented by at least one antibody; (2) mutation rates that are self-adaptive according to the model performance between training generations, which improves the model convergence; and (3) incorporation of both Euclidean distance and spectral angle mapping distance to measure affinity between two feature vectors, using a genetic algorithm-based optimization, which helps the model better discriminate LULC classes with similar characteristics (see the sketch below). OPTINC was evaluated using two sites with different remote sensing data: a residential area in Denver, CO with high-spatial-resolution QuickBird imagery and LiDAR data, and a suburban area in Monticello, UT with HyMap hyperspectral imagery. A decision tree, a multilayer feed-forward back-propagation neural network, and aiNet were also tested for comparison. Classification accuracy, local homogeneity of the classified images, and model sensitivity to training sample size were examined. OPTINC outperformed the other models, with higher accuracy and more spatially cohesive land cover classes with limited salt-and-pepper noise. OPTINC was relatively less sensitive to training sample size than the neural network, followed by the decision tree.
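A sketch of the blended affinity measure described in item (3) above, combining Euclidean distance with the spectral angle between two feature vectors; the mixing weight w stands in for what the genetic algorithm would tune, and all values are hypothetical:

```python
# Combined Euclidean + spectral-angle affinity between two feature vectors.
import numpy as np

def affinity(a, b, w=0.5):
    """Weighted blend of Euclidean distance and spectral angle (lower = more similar).
    w is a hypothetical fixed weight standing in for the GA-optimized value."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d_euc = np.linalg.norm(a - b)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    d_sam = np.arccos(np.clip(cos, -1.0, 1.0))   # spectral angle, radians
    return w * d_euc + (1.0 - w) * d_sam

pixel = [0.31, 0.42, 0.55, 0.60]      # hypothetical spectral band values
antibody = [0.30, 0.40, 0.50, 0.58]
print(f"affinity: {affinity(pixel, antibody):.4f}")
```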