180 search results in total (search time: 15 ms)
11.
Mobile ad hoc networks (MANETs) are of much interest to both the research community and the military because of their potential to establish a communication network in any emergency situation. Examples are search-and-rescue operations, military deployment in hostile environments, and several types of police operations. One critical open issue is how to route messages given the characteristics of these networks: the nodes act as routers in an environment without a fixed infrastructure, the nodes are mobile, the wireless medium has its own limitations compared to wired networks, and existing routing protocols cannot be employed, at least without modifications. Over the last few years, a number of routing protocols have been proposed and enhanced to address routing in MANETs, but it is not clear how these protocols perform under different environments: one protocol may be the best in one network configuration but the worst in another. This article provides an analysis and performance evaluation of those protocols that may be suitable for military communications. The evaluation is conducted in two phases. In the first phase, we compare the protocols based on qualitative metrics to locate those that fit our evaluation criteria. In the second phase, we evaluate the protocols selected in the first phase based on quantitative metrics in a mobility scenario that reflects tactical military movements. The results show that no current routing protocol can, without modifications, provide efficient routing for networks of any size, regardless of the number of nodes, the network load, and mobility. Copyright © 2011 John Wiley & Sons, Ltd.
12.
Low-power fault tolerance design techniques trade reliability for reduced area cost and power overhead of integrated circuits by protecting only a subset of their workload or their most vulnerable parts. However, in the presence of faults, not all workloads are equally susceptible to errors. In this paper, we present a low-power fault tolerance design technique that selects and protects the most susceptible workload. We propose to rank workload susceptibility as the likelihood of any error bypassing the logic masking of the circuit and propagating to its outputs. The susceptible workload is protected by a partial Triple Modular Redundancy (TMR) scheme. We evaluate the proposed technique on timing-independent and timing-dependent errors induced by permanent and transient faults. In comparison with an unranked selective fault tolerance approach, we demonstrate (a) a similar error coverage with a 39.7% average reduction of the area overhead or (b) an 86.9% average error coverage improvement for a similar area overhead. For the same-area-overhead case, we observe an error coverage improvement of 53.1% and 53.5% against permanent stuck-at and transition faults, respectively, and an average error coverage improvement of 151.8% and 89.0% against timing-dependent and timing-independent transient faults, respectively. Compared to TMR, the proposed technique achieves an area and power overhead reduction of 145.8% to 182.0%.
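As a toy illustration of the ranking idea, the sketch below scores input vectors of a small combinational circuit by how often a single input fault escapes logic masking and reaches the output, then protects the highest-ranked vectors with TMR majority voting. The three-input circuit and the single-fault model are illustrative assumptions, not the paper's benchmark circuits.

```python
# Hedged sketch: rank workload vectors by error-propagation likelihood and
# protect only the most susceptible ones with triple modular redundancy (TMR).
from itertools import product

def circuit(a, b, c):
    """Toy combinational function: out = (a AND b) OR c."""
    return (a & b) | c

def susceptibility(vec):
    """Fraction of single input bit-flip faults that change the output
    (i.e. are NOT logically masked) for this input vector."""
    a, b, c = vec
    baseline = circuit(a, b, c)
    faults = [(1 - a, b, c), (a, 1 - b, c), (a, b, 1 - c)]
    flips = sum(1 for fa, fb, fc in faults if circuit(fa, fb, fc) != baseline)
    return flips / 3

def majority(x, y, z):
    """TMR voter: output agrees with at least two of the three replicas."""
    return (x & y) | (y & z) | (x & z)

workload = list(product([0, 1], repeat=3))
ranked = sorted(workload, key=susceptibility, reverse=True)
protected = set(ranked[:4])          # protect only the most susceptible half

# For a protected vector, a single faulty replica is out-voted.
vec = ranked[0]
faulty = circuit(1 - vec[0], vec[1], vec[2])   # one replica hit by a fault
voted = majority(circuit(*vec), circuit(*vec), faulty)
assert voted == circuit(*vec)
```

The partial-TMR saving comes from the `protected` set: only the top-ranked workload is triplicated, so the vectors whose errors are masked anyway incur no redundancy cost.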
13.
The zero-inflated Poisson (ZIP) distribution is an extension of the ordinary Poisson distribution and is used to model count data with an excessive number of zeros. In ZIP models, it is assumed that random shocks occur with probability p, and upon the occurrence of a random shock, the number of nonconformities in a product follows the Poisson distribution with parameter λ. In this article, we study in more detail the exponentially weighted moving average control chart based on the ZIP distribution (referred to as the ZIP-EWMA chart) and we also propose a double EWMA chart with an upper time-varying control limit to monitor ZIP processes (referred to as the ZIP-DEWMA chart). The two charts are studied to detect upward shifts not only in each parameter individually but also in both parameters simultaneously. The steady-state performance and the performance with estimated parameters are also investigated. The performance of the two charts has been evaluated in terms of the average and standard deviation of the run length and, compared with Shewhart-type and CUSUM schemes for the ZIP distribution, the proposed charts are shown to be very effective, especially in detecting shifts in p when λ remains in control (IC) and in both parameters simultaneously. Finally, a real example is given to illustrate the application of the ZIP charts for practitioners.
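A minimal sketch of the single-EWMA variant of this idea, assuming counts from a ZIP(p, λ) process are smoothed with weight w and compared against the standard time-varying upper control limit of an EWMA statistic. The chart constant L = 3, weight w = 0.2, and the example count sequences are illustrative choices, not the article's optimized designs.

```python
# Hedged sketch of a ZIP-EWMA chart with a time-varying upper control limit.
import math

def zip_mean_var(p, lam):
    """Mean and variance of the ZIP(p, lam) distribution:
    E[X] = (1-p)*lam, Var[X] = (1-p)*lam*(1 + p*lam)."""
    mean = (1 - p) * lam
    var = (1 - p) * lam * (1 + p * lam)
    return mean, var

def zip_ewma(counts, p, lam, w=0.2, L=3.0):
    """Run the chart; return (index of first signal or None, EWMA values)."""
    mu0, var0 = zip_mean_var(p, lam)
    z = mu0                      # EWMA initialized at the in-control mean
    zs = []
    for t, x in enumerate(counts, start=1):
        z = w * x + (1 - w) * z
        zs.append(z)
        # time-varying upper control limit of the EWMA statistic
        ucl = mu0 + L * math.sqrt(var0 * w / (2 - w) * (1 - (1 - w) ** (2 * t)))
        if z > ucl:
            return t, zs
    return None, zs

# In-control run (mostly zeros, occasional small counts) ...
ic = [0, 0, 1, 0, 2, 0, 0, 1, 0, 0]
# ... followed by an upward shift (larger counts appear).
oc = ic + [4, 5, 6, 5, 6, 7]
signal_at, _ = zip_ewma(oc, p=0.9, lam=2.0)
```

With these illustrative parameters the in-control sequence produces no signal, while the shifted sequence triggers the chart on the first out-of-control observation.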
14.
A nanoporous metal–organic framework material, exhibiting an IRMOF-1 type crystalline structure, was prepared by following a direct solvothermal synthesis approach, using zinc nitrate and terephthalic acid as precursors and dimethylformamide as solvent, combined with supercritical CO2 activation and vacuum outgassing procedures. A series of advanced characterization methods were employed, including scanning electron microscopy, Fourier-transform infrared spectroscopy and X-ray diffraction, in order to study the morphology, surface chemistry and structure of the IRMOF-1 material directly upon its synthesis. Porosity properties, such as the Brunauer–Emmett–Teller (BET) specific surface area (~520 m2/g) and micropore volume (~0.2 cm3/g), were calculated for the activated sample based on N2 gas sorption data collected at 77 K. The H2 storage performance was preliminarily assessed by low-pressure (0–1 bar) H2 gas adsorption and desorption measurements at 77 K. The activated IRMOF-1 material of this study demonstrated a fully reversible H2 sorption behavior combined with an adequate gravimetric H2 uptake relative to its BET specific surface area, thus achieving a value of ~1 wt.% under close-to-atmospheric pressure conditions.
15.
This paper describes a Quality of Service (QoS) service on an IPv6 domain that aims to serve aggregates of real-time traffic with minimum delay, jitter, and packet loss. It contains results from the tests that were performed in order to configure and evaluate the QoS mechanisms. As an actual example of real-time traffic, we have used the OpenH323 project, an open-source H.323 implementation that has been ported to IPv6. QoS mechanisms in IPv6 networks are still a field that has not been researched adequately, and we therefore present the results from the experiments in our IPv6 network that took advantage of these mechanisms. The QoS service uses the Modular QoS CLI (MQC) mechanism and especially the Low Latency Queueing (LLQ) feature in order to give priority treatment to packets from real-time applications.
16.
A method for quality assessment of Global Human Settlement Layer (GHSL) scenes against reference data is presented. It relies on two settlement metrics: the local average and gradient functions, which quantify the notions of settlement density and flexible settlement limits, respectively. Both are utilized as generalization functions for increasing the level of abstraction of the sets under comparison. Generalization compensates for inaccuracies of the automatic target extraction method and can be computed at multiple scales. The comparison between the target built-up layers and the reference data employs an ordered multi-scale linear regression computing the goodness-of-fit measure R². An optimized assessment procedure is investigated in a pilot study and is further employed in a big-data exercise. A newly introduced quality metric returns the agreement between automatically extracted built-up areas from a set of 13605 scenes and the MODIS 500 urban layer, which was found to be as high as 91% for selected sensors. A final experiment attempts a performance increase at lower scales by correlating the target layer with automatically selected training subsets. At 50 m, the adjusted R² increases by 3% with a mean squared error improvement of 2% compared to the performance achieved without statistical learning. The experiment suggests that the GHSL assessment at a global scale can be carried out based on limited high-resolution reference data of minimal spatial coverage.
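The core comparison can be sketched as follows: generalize two binary built-up grids with a local-average (settlement density) filter at a given block scale, then score their agreement with the R² of an ordinary least-squares fit. The 4×4 grids and the single 2×2 block scale are illustrative stand-ins for GHSL scenes and reference layers; the actual method repeats this over an ordered set of scales.

```python
# Hedged sketch of scene assessment via local-average generalization + R^2.

def block_density(grid, k):
    """Average each non-overlapping k x k block -> settlement density values."""
    n = len(grid)
    out = []
    for i in range(0, n, k):
        for j in range(0, n, k):
            block = [grid[a][b] for a in range(i, i + k) for b in range(j, j + k)]
            out.append(sum(block) / (k * k))
    return out

def r_squared(x, y):
    """Coefficient of determination of the least-squares fit y ~ x
    (square of the Pearson correlation for simple linear regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

target = [[1, 1, 0, 0],          # automatically extracted built-up layer
          [1, 0, 0, 0],
          [0, 0, 1, 1],
          [0, 0, 1, 1]]
reference = [[1, 1, 0, 0],       # reference settlement map
             [1, 1, 0, 0],
             [0, 0, 1, 1],
             [0, 0, 0, 1]]

score = r_squared(block_density(target, 2), block_density(reference, 2))
```

Generalizing before regressing is what makes the comparison tolerant to small extraction errors: pixel-level disagreements inside a block vanish once only the block density is compared.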
17.
In this paper, we introduce a novel framework for low-level image processing and analysis. First, we process images with very simple, difference-based filter functions. Second, we fit the 2-parameter Weibull distribution to the filtered output. This maps each image to the 2D Weibull manifold. Third, we exploit the information geometry of this manifold and solve low-level image processing tasks as minimisation problems on point sets. For a proof-of-concept example, we examine the image autofocusing task. We propose appropriate cost functions together with a simple implicitly-constrained manifold optimisation algorithm and show that our framework compares very favourably against common autofocus methods from literature. In particular, our approach exhibits the best overall performance in terms of combined speed and accuracy.
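The first two steps of the pipeline can be sketched as below: a difference filter, then a 2-parameter Weibull fit per image. For robustness this sketch fits the Weibull by the method of moments (bisection on the shape from the coefficient of variation) rather than maximum likelihood, and scores focus by the fitted scale alone, so it is an illustrative simplification of the paper's manifold-based cost, not the actual algorithm.

```python
# Hedged sketch: difference filter -> 2-parameter Weibull fit -> focus score.
import math

def diff_filter(img):
    """Simple horizontal difference filter; small epsilon keeps responses
    strictly positive, as required by a Weibull fit."""
    return [abs(row[j + 1] - row[j]) + 1e-9
            for row in img for j in range(len(row) - 1)]

def fit_weibull(x):
    """Method-of-moments fit: solve the shape k from the squared coefficient
    of variation CV^2 = Gamma(1+2/k)/Gamma(1+1/k)^2 - 1 (decreasing in k)
    by bisection, then the scale from the mean.  Returns (shape, scale)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cv2 = var / mean ** 2

    def cv2_of(k):
        g1 = math.gamma(1 + 1 / k)
        return math.gamma(1 + 2 / k) / (g1 * g1) - 1

    lo, hi = 0.1, 50.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if cv2_of(mid) > cv2:    # too much spread: shape must grow
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    scale = mean / math.gamma(1 + 1 / k)
    return k, scale

def sharpness(img):
    """Autofocus score proxy: fitted Weibull scale of the filtered image
    (sharper focus -> larger gradient magnitudes -> larger scale)."""
    return fit_weibull(diff_filter(img))[1]

sharp   = [[0, 9, 1, 10], [8, 0, 9, 1]]   # toy in-focus image
blurred = [[4, 5, 4, 6], [5, 4, 6, 5]]    # toy defocused image
best = max([blurred, sharp], key=sharpness)
```

In the full framework the (shape, scale) pair is a point on the Weibull manifold and autofocusing is posed as an optimisation over such points; here the scale coordinate alone already separates the two toy images.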
18.
We introduce a new representation for time series, the Multiresolution Vector Quantized (MVQ) approximation, along with a distance function. Similar to the Discrete Wavelet Transform, MVQ keeps both local and global information about the data. However, instead of keeping low-level time series values, it maintains high-level feature information (key subsequences), facilitating the introduction of more meaningful similarity measures. The method is fast and scales linearly with the database size and dimensionality. Contrary to previous methods, the vast majority of which use the Euclidean distance, MVQ uses a multiresolution/hierarchical distance function. In our experiments, the proposed technique consistently outperforms the other major methods.
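A minimal sketch of the vector-quantization idea, assuming a tiny hand-made codebook per resolution: slide a window over each series, assign every subsequence to its nearest codeword, and represent the series by the codeword-frequency histogram at each resolution; the distance then compares histograms across resolutions. The fixed codebooks and two window widths stand in for the learned codebooks of the actual method.

```python
# Hedged sketch of a multiresolution vector-quantized time-series distance.

def encode(series, codebook, w):
    """Normalized histogram of nearest-codeword assignments over all
    width-w windows of the series."""
    hist = [0] * len(codebook)
    for i in range(len(series) - w + 1):
        window = series[i:i + w]
        dists = [sum((a - b) ** 2 for a, b in zip(window, cw))
                 for cw in codebook]
        hist[dists.index(min(dists))] += 1
    total = sum(hist)
    return [h / total for h in hist]

def mvq_distance(s1, s2, codebooks):
    """Sum of histogram L1 distances across resolutions (window widths)."""
    d = 0.0
    for w, cb in codebooks.items():
        h1, h2 = encode(s1, cb, w), encode(s2, cb, w)
        d += sum(abs(a - b) for a, b in zip(h1, h2))
    return d

codebooks = {
    2: [[0, 0], [0, 1], [1, 0], [1, 1]],   # fine resolution: local shapes
    4: [[0, 0, 1, 1], [1, 1, 0, 0]],       # coarse resolution: global trend
}
rising  = [0, 0, 0, 1, 1, 1]
falling = [1, 1, 1, 0, 0, 0]
```

Because the representation stores which key subsequences occur rather than raw values, a series similar to `rising` stays close to it while `falling` is pushed far away, mostly by the coarse (global-trend) resolution.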
19.
In this paper we propose a new single-rate multicast congestion control scheme named the Adaptive Smooth Multicast Protocol (ASMP), for multimedia transmission over best-effort networks. The smoothness lies in the calculation and adaptation of the transmission rate, which is based on dynamic estimation of the protocol's parameters and dynamic adjustment of the 'smoothness factor' as well. ASMP's key attributes are: (a) TCP-friendly behavior, (b) adaptive scalability to large sets of receivers, (c) high bandwidth utilization, and finally (d) smooth transmission rates, which are suitable for multimedia applications. We evaluate the performance of ASMP and investigate its behavior under various network conditions through extensive simulations conducted with the ns-2 network simulator. Simulation results show that ASMP can be regarded as a serious competitor to TFMCC and PGMCC: in many cases, ASMP outperforms TFMCC in terms of TCP-friendliness and smooth transmission rates, while PGMCC exhibits lower scalability than ASMP. We have implemented ASMP on top of the RTP/RTCP protocols in ns-2 by adding all the RTP/RTCP attributes that are defined in RFC 3550 and related to quality-of-service metrics. Copyright © 2009 John Wiley & Sons, Ltd.
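The rate-adaptation core of a TCP-friendly scheme of this kind can be sketched as follows: compute the standard TCP throughput equation (as in TFRC, RFC 3448) from the estimated RTT and loss event rate, then smooth successive estimates with an adjustable smoothing factor. The EWMA smoothing rule and the t_RTO ≈ 4·RTT approximation are common illustrative choices, not ASMP's exact adaptation law.

```python
# Hedged sketch: TCP throughput equation + smoothed rate adaptation.
import math

def tcp_friendly_rate(s, rtt, p, b=1):
    """TCP throughput equation (bytes/s): packet size s (bytes), round-trip
    time rtt (s), loss event rate p, b packets acknowledged per ACK.
    t_RTO is approximated as 4 * RTT, as commonly recommended."""
    t_rto = 4 * rtt
    denom = (rtt * math.sqrt(2 * b * p / 3)
             + t_rto * 3 * math.sqrt(3 * b * p / 8) * p * (1 + 32 * p * p))
    return s / denom

def smooth(prev_rate, new_rate, alpha):
    """Smoothed transmission rate; alpha plays the role of the
    'smoothness factor' (larger alpha -> smoother rate trajectory)."""
    return alpha * prev_rate + (1 - alpha) * new_rate

# 1000-byte packets, 100 ms RTT, 1% loss event rate.
rate = tcp_friendly_rate(s=1000, rtt=0.1, p=0.01)
```

A large smoothness factor keeps the sending rate stable for the media stream even when the instantaneous equation output jumps, at the cost of slower reaction to congestion.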
20.
Mining discriminative spatial patterns in image data is an emerging subject of interest in medical imaging, meteorology, engineering, biology, and other fields. In this paper, we propose a novel approach for detecting spatial regions that are highly discriminative among different classes of three-dimensional (3D) image data. The main idea of our approach is to treat the initial 3D image as a hyper-rectangle and search for discriminative regions by adaptively partitioning the space into progressively smaller hyper-rectangles (sub-regions). We use statistical information about each hyper-rectangle to guide the selectivity of the partitioning: a hyper-rectangle is partitioned only if its attribute cannot adequately discriminate among the distinct labeled classes and it is sufficiently large for further splitting. To evaluate the discriminative power of the attributes corresponding to the detected regions, we performed classification experiments on artificial and real datasets. Our results show that the proposed method outperforms major competitors, achieving 30% and 15% better classification accuracy on synthetic and real data, respectively, while reducing by two orders of magnitude the number of statistical tests required by voxel-based approaches.
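The adaptive partitioning can be sketched in 2D for brevity: treat the image as one rectangle and recursively split a rectangle into quadrants only when its mean-intensity attribute fails to separate the two classes and it is still large enough to split. The simple class-mean-gap criterion below is an illustrative stand-in for the paper's statistical tests, and the 4×4 two-class data is a toy example.

```python
# Hedged sketch: adaptive rectangle partitioning for discriminative regions.

def region_mean(img, r0, c0, h, w):
    """Mean intensity of the h x w region with top-left corner (r0, c0)."""
    vals = [img[r][c] for r in range(r0, r0 + h) for c in range(c0, c0 + w)]
    return sum(vals) / len(vals)

def discriminative(class_a, class_b, r0, c0, h, w, gap=2.0):
    """True when the gap between the class-average region means exceeds
    `gap` -- a toy stand-in for a proper two-sample statistical test."""
    ma = sum(region_mean(im, r0, c0, h, w) for im in class_a) / len(class_a)
    mb = sum(region_mean(im, r0, c0, h, w) for im in class_b) / len(class_b)
    return abs(ma - mb) > gap

def find_regions(class_a, class_b, r0=0, c0=0, h=None, w=None, min_size=1):
    """Recursively split non-discriminative rectangles into quadrants."""
    h = h if h is not None else len(class_a[0])
    w = w if w is not None else len(class_a[0][0])
    if discriminative(class_a, class_b, r0, c0, h, w):
        return [(r0, c0, h, w)]            # keep this region; stop splitting
    if h // 2 < min_size or w // 2 < min_size:
        return []                          # too small to split further
    out = []
    for dr in (0, h // 2):
        for dc in (0, w // 2):
            out += find_regions(class_a, class_b, r0 + dr, c0 + dc,
                                h // 2, w // 2, min_size)
    return out

# Two classes that differ only in the bottom-right quadrant.
class_a = [[[0, 0, 0, 0]] * 2 + [[0, 0, 5, 5]] * 2]
class_b = [[[0, 0, 0, 0]] * 4]
regions = find_regions(class_a, class_b)
```

Only one region survives, and only one statistical comparison per visited rectangle was needed, which is the source of the method's large saving over testing every voxel individually.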