1.
To achieve converged management of the power distribution communication network and to improve service quality and operations-and-maintenance capability, this paper studies intelligent management techniques for a multi-source, multi-mode distribution communication network under a flattened management model, and establishes a resource-mapping model for the hybrid network based on the particle swarm optimization (PSO) algorithm. Two networks of different types, a wireless mesh network (WMN) and a power line communication (PLC) network, are mapped onto the same physical network and assigned different tasks so that the strengths of each network are exploited; more subcarriers are devoted to carrying traffic, which raises the service throughput. The new model is verified by simulation, and a comparison with a genetic algorithm shows that the proposed algorithm has a clear advantage.
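The paper's resource-mapping model is not reproduced here; the minimal sketch below only illustrates the particle swarm optimizer that drives such a model. The one-dimensional "subcarrier split between WMN and PLC" objective and all parameter values are assumptions for illustration, not the paper's actual formulation.

```python
# Minimal PSO sketch; the objective is a stand-in "negative throughput", not the paper's model.
import random

def pso(objective, dim, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                      # personal best positions
    pbest_val = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best

    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)   # keep position in bounds
            val = objective(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

if __name__ == "__main__":
    # Hypothetical objective: minimize the negative of a concave "throughput" as a
    # function of the fraction of subcarriers assigned to the WMN sub-network.
    best, val = pso(lambda x: -(x[0] ** 0.5 + (1.0 - x[0]) ** 0.7), dim=1, bounds=(0.0, 1.0))
    print("best subcarrier fraction to WMN:", best, "objective:", val)
```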
2.
Multi-antenna beamforming and transmit diversity are two complementary techniques in downlink cellular systems. A beamforming system provides array gain but no diversity gain; conversely, a transmit diversity system provides diversity gain but no array gain. The performance of the two systems is compared by computing the outage capacity, the CDF, and the transmit power, with and without handover and under correlated fading.
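As a hedged illustration of the comparison, the sketch below runs a Monte Carlo estimate of outage capacity for maximum-ratio transmission (beamforming with transmitter CSI) versus Alamouti 2x1 transmit diversity over i.i.d. Rayleigh fading. The antenna count, SNR, outage level, and the i.i.d. (uncorrelated) fading assumption are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
M, snr_db, trials, outage = 2, 10.0, 200_000, 0.01
snr = 10 ** (snr_db / 10)

# i.i.d. Rayleigh channel from M transmit antennas to one receive antenna
h = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)
gain = np.sum(np.abs(h) ** 2, axis=1)

# MRT beamforming: full channel gain (array + diversity gain) with CSI at the transmitter.
cap_bf = np.log2(1 + snr * gain)
# Alamouti (2x1) diversity: same diversity order, but transmit power split over antennas.
cap_div = np.log2(1 + snr * gain / M)

# Outage capacity = rate supported in all but `outage` fraction of fading states (a CDF quantile).
print("1% outage capacity, beamforming :", np.quantile(cap_bf, outage))
print("1% outage capacity, Alamouti    :", np.quantile(cap_div, outage))
```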
3.
With the worldwide spread of GSM networks, the short message service (SMS) and its value-added services have entered many fields thanks to their convenience, flexibility, speed, and low cost, and are gradually replacing traditional means of information transmission. Building a wireless publishing platform on GSM communication modules and SMS technology for data transmission and monitoring is therefore an inevitable trend. The low-volume data transmission over GSM short messages proposed in this paper enables stable data acquisition and remote monitoring.
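A minimal sketch of the idea, assuming a serial-attached GSM module driven by standard AT commands (3GPP TS 27.005) via pyserial: a small sensor reading is pushed out as a text-mode short message. The port name, baud rate, phone number, payload, and timing delays are assumptions for illustration only; real modules need response parsing and error handling.

```python
import time
import serial  # pyserial

def send_sms(port, number, text):
    with serial.Serial(port, 9600, timeout=5) as modem:
        def cmd(s, wait=1.0):
            modem.write((s + "\r").encode())
            time.sleep(wait)
            return modem.read_all().decode(errors="ignore")

        cmd("AT")                      # check that the module responds
        cmd("AT+CMGF=1")               # switch to SMS text mode
        cmd('AT+CMGS="%s"' % number)   # start a message to the recipient
        modem.write(text.encode() + b"\x1a")  # message body, Ctrl+Z terminates
        time.sleep(3)
        return modem.read_all().decode(errors="ignore")

if __name__ == "__main__":
    # Hypothetical monitoring payload, number, and serial port.
    print(send_sms("/dev/ttyUSB0", "+8613800000000", "station=7 temp=21.4C level=OK"))
```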
4.
In this paper, we propose a new feature selection method, kernel F-score feature selection (KFFS), used as a pre-processing step in the classification of medical datasets. KFFS consists of two phases. In the first phase, the input features of a medical dataset are transformed into a kernel space by means of a Linear (Lin) or Radial Basis Function (RBF) kernel, which maps the data into a high-dimensional feature space. In the second phase, the F-score of each feature in this high-dimensional space is calculated with the F-score formula, and the mean of the calculated F-scores is computed. A feature whose F-score exceeds this mean is selected; otherwise it is removed from the feature space. In this way, KFFS removes irrelevant or redundant features from the high-dimensional input space. The purpose of the kernel functions is to transform a non-linearly separable medical dataset into a linearly separable feature space. In this study, we used the heart disease dataset, the SPECT (Single Photon Emission Computed Tomography) images dataset, and the Escherichia coli Promoter Gene Sequence dataset from the UCI (University of California, Irvine) machine learning repository to test the performance of KFFS. Least Squares Support Vector Machine (LS-SVM) and a Levenberg–Marquardt Artificial Neural Network were used as classifiers. The results show that KFFS produces very promising results compared to plain F-score feature selection.
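A minimal sketch of the two-phase idea (kernel mapping, then F-score thresholding at the mean). The RBF width, the toy data, and the use of the full kernel matrix as the "kernel space" representation are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def rbf_kernel_features(X, gamma=0.5):
    # Represent each sample by its RBF kernel values against all training samples,
    # i.e. map the d original features into n "kernel features".
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    return np.exp(-gamma * sq)

def f_scores(F, y):
    # Classic two-class F-score: between-class separation over within-class variance.
    pos, neg = F[y == 1], F[y == 0]
    num = (pos.mean(0) - F.mean(0)) ** 2 + (neg.mean(0) - F.mean(0)) ** 2
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return num / (den + 1e-12)

def kffs_select(X, y, gamma=0.5):
    F = rbf_kernel_features(X, gamma)
    s = f_scores(F, y)
    return np.where(s > s.mean())[0]   # keep kernel features scoring above the mean

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((40, 6))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.5).astype(int)  # non-linearly separable toy labels
    print("selected kernel-feature indices:", kffs_select(X, y))
```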
5.
In financial markets, investors attempt to maximize their profits within a constructed portfolio, optimizing the tradeoff between risk and return across many stocks. This requires proper handling of conflicting factors, which can benefit from the domain of multiple criteria decision making (MCDM). However, the indices and factors representing stock performance are often imprecise or vague, and are better represented by linguistic terms characterized by fuzzy numbers. The aim of this research is first to develop three group MCDM methods and then to use them for selecting undervalued stocks on the basis of financial ratios and the subjective judgments of experts. This study proposes three versions of fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution): conventional TOPSIS (C-TOPSIS), adjusted TOPSIS (A-TOPSIS), and modified TOPSIS (M-TOPSIS), in which a new fuzzy distance measure, derived from the confidence level of the experts, and fuzzy performance ratings are incorporated. The practical aspects of the proposed methods are demonstrated through a case study on the Tehran Stock Exchange (TSE), which is timely given the need for investors to select undervalued stocks in untapped markets in anticipation of the easing of economic sanctions following a change in government leadership.
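To make the ranking mechanics concrete, here is a minimal crisp TOPSIS sketch of the kind the three fuzzy variants build on. The toy decision matrix, weights, and criterion directions are illustrative assumptions; the paper's fuzzy ratings and new fuzzy distance measure are not reproduced here.

```python
import numpy as np

def topsis(D, w, benefit):
    R = D / np.linalg.norm(D, axis=0)              # vector-normalize each criterion column
    V = R * w                                      # weighted normalized matrix
    ideal = np.where(benefit, V.max(0), V.min(0))  # positive ideal solution
    anti  = np.where(benefit, V.min(0), V.max(0))  # negative ideal solution
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                 # closeness coefficient, higher is better

if __name__ == "__main__":
    # Rows: candidate stocks; columns (hypothetical): P/E (cost), ROE (benefit), growth (benefit).
    D = np.array([[12.0, 0.18, 0.07],
                  [25.0, 0.22, 0.12],
                  [ 9.0, 0.10, 0.04]])
    w = np.array([0.4, 0.35, 0.25])
    score = topsis(D, w, benefit=np.array([False, True, True]))
    print("closeness scores:", score, "ranking (best first):", np.argsort(-score))
```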
6.
To improve the performance of image segmentation, this paper presents a gray-level jump segmentation algorithm that defines the texture direction, calculates the ridge-line width, derives the distance characteristics between textures, and establishes a mathematical model of the texture border; on this basis a new texture segmentation algorithm is presented and compared with other texture segmentation algorithms. Simulation results show that the algorithm offers advantages for texture segmentation, such as higher segmentation precision, faster segmentation, stronger noise immunity, and less loss of target information, and the segmented regions hardly contain other texture regions or background. Moreover, the paper extracts characteristic points and characteristic parameters from each segmented region of the texture image to form a feature vector, compares this vector with standard template vectors, and identifies the type of target within a threshold range. Experimental results show that the proposed target recognition approach achieves a higher recognition rate and faster recognition than existing approaches. Advances in image processing through the study of texture segmentation are not only applicable to imaging, but are also of theoretical value to target recognition. This work provides a theoretical reference and has practical significance for image-based target recognition applications such as aerospace, public security, and road traffic.
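A minimal sketch of the recognition step described above: match an extracted feature vector against standard template vectors and accept the nearest class only if it falls within a threshold. The templates, vectors, and threshold value are illustrative assumptions, not values from the paper.

```python
import numpy as np

def recognize(feature_vec, templates, threshold):
    # templates: {class_name: standard template feature vector}
    dists = {name: np.linalg.norm(feature_vec - tpl) for name, tpl in templates.items()}
    best = min(dists, key=dists.get)
    # Reject the match if even the nearest template is outside the threshold range.
    return (best, dists[best]) if dists[best] <= threshold else (None, dists[best])

if __name__ == "__main__":
    templates = {"vehicle": np.array([0.8, 0.1, 0.3]),
                 "building": np.array([0.2, 0.7, 0.5])}
    print(recognize(np.array([0.75, 0.15, 0.35]), templates, threshold=0.2))
```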
7.
The problem of finding the expected shortest path in stochastic networks, where the presence of each node is probabilistic and the arc lengths are random variables, has numerous applications, especially in communication networks. Since the problem is NP-hard, we propose a metaheuristic algorithm based on an ant colony system (ACS) for finding the expected shortest path. A new local heuristic is formulated for the proposed algorithm to account for the probabilistic nodes. The arc lengths are randomly generated from the arc-length distribution functions. Examples are worked out to illustrate the applicability of the proposed approach.
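The paper's algorithm is an ant colony system; as a hedged illustration of the quantity it optimizes, the sketch below is instead a brute-force Monte Carlo baseline that estimates the expected shortest path on a tiny stochastic network by sampling node presence and arc lengths and running Dijkstra per realization. The presence probabilities, arc-length distributions, and graph are toy assumptions; such a baseline is only useful for validating a metaheuristic on small instances.

```python
import heapq
import random

def dijkstra(n, adj, src, dst):
    dist = [float("inf")] * n
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist[dst]

def sample_network():
    # Node-presence probabilities (source and sink always present) and
    # uniform arc-length distributions -- both are toy assumptions.
    present = [True, random.random() < 0.9, random.random() < 0.7, True]
    arcs = [(0, 1, random.uniform(1, 3)), (0, 2, random.uniform(2, 5)),
            (1, 2, random.uniform(0.5, 1.5)), (1, 3, random.uniform(1, 4)),
            (2, 3, random.uniform(1, 2))]
    adj = [[] for _ in range(4)]
    for u, v, w in arcs:
        if present[u] and present[v]:
            adj[u].append((v, w))
    return adj

if __name__ == "__main__":
    random.seed(0)
    samples = [dijkstra(4, sample_network(), 0, 3) for _ in range(20_000)]
    finite = [s for s in samples if s != float("inf")]  # a path may not exist in a realization
    print("estimated expected shortest path:", sum(finite) / len(finite))
    print("connectivity probability        :", len(finite) / len(samples))
```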
8.
Image fusion methods based on the multiscale transform (MST) suffer from a high computational load due to the use of fast Fourier transforms (FFTs) in the lowpass and highpass filtering steps. The lifting wavelet scheme, based on second-generation wavelets, has been proposed as a solution to this issue. The Lifting Wavelet Transform (LWT) is composed of split, prediction, and update operations, all implemented in the spatial domain using multiplications and additions, so computation time is greatly reduced. Since image fusion performance benefits from an undecimated transform, the LWT was later extended to the Stationary Lifting Wavelet Transform (SLWT). In this paper, we propose to use the lattice filter for the MST analysis step. The lattice filter is composed of analysis and synthesis parts in which simultaneous lowpass and highpass operations are performed in the spatial domain using additions, multiplications, and delay operations, in a recursive structure that increases robustness to noise. Since the original filter was designed for the decimated case, we have developed undecimated lattice structures and applied them to the fusion of multifocus images. Fusion results and evaluation metrics show that the proposed method performs better, especially with noisy images, while having a computational load similar to SLWT-based fusion.
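A minimal 1-D Haar-style lifting sketch showing the split / predict / update steps the abstract refers to, implemented with only additions and multiplications in the spatial domain. The signal is a toy example, and this is the decimated LWT, not the undecimated lattice structure the paper develops.

```python
def lwt_haar(signal):
    even, odd = signal[0::2], signal[1::2]              # split into even/odd samples
    detail = [o - e for o, e in zip(odd, even)]         # predict odd samples from even ones
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update to preserve the running mean
    return approx, detail

def ilwt_haar(approx, detail):
    even = [a - d / 2 for a, d in zip(approx, detail)]  # undo update
    odd = [d + e for d, e in zip(detail, even)]         # undo predict
    out = []
    for e, o in zip(even, odd):                         # merge (inverse of split)
        out += [e, o]
    return out

if __name__ == "__main__":
    x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 9.0]
    a, d = lwt_haar(x)
    print("approx:", a, "detail:", d)
    print("perfect reconstruction:", ilwt_haar(a, d) == x)
```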
9.
An important property of today's big data processing is that the same computation is often repeated on datasets evolving over time, such as web and social network data. While repeating full computation over the entire dataset is feasible with distributed computing frameworks such as Hadoop, it is obviously inefficient and wastes resources. In this paper, we present HadUP (Hadoop with Update Processing), a modified Hadoop architecture tailored to large-scale incremental processing with conventional MapReduce algorithms. Several approaches have been proposed to achieve a similar goal using task-level memoization. However, task-level memoization detects changes to datasets at a coarse-grained level, which often makes such approaches ineffective. Instead, HadUP detects and computes changes to datasets at a fine-grained level using a deduplication-based snapshot differential algorithm (D-SD) and update propagation. As a result, it provides high performance, especially in environments where task-level memoization has no benefit. HadUP requires only a small amount of extra programming effort because it can reuse the code of Hadoop map and reduce functions, so developing HadUP applications is quite easy.
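A minimal, hedged sketch in the spirit of a deduplication-based snapshot differential: fingerprint each record of the old and new snapshots and emit only inserted and deleted records for incremental reprocessing. This illustrates fine-grained change detection only; it is not HadUP's actual D-SD implementation, and the record granularity and example data are assumptions.

```python
import hashlib

def fingerprint(record):
    return hashlib.sha1(record.encode()).hexdigest()

def snapshot_diff(old_records, new_records):
    # Index both snapshots by content fingerprint, then keep only the differences.
    old = {fingerprint(r): r for r in old_records}
    new = {fingerprint(r): r for r in new_records}
    inserted = [r for h, r in new.items() if h not in old]
    deleted = [r for h, r in old.items() if h not in new]
    return inserted, deleted

if __name__ == "__main__":
    old = ["alice follows bob", "bob follows carol", "carol follows alice"]
    new = ["alice follows bob", "bob follows dave", "carol follows alice"]
    ins, dele = snapshot_diff(old, new)
    print("reprocess inserted records:", ins)
    print("retract deleted records   :", dele)
```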
10.
杨杰, 师学明, 李立. 《工程勘察》2013, (10): 90-94
To study the effectiveness of the high-density electrical resistivity method for monitoring the leakage of inorganic contaminant liquids underground, a time-lapse high-density resistivity experiment was carried out in the field. Resistivity data were collected at the site before and at several times after salt water was poured, and the spatio-temporal variation of the apparent resistivity was analyzed. Preliminary results show that the time-lapse high-density resistivity method detects the diffusion of the salt solution very well, indicating that this new geophysical method will play an increasingly important role in engineering and environmental geology problems such as the diffusion of inorganic chemical contaminants, seawater intrusion, dam and embankment leakage, and water seepage in unsaturated zones.