  Subscription full text   199 articles
  Free   31 articles
  Domestic free   18 articles
Electrical engineering   3 articles
General   8 articles
Chemical industry   2 articles
Energy and power   1 article
Radio   79 articles
General industrial technology   9 articles
Automation technology   146 articles
  2023   5 articles
  2022   12 articles
  2021   13 articles
  2020   11 articles
  2019   9 articles
  2018   15 articles
  2017   18 articles
  2016   32 articles
  2015   9 articles
  2014   24 articles
  2013   8 articles
  2012   20 articles
  2011   13 articles
  2010   14 articles
  2009   10 articles
  2008   4 articles
  2007   14 articles
  2006   4 articles
  2005   1 article
  2004   1 article
  2003   1 article
  2002   1 article
  2001   4 articles
  2000   1 article
  1999   1 article
  1997   1 article
  1992   1 article
  1988   1 article
Sort order: 248 results in total; search time 18 ms
1.
2.
Recommender systems apply data mining and machine learning techniques to filter unseen information and can predict whether a user would like a given item. This paper focuses on the gray-sheep users problem, which is responsible for an increased error rate in collaborative filtering based recommender systems. The paper makes the following contributions. We show that (1) the presence of gray-sheep users can affect the performance (accuracy and coverage) of collaborative filtering based algorithms, depending on the data sparsity and distribution; (2) gray-sheep users can be identified with clustering algorithms in an offline fashion, where the similarity threshold that isolates these users from the rest of the community is found empirically, and we propose several improved centroid selection approaches and distance measures for the K-means clustering algorithm; (3) the content-based profiles of gray-sheep users can be used to make accurate recommendations, and we offer a hybrid recommendation algorithm that makes reliable recommendations for gray-sheep users. To the best of our knowledge, this is the first attempt to propose a formal solution to the gray-sheep users problem. Through extensive experiments on two datasets (MovieLens and the community of movie fans on the FilmTrust website), we show that the proposed approach reduces the recommendation error rate for gray-sheep users while maintaining reasonable computational performance.
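A minimal sketch of the offline gray-sheep identification step described above, using scikit-learn's KMeans; the zero-filled rating matrix, the cluster count, and the quantile-based cutoff are illustrative assumptions standing in for the paper's empirically tuned similarity threshold and improved centroid selection:

```python
import numpy as np
from sklearn.cluster import KMeans

def find_gray_sheep(ratings, n_clusters=8, cutoff_quantile=0.9, seed=0):
    """Flag users whose rating vectors sit far from every cluster centroid.

    ratings: (n_users, n_items) matrix with missing ratings filled by 0.
    cutoff_quantile: stands in for the empirically found similarity threshold.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(ratings)
    # Distance from each user to the nearest centroid.
    nearest = km.transform(ratings).min(axis=1)
    # Gray-sheep users: unusually far from all cluster centers.
    return np.where(nearest > np.quantile(nearest, cutoff_quantile))[0]

ratings = np.random.default_rng(0).integers(0, 6, size=(500, 40)).astype(float)
print(find_gray_sheep(ratings))
```

For the users flagged this way, the paper falls back on their content-based profiles inside a hybrid recommender rather than relying on neighborhood similarity.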
3.
Clustering is the process of organizing objects into groups whose members are similar in some way. Most clustering methods handle numeric data only, yet this representation may not be adequate for modeling complex information such as histograms, distributions, or intervals. Symbolic Data Analysis (SDA) was developed to deal with these types of data. In multivariate data analysis, it is common for some variables to be more relevant than others, and less relevant variables can mask the cluster structure. This work proposes a fuzzy clustering method that produces weighted multivariate memberships for interval-valued data. These memberships can change at each iteration of the algorithm and differ from one variable to another and from one cluster to another. Furthermore, a relevance weight is associated with each variable and may also differ from one cluster to another. The advantage of this method is that it is robust to ambiguous cluster membership assignment, since the weights represent how important the different variables are to the clusters. Experiments on synthetic data sets compare the performance of the proposed method against methods established in the clustering literature, and an application to interval-valued scientific production data is also presented. Clustering quality results show that the proposed method offers higher accuracy when variables have different variabilities.
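To make the idea concrete, here is a toy fuzzy c-means variant for interval-valued data with adaptive per-cluster, per-variable weights; the update rules (product-to-one weight constraint, standard FCM membership update) are one common choice from the adaptive-distance literature, not necessarily the paper's exact formulation:

```python
import numpy as np

def interval_fuzzy_cmeans(X, c=3, m=2.0, iters=50, seed=0, eps=1e-12):
    """Toy fuzzy clustering of interval data with adaptive variable weights.

    X: (n, p, 2) array; X[i, j] = [lower, upper] bounds of variable j.
    Returns memberships U (n, c), prototypes V (c, p, 2), weights W (c, p).
    """
    rng = np.random.default_rng(seed)
    n, p, _ = X.shape
    U = rng.dirichlet(np.ones(c), size=n)               # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        # Prototypes: membership-weighted means of lower and upper bounds.
        V = (Um.T @ X.reshape(n, -1)).reshape(c, p, 2) / Um.sum(0)[:, None, None]
        # Squared interval distance per variable: lower plus upper differences.
        D = ((X[:, None] - V[None]) ** 2).sum(-1) + eps  # (n, c, p)
        # Adaptive weights under a product-to-one constraint per cluster:
        # variables that are compact within a cluster receive larger weights.
        S = (Um[..., None] * D).sum(0)                   # (c, p)
        W = np.prod(S, axis=1, keepdims=True) ** (1 / p) / S
        # Standard FCM membership update on the weighted distances.
        Dw = (W[None] * D).sum(-1) + eps                 # (n, c)
        U = 1.0 / ((Dw[:, :, None] / Dw[:, None, :]) ** (1 / (m - 1))).sum(2)
    return U, V, W

rng = np.random.default_rng(1)
lo = rng.normal(size=(60, 4))
X = np.stack([lo, lo + rng.uniform(0.1, 1.0, size=(60, 4))], axis=-1)
U, V, W = interval_fuzzy_cmeans(X)
print(W.round(2))  # one relevance weight per cluster and variable
```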
4.
Previous research on scheduling and solar power in wireless sensor networks (WSNs) assumes that the sensors are deployed in a general environment. When monitoring a stream environment, however, sensors are attached along the stream side to collect sensed data and transmit it back to the sink. The stream setting generalizes to several similar environments: this type of geographic limitation exists not only along streams but also on streets, roads, and trails. This study presents an effective node-selection scheme that improves the power-saving efficiency and coverage of solar-powered WSNs in a stream environment. Analyzing the sensor deployment in the stream environment allows sensors to be classified into different segments, and active nodes are then selected to build inter-stream, inter-segment, and intra-segment connections. Based on these connections, the number of active nodes and transmitted packets is minimized. Simulation results show that the scheme significantly increases energy efficiency while maintaining coverage of the monitored area in solar-powered WSNs.
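A toy sketch of the segment-based selection idea, assuming sensors are indexed by their position along the stream bank; the fixed segment length and the keep-the-highest-energy-node policy are illustrative simplifications of the paper's connection-building scheme:

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    id: int
    position: float   # distance along the stream bank (m)
    energy: float     # remaining / solar-harvested energy

def select_active_nodes(sensors, segment_len=50.0):
    """Group sensors into fixed-length stream segments and keep the
    highest-energy node in each segment active (illustrative policy; the
    paper also builds inter-stream and inter-segment connections)."""
    segments = {}
    for s in sensors:
        segments.setdefault(int(s.position // segment_len), []).append(s)
    return [max(group, key=lambda s: s.energy) for group in segments.values()]

sensors = [Sensor(i, pos, e) for i, (pos, e) in
           enumerate([(5, 0.9), (30, 0.4), (70, 0.7), (90, 0.8)])]
print([s.id for s in select_active_nodes(sensors)])  # [0, 3]
```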
5.
Early and accurate diagnosis of Parkinson's disease (PD) is important for early management, proper prognostication, and initiating neuroprotective therapies once they become available. Recent neuroimaging techniques, such as dopaminergic imaging using single photon emission computed tomography (SPECT) with 123I-Ioflupane (DaTSCAN), have been shown to detect even early stages of the disease. In this paper, we use the striatal binding ratio (SBR) values calculated from 123I-Ioflupane SPECT scans, as obtained from the Parkinson's Progression Markers Initiative (PPMI) database, to develop automatic classification and prediction/prognostic models for early PD. We used support vector machines (SVM) and logistic regression in the model building process. We observe that the SVM classifier with an RBF kernel achieves an accuracy above 96% in classifying subjects into early PD and healthy normal, and that the logistic model for estimating PD risk produces a statistically significant, high-quality fit, indicating its usefulness in PD risk estimation. Hence, we infer that such models have the potential to aid clinicians in the PD diagnostic process.
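A minimal sketch of the classification pipeline described above, using scikit-learn's RBF-kernel SVC; the synthetic SBR values (four striatal regions) merely stand in for the PPMI data, and the hyperparameters are library defaults rather than the paper's tuned settings:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: striatal binding ratios (e.g., left/right caudate and putamen -> 4 features)
# y: 1 = early PD, 0 = healthy control. Synthetic placeholders stand in for PPMI.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.5, 0.4, (100, 4)),   # controls: higher SBR
               rng.normal(1.4, 0.4, (100, 4))])  # early PD: reduced uptake
y = np.array([0] * 100 + [1] * 100)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print(cross_val_score(clf, X, y, cv=5).mean())   # cross-validated accuracy
```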
6.
We propose an approach that achieves high-capacity quantum key distribution using Chebyshev-map values corresponding to Lucas numbers for coding. In particular, we encode a key with the Chebyshev-map values corresponding to Lucas numbers, use k-Chebyshev maps to achieve consecutive and flexible key expansion, and apply pre-shared classical information between Alice and Bob together with fountain codes for privacy amplification, thereby securing the exchange of classical information over the classical channel. Consequently, our high-capacity protocol is not limited by orbital angular momentum or down-conversion bandwidths, and it meets the requirements for longer distances and lower error rates simultaneously.
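A small sketch of the two numeric ingredients named above, Lucas numbers and Chebyshev-map values; how the paper combines them into the actual key encoding is not reproduced here, and the seed value and index range are assumptions for illustration only:

```python
import math

def lucas(n):
    """Lucas numbers: L0 = 2, L1 = 1, Ln = Ln-1 + Ln-2."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def chebyshev_map(k, x):
    """k-th Chebyshev map T_k(x) = cos(k * arccos(x)) for x in [-1, 1]."""
    return math.cos(k * math.acos(x))

# Illustrative pairing: Chebyshev-map values evaluated at Lucas-number orders
# (the paper's exact encoding of the key may differ).
x0 = 0.3  # shared seed value, an assumption for this sketch
print([round(chebyshev_map(lucas(n), x0), 4) for n in range(1, 9)])
```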
7.
As telecommunication networks evolve rapidly in scalability, complexity, and heterogeneity, the efficiency of fault localization procedures and the accuracy of anomaly detection are becoming important factors that strongly influence decision making in large management companies. For this reason, telecommunication companies are investing heavily in new technologies and projects aimed at finding efficient management solutions. One of the challenging issues for network and system management operators is dealing with the huge number of alerts generated by the managed systems and networks. To discover anomalous behaviors and speed up fault localization, alert correlation is one of the most widely used techniques, and although many different alert correlation techniques have been investigated, it remains an active research field. In this paper, a survey of the state of the art in alert correlation techniques is presented. Unlike other authors, we consider the correlation process a common problem across different fields of industry, and we therefore focus on showing the broad reach of the problem. Additionally, we suggest an alert correlation architecture capable of modeling current and prospective proposals. Finally, we review some of the most important commercial products currently available.
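To make the correlation task concrete, here is a toy temporal correlator that groups alerts from the same resource arriving within a time window; this illustrates only the simplest family of techniques covered by the survey (rule-based, model-based, and statistical correlators are richer), and the alert fields are assumptions:

```python
from datetime import datetime, timedelta

def correlate(alerts, window=timedelta(seconds=30)):
    """Group consecutive alerts from the same resource that fall within
    a sliding time window of each other (toy temporal correlation)."""
    alerts = sorted(alerts, key=lambda a: a["time"])
    groups, current = [], []
    for a in alerts:
        if current and (a["time"] - current[-1]["time"] > window
                        or a["resource"] != current[-1]["resource"]):
            groups.append(current)
            current = []
        current.append(a)
    if current:
        groups.append(current)
    return groups

alerts = [{"time": datetime(2023, 1, 1, 0, 0, s), "resource": "r1"}
          for s in (0, 10, 50)]
print(len(correlate(alerts)))  # 2 groups: the 50s alert starts a new window
```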
8.
李翠锦, 瞿中. 《电讯技术》, 2023, 63(9): 1291-1299
To address the limited accuracy and speed of multi-object detection in complex traffic environments, a multi-layer-fusion multi-object detection and recognition algorithm based on the Feature Pyramid Network (FPN) is proposed to improve detection accuracy and network generalization. First, using the five-stage architecture of ResNet101, top-down feature maps are built by upsampling the spatial resolution by a factor of 2, the upsampled maps are merged with the bottom-up feature maps by element-wise addition, and a feature layer is constructed that fuses high-level semantic information with low-level geometric information. Then, to address the imbalance of training samples in bounding-box regression, the Efficient IOU Loss is combined with Focal Loss into an improved Focal EIOU Loss. Finally, to reflect real complex traffic conditions, a manually annotated mixed dataset is used for training. On the KITTI test set, the model improves the average detection accuracy and speed over FPN by 2.4% and 5 frame/s, respectively; on the Cityscale test set, the improvements are 1.9% and 4 frame/s.
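A sketch of the top-down merge just described (2x upsampling followed by element-wise addition), written with PyTorch; the 1x1 lateral projection and the 256-channel output follow common FPN practice and are assumptions, since the abstract specifies only the upsample-and-add step:

```python
import torch
import torch.nn.functional as F
from torch import nn

class TopDownMerge(nn.Module):
    """One FPN merge step: upsample the coarser map 2x and add it
    element-wise to a 1x1-projected bottom-up map (channel sizes are
    illustrative; the paper builds this over ResNet101's five stages)."""
    def __init__(self, c_bottom, c_out=256):
        super().__init__()
        self.lateral = nn.Conv2d(c_bottom, c_out, kernel_size=1)

    def forward(self, top, bottom):
        up = F.interpolate(top, scale_factor=2, mode="nearest")
        return up + self.lateral(bottom)

# e.g. merging a 256-channel top map with a 512-channel bottom-up stage
merge = TopDownMerge(c_bottom=512)
out = merge(torch.randn(1, 256, 16, 16), torch.randn(1, 512, 32, 32))
print(out.shape)  # torch.Size([1, 256, 32, 32])
```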
9.
The aim of this study is to propose a method for automatically building a quadrilateral network of curves from a huge number of triangular meshes. The curve net can serve as the framework for automatic surface reconstruction. The proposed method includes three main stages: mesh simplification, quadrangulation, and curve net generation. Mesh simplification reduces the number of meshes in accordance with a quadric error metric for each vertex, and additional post-processing criteria improve the shape of the reduced meshes. For quadrangulation, a front composed of a sequence of edges is introduced, and an algorithm is proposed to combine each pair of triangles along the front; a new front is then formed, and quadrangulation continues until all triangles are combined or converted. For curve net generation, each edge of the quadrilateral meshes is first projected onto the triangular meshes to acquire a set of slicing points, and a constrained curve fitting then converts each set of slicing points into a B-spline curve, with appropriate continuity conditions across adjacent curves. Several examples demonstrate the feasibility of the proposed method and its application in automatic surface reconstruction.
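A minimal sketch of the triangle-pairing step at the heart of the quadrangulation stage; the greedy first-come pairing below ignores the paper's advancing front and shape-quality criteria and only shows how two triangles sharing an edge combine into a quad:

```python
def quadrangulate(triangles):
    """Greedily combine pairs of triangles that share an edge into quads.

    triangles: list of (a, b, c) vertex-index tuples.
    """
    edge_to_tri = {}
    for ti, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_tri.setdefault(frozenset(e), []).append(ti)
    quads, used = [], set()
    for edge, tris in edge_to_tri.items():
        if len(tris) == 2 and not (set(tris) & used):
            t1, t2 = (triangles[t] for t in tris)
            e1, e2 = tuple(edge)
            o1 = next(v for v in t1 if v not in edge)  # vertex opposite the edge
            o2 = next(v for v in t2 if v not in edge)
            quads.append((o1, e1, o2, e2))             # cycle o1-e1-o2-e2
            used.update(tris)
    return quads

# Two triangles forming a square -> one quad
print(quadrangulate([(0, 1, 2), (0, 2, 3)]))
```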
10.
With the development of cloud infrastructure, more and more transaction processing systems are hosted on cloud platforms. Logs, which record the production behavior of a transaction processing system in the cloud, are widely used for triaging production failures. Log analysis of a cloud-based system faces challenges as data volumes grow, unstructured formats emerge, and untraceable failures occur more frequently, and additional requirements such as real-time analysis and failure recovery are being raised. Existing solutions have their own focuses and cannot fulfill these increasing requirements. To address the main requirements and issues, this paper proposes a new log model that classifies and analyzes the interactions of services and the detailed logging information produced during workflow execution. A workflow analysis technique is used to quickly triage production failures and assist failure recovery: a failed workflow can be reconstructed from failures on real-time production servers by the proposed log analysis solution. The proposed solution is evaluated in simulation on a large volume of log data and compared with a traditional solution, and the experimental results demonstrate the effectiveness and efficiency of the proposed triage log analysis and recovery solution.
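A toy sketch of the workflow-reconstruction idea: group service-interaction log lines by workflow id and surface the workflows that contain a failure. The log format, field names, and regular expression are illustrative assumptions, not the paper's log model:

```python
import re
from collections import defaultdict

LOG = re.compile(r"(?P<wf>wf-\d+)\s+(?P<svc>\S+)\s+(?P<status>OK|FAIL)")

def reconstruct_workflows(lines):
    """Rebuild per-workflow step sequences from log lines and return the
    workflows whose reconstructed sequence contains a failed step."""
    flows = defaultdict(list)
    for line in lines:
        m = LOG.search(line)
        if m:
            flows[m["wf"]].append((m["svc"], m["status"]))
    return {wf: steps for wf, steps in flows.items()
            if any(status == "FAIL" for _, status in steps)}

logs = ["wf-1 auth OK", "wf-1 payment FAIL", "wf-2 auth OK", "wf-2 ship OK"]
print(reconstruct_workflows(logs))  # {'wf-1': [('auth', 'OK'), ('payment', 'FAIL')]}
```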