161.
This study evaluates the potential of object-based image analysis combined with supervised machine learning to identify urban structure type patterns from Landsat Thematic Mapper (TM) images. The main aim is to assess how several critical choices commonly made during the training stage affect classification performance, and to give recommendations for classifier-dependent intelligent training. Particular emphasis is given to the influence of the size and class distribution of the training data, the training data sampling approach (user-guided or random), and the type of training samples (squares or segments) on the classification performance of a Support Vector Machine (SVM). Different feature selection algorithms are compared, and segmentation and classifier parameters are dynamically tuned for the specific image scene, classification task, and training data. The performance of the classifier is measured against a set of reference data sets obtained from manual image interpretation and is furthermore compared, on the basis of landscape metrics, with a very high resolution reference classification derived from light detection and ranging (lidar) measurements. The study highlights the importance of a careful design of the training stage and of dynamically tuned classifier parameters, especially when dealing with noisy data and small training sets. For the given experimental set-up, the study concludes that, given an optimized feature space and classifier parameters, training an SVM with segment-shaped samples that were sampled in a guided manner and balanced between the classes provided the best classification results. If square-shaped samples are used, random sampling provided better results than guided selection. Equally balanced sample distributions outperformed unbalanced training sets.
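The training-stage choices discussed above map onto standard tooling. Below is a minimal sketch (not the authors' pipeline), assuming scikit-learn and purely synthetic placeholder features, of balancing classes, selecting features, and tuning SVM parameters for a given scene via cross-validated grid search.

```python
# Sketch only: synthetic placeholder data, not the study's Landsat TM features.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))     # per-segment features (e.g. spectral, texture, shape)
y = rng.integers(0, 4, size=200)   # urban structure type labels (placeholder)

pipeline = Pipeline([
    ("select", SelectKBest(f_classif)),                   # keep the most discriminative features
    ("svm", SVC(kernel="rbf", class_weight="balanced")),  # compensate unequal class sizes
])
grid = GridSearchCV(
    pipeline,
    {"select__k": [10, 20, 30],
     "svm__C": [1, 10, 100],
     "svm__gamma": ["scale", 0.01, 0.1]},
    cv=5,
)
grid.fit(X, y)                     # parameters tuned for this scene / training set
print(grid.best_params_)
```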
162.
This paper considers the problem of achieving very accurate tracking of a pre-specified desired output trajectory for linear, multiple-input multiple-output, non-minimum-phase and/or non-hyperbolic, sampled-data, closed-loop control systems. The proposed approach is situated in the general framework of model stable inversion and introduces significant novelties aimed at reducing some theoretical and numerical limitations inherent in the methods usually proposed. In particular, the new method requires neither preactuation nor null initial conditions of the system. The desired output and the corresponding sought input are each partitioned into a transient component and a steady-state component (ut(k) and us(k), respectively, for the input). The desired transient component is freely assigned without requiring it to be null over an initial time interval, which drastically reduces the total settling time. The structure of ut(k) is assumed a priori to be given by a sampled smoothing spline function. The spline coefficients are determined as the least-squares solution of the over-determined system of linear equations obtained by imposing that the sampled spline function, assumed as reference input, yield the desired output over a properly defined transient interval. The steady-state input us(k) is computed analytically by exploiting the steady-state output response expressions for inputs of the corresponding class.
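A minimal numerical sketch of the least-squares step described above, under simplifying assumptions: a hypothetical discrete-time plant, a generic smooth basis standing in for the sampled smoothing spline, and a made-up desired transient output.

```python
# Sketch only: hypothetical plant and desired output, not the paper's system.
import numpy as np
from scipy.signal import dlsim, dlti

plant = dlti([0.0, 0.1], [1.0, -0.9], dt=1.0)   # toy discrete LTI model
N = 60
t = np.arange(N)
y_des = 1.0 - np.exp(-t / 15.0)                 # desired transient output (placeholder)

# Smooth basis for the transient input u_t(k); Gaussian bumps stand in here
# for the sampled smoothing spline used in the paper.
centers = np.linspace(0, N - 1, 10)
basis = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / 4.0) ** 2)

# Output is linear in the input, so the transient response is G @ c for the
# coefficient vector c; each column of G is the plant response to one basis function.
G = np.column_stack([dlsim(plant, basis[:, j])[1].ravel() for j in range(basis.shape[1])])

# Least-squares solution of the over-determined system G c = y_des.
c, *_ = np.linalg.lstsq(G, y_des, rcond=None)
u_t = basis @ c    # transient reference input that approximately yields y_des
```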
163.
Although very important in software engineering, establishing traceability links between software artifacts is extremely tedious and error-prone, and it requires significant effort. Even when approaches for automated traceability recovery exist, they provide the requirements analyst with a usually very long ranked list of candidate links that must be inspected manually. In this paper we introduce an approach called Estimation of the Number of Remaining Links (ENRL), which aims at estimating, via Machine Learning (ML) classifiers, the number of remaining positive links in a ranked list of candidate traceability links produced by a recovery approach based on Natural Language Processing (NLP) techniques. We have evaluated the accuracy of the ENRL approach by considering several ML classifiers and NLP techniques on three datasets from industry and academia, concerning traceability links among different kinds of software artifacts, including requirements, use cases, design documents, source code, and test cases. Results from our study indicate that: (i) specific estimation models are able to provide accurate estimates of the number of remaining positive links; (ii) the estimation accuracy depends on the choice of the NLP technique; and (iii) univariate estimation models outperform multivariate ones.
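A much-simplified sketch of the estimation idea (not the paper's models or datasets): summarize a ranked list of candidate-link similarity scores with a few features and train a regressor to predict how many positive links remain below an inspection cut. All data and feature choices here are illustrative assumptions.

```python
# Sketch only: synthetic ranked lists, not real traceability data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def list_features(scores, cut):
    """Summarize a descending similarity-score list, split at an inspection cut."""
    top, rest = scores[:cut], scores[cut:]
    return [top.mean(), rest.mean(), rest.max(), rest.std(), len(rest)]

rng = np.random.default_rng(1)
X, y = [], []
for _ in range(300):                       # synthetic "projects" with known link sets
    scores = np.sort(rng.random(100))[::-1]
    truth = rng.random(100) < scores       # higher-ranked candidates more often correct
    cut = 20
    X.append(list_features(scores, cut))
    y.append(int(truth[cut:].sum()))       # remaining positive links after the cut

model = RandomForestRegressor(random_state=0).fit(X, y)
print(model.predict([list_features(np.sort(rng.random(100))[::-1], 20)]))
```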
164.
Data warehouse loading and refreshment is typically performed by means of complex software processes called extraction–transformation–loading (ETL). In this paper, we propose a system based on a suite of visual languages for mastering several aspects of the ETL development process, turning it into a visual programming task. The approach can easily be generalized and applied to other data integration contexts beyond data warehouses. It introduces two new visual languages used to specify the ETL process, which can also be represented by means of UML activity diagrams. In particular, the first visual language supports data manipulation activities, whereas the second one provides traceability information on attributes, highlighting the impact of potential transformations on the integrated schemas that depend on them. Once the whole ETL process has been visually specified, the designer can invoke the automatic generation of an activity diagram representing a possible orchestration of the process based on its dependencies. The designer can edit such a diagram to modify the proposed orchestration, provided that the changes do not alter data dependencies. The final specification can be translated into code that is executable on the data sources. Finally, the effectiveness of the proposed approach has been validated through a user study in which we compared the effort needed to design an ETL process with our approach against the effort required with the main visual approaches described in the literature.
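As a rough illustration of the orchestration-generation step, the sketch below derives one admissible ordering of hypothetical ETL activities from their data dependencies via topological sorting; the activity names and dependencies are invented for the example and are not taken from the paper.

```python
# Sketch only: invented ETL activities and dependencies.
from graphlib import TopologicalSorter

# activity -> set of activities it depends on
dependencies = {
    "extract_orders": set(),
    "extract_customers": set(),
    "clean_orders": {"extract_orders"},
    "join_orders_customers": {"clean_orders", "extract_customers"},
    "load_warehouse": {"join_orders_customers"},
}

orchestration = list(TopologicalSorter(dependencies).static_order())
print(orchestration)   # any edit to this order must still respect the dependencies
```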
165.
This paper presents an inertial measurement unit-based human gesture recognition system that enables a robot instrument player to understand the instructions dictated by an orchestra conductor and adapt its musical performance accordingly. It is an extension of our previous publications on natural human–robot musical interaction. With this system, the robot can understand the real-time variations in musical parameters dictated by the conductor's movements, adding expression to its performance while staying synchronized with all the other human partner musicians. The enhanced interaction ability not only improves the overall live performance, but also allows the partner musicians, as well as the conductor, to better appreciate the joint musical performance, thanks to the complete naturalness of the interaction.
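As a purely hypothetical illustration (not the paper's recognition system) of one musical parameter a conductor's motion can convey, the sketch below estimates tempo from beat-like peaks in a simulated angular-rate magnitude signal; the sampling rate, signal, and thresholds are assumptions.

```python
# Sketch only: synthetic IMU signal, assumed sampling rate and thresholds.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # assumed IMU sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
gyro_mag = np.abs(np.sin(2 * np.pi * 1.0 * t)) \
    + 0.05 * np.random.default_rng(2).normal(size=t.size)

peaks, _ = find_peaks(gyro_mag, height=0.8, distance=int(0.3 * fs))
beat_period = np.diff(peaks).mean() / fs     # seconds per beat
print("estimated tempo:", 60.0 / beat_period, "BPM")
```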
166.
Context: code obfuscation is intended to obstruct code understanding and, eventually, to delay malicious code changes and ultimately render them uneconomical. Although code understanding cannot be completely impeded, code obfuscation makes it more laborious and troublesome, so as to discourage or retard code tampering. Despite the extensive adoption of obfuscation, its assessment has been addressed only indirectly, either by using internal metrics or by taking the point of view of code analysis, e.g., considering the associated computational complexity. To the best of our knowledge, there is no publicly available user study that measures the cost of understanding obfuscated code from the point of view of a human attacker. Aim: this paper experimentally assesses the impact of code obfuscation on the capability of human subjects to understand and change source code. In particular, it considers code protected with two well-known code obfuscation techniques, i.e., identifier renaming and opaque predicates. Method: we conducted a family of five controlled experiments, involving undergraduate and graduate students from four universities. During the experiments, subjects had to perform comprehension or attack tasks on decompiled clients of two Java network-based applications, either obfuscated using one of the two techniques or left unobfuscated. To assess and compare the obfuscation techniques, we measured the correctness and the efficiency of the performed tasks. Results: at least for the tasks we considered, the simpler technique (identifier renaming) proved more effective than the more complex one (opaque predicates) in preventing subjects from completing attack tasks.
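For concreteness, the toy sketch below illustrates the two studied techniques; it is written in Python for consistency with the other sketches in this list, whereas the experiments in the paper used decompiled Java clients.

```python
# Conceptual illustration only, not code from the study.

# Original, readable code.
def apply_discount(price, customer_is_member):
    if customer_is_member:
        return price * 0.9
    return price

# (1) Identifier renaming: meaningful names replaced with meaningless ones.
def f1(a1, a2):
    if a2:
        return a1 * 0.9
    return a1

# (2) Opaque predicate: a condition whose value is fixed and known to the
# obfuscator (x*x >= 0 is always true) but not obvious to a reader, used to
# guard the real logic and to introduce a dead, misleading branch.
def f2(a1, a2, x=7):
    if x * x >= 0:                 # always true, so this branch always runs
        return a1 * 0.9 if a2 else a1
    return a1 * 1.1                # dead code, never executed
```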
167.
To support program comprehension, software artifacts can be labeled—for example within software visualization tools—with a set of representative words, hereafter referred to as labels. Such labels can be obtained using various approaches, including Information Retrieval (IR) methods or other simple heuristics. They provide a bird's-eye view of the source code, allowing developers to look over software components quickly and make more informed decisions on which parts of the source code they need to analyze in detail. However, few empirical studies have been conducted to verify whether the extracted labels make sense to software developers. This paper investigates (i) to what extent various IR techniques and other simple heuristics overlap with (and differ from) labeling performed by humans; (ii) what kinds of source code terms humans use when labeling software artifacts; and (iii) what factors—in particular, what characteristics of the artifacts to be labeled—influence the performance of automatic labeling techniques. We conducted two experiments in which we asked a group of students (38 in total) to label 20 classes from two Java software systems, JHotDraw and eXVantage. Then, we analyzed to what extent the words identified by automated techniques—including Vector Space Models, Latent Semantic Indexing (LSI), Latent Dirichlet Allocation (LDA), as well as customized heuristics extracting words from specific source code elements—overlap with those identified by humans. Results indicate that, in most cases, simpler automatic labeling techniques—based on words extracted from class and method names as well as from class comments—better reflect human-based labeling. Clustering-based approaches (LSI and LDA) are instead more worthwhile for highly verbose source code artifacts, as well as for artifacts that require more effort to label manually. The obtained results help to define guidelines on how to build effective automatic labeling techniques, and provide some insights on the actual usefulness of automatic labeling during program comprehension tasks.
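A minimal sketch of the kind of simple labeling heuristic the study found effective: collect terms from identifiers and comments, split camelCase names, drop stop words, and keep the most frequent terms as the label. The sample source text, stop-word list, and label size below are placeholders.

```python
# Sketch only: toy source snippet and stop-word list.
import re
from collections import Counter

source = '''
// Draws a rectangle figure and handles its resize handles.
class RectangleFigure {
    void drawRectangle() {}
    void updateResizeHandles() {}
}
'''

STOP = {"the", "a", "and", "its", "void", "class"}

def split_camel(identifier):
    """Split camelCase/PascalCase identifiers into their word parts."""
    return re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", identifier)

words = []
for token in re.findall(r"[A-Za-z_][A-Za-z0-9_]*", source):
    words += [w.lower() for w in split_camel(token)]

label = [w for w, _ in Counter(w for w in words if w not in STOP).most_common(5)]
print(label)   # most frequent identifier/comment terms, used as the class label
```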
168.
In this paper we investigate the structure of the Internet by exploiting an efficient algorithm for extracting k-dense communities from the Internet AS-level topology graph. The analyses show that the most well-connected communities consist of a small number of ASs characterized by a high level of clustering, although they tend to direct many of their connections to ASs outside the community. In addition, these communities are mainly composed of ASs that participate in Internet Exchange Points (IXPs) and have a worldwide geographical scope. Regarding k-max-dense ASs, we found that they play a primary role in Internet connectivity, since they are involved in a huge share of Internet connections (42%). We also investigated the properties of three classes of k-max-dense ASs: Content Delivery Networks (CDNs), Internet Backbone Providers (IBPs), and Tier-1s. Specifically, we show that CDNs and IBPs heavily exploit IXPs by participating in many of them and connecting to many IXP participant ASs. On the other hand, we found that a high percentage of connections originated by Tier-1 ASs are likely to involve national ASs that do not participate in IXPs.
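A minimal sketch of k-dense extraction, assuming networkx and a small toy graph as a stand-in for the AS-level topology: keep only edges whose endpoints share at least k-2 neighbours, pruning iteratively until the condition holds everywhere (i.e., every surviving edge belongs to at least k-2 triangles). This is the standard k-dense definition, not necessarily the exact algorithm used in the paper.

```python
# Sketch only: toy graph instead of the real AS-level topology.
import networkx as nx

def k_dense(graph, k):
    """Return the subgraph in which every edge's endpoints share >= k-2 neighbours."""
    g = graph.copy()
    changed = True
    while changed:
        weak = [(u, v) for u, v in g.edges()
                if len(list(nx.common_neighbors(g, u, v))) < k - 2]
        changed = bool(weak)
        g.remove_edges_from(weak)
    g.remove_nodes_from([n for n in list(g) if g.degree(n) == 0])
    return g

g = nx.karate_club_graph()          # stand-in for the AS graph
core = k_dense(g, 4)
print(core.number_of_nodes(), core.number_of_edges())
```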
169.
The breakthrough of the Cloud comes from its service-oriented perspective, where everything, including the infrastructure, is provided "as a service". This model is attractive and convenient for both providers and consumers; as a consequence, the Cloud paradigm is growing quickly and spreading widely, including in non-commercial contexts. In such a scenario, we propose to incorporate some elements of volunteer computing into the Cloud paradigm through the Cloud@Home solution, bringing into the mix nodes and devices provided by potentially any owner or administrator, opening up substantial computational resources to contributors and allowing their utilization to be maximized. This paper presents and discusses the first step towards Cloud@Home: providing quality of service and service level agreement facilities on top of unreliable, intermittent Cloud providers. Some of the main issues and challenges of Cloud@Home, such as the monitoring, management, and brokering of resources according to service level requirements, are addressed through the design of a framework core architecture. All the tasks committed to the architecture's modules and components, as well as the most relevant component interactions, are identified and discussed from both structural and behavioural viewpoints. Some encouraging experiments on an early implementation prototype deployed in a real testing environment are also documented in the paper.
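A deliberately simplified sketch (an assumption for illustration, not the Cloud@Home architecture) of the brokering idea discussed above: select volunteer-provided resources whose monitored availability and capacity satisfy a service level requirement, over-provisioning to tolerate intermittent providers.

```python
# Sketch only: invented node data and selection policy.
from dataclasses import dataclass

@dataclass
class VolunteerNode:
    name: str
    cores: int
    availability: float      # monitored fraction of time the node is reachable

def broker(nodes, needed_cores, min_availability, redundancy=1.5):
    """Pick the most reliable eligible nodes until the (over-provisioned) demand is met."""
    eligible = sorted((n for n in nodes if n.availability >= min_availability),
                      key=lambda n: n.availability, reverse=True)
    picked, cores = [], 0
    for n in eligible:
        if cores >= needed_cores * redundancy:    # extra capacity to absorb churn
            break
        picked.append(n)
        cores += n.cores
    return picked if cores >= needed_cores * redundancy else None

nodes = [VolunteerNode("home-1", 4, 0.92), VolunteerNode("lab-2", 8, 0.99),
         VolunteerNode("home-3", 2, 0.60)]
print(broker(nodes, needed_cores=8, min_availability=0.9))
```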
170.
This paper presents a methodology for a reliable comparison among Inertial Measurement Units (IMUs) or attitude estimation devices in a Vicon environment. The misalignment among the reference systems and the lack of synchronization among the devices are the main obstacles to a correct performance evaluation when using Vicon as the reference measurement system. We propose a genetic algorithm coupled with Dynamic Time Warping (DTW) to solve these issues. To validate the efficacy of the methodology, we implemented a performance comparison between the WB-3 ultra-miniaturized IMU, developed by our group, and the commercial IMU InertiaCube3 by InterSense.
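A minimal sketch of the DTW ingredient of the method: a plain dynamic-programming DTW distance between a simulated IMU trace and a Vicon reference. In the paper this cost is coupled with a genetic algorithm that searches over misalignment and synchronization parameters; here a brute-force search over a sample offset stands in for that step, and all signals are synthetic placeholders.

```python
# Sketch only: synthetic signals, brute-force offset search instead of a GA.
import numpy as np

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-programming DTW distance between two 1-D signals."""
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

t = np.linspace(0, 4, 200)
vicon = np.sin(2 * np.pi * t)                      # reference trajectory (placeholder)
imu = np.sin(2 * np.pi * (t - 0.12)) \
    + 0.02 * np.random.default_rng(3).normal(size=t.size)

offsets = np.arange(-30, 31)                        # candidate shifts in samples
best = min(offsets, key=lambda k: dtw_distance(np.roll(imu, -k), vicon))  # circular shift for simplicity
print("best offset (samples):", best)
```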