Full-text access: subscription 6669; free 453; free (domestic) 9

Articles by subject:
Electrical engineering: 78
General: 2
Chemical industry: 1931
Metalworking: 112
Machinery and instrumentation: 203
Building science: 287
Mining engineering: 24
Energy and power: 261
Light industry: 978
Hydraulic engineering: 61
Petroleum and natural gas: 35
Radio and electronics: 415
General industrial technology: 1039
Metallurgical industry: 308
Atomic energy technology: 47
Automation technology: 1350

Articles by year:
2024: 22
2023: 96
2022: 272
2021: 310
2020: 208
2019: 231
2018: 277
2017: 291
2016: 280
2015: 233
2014: 312
2013: 568
2012: 445
2011: 547
2010: 372
2009: 406
2008: 339
2007: 312
2006: 263
2005: 196
2004: 146
2003: 165
2002: 136
2001: 77
2000: 83
1999: 64
1998: 50
1997: 55
1996: 51
1995: 39
1994: 30
1993: 27
1992: 15
1991: 21
1990: 12
1989: 19
1988: 11
1987: 11
1986: 11
1985: 14
1984: 9
1983: 8
1982: 14
1981: 6
1980: 9
1979: 10
1978: 19
1976: 8
1975: 5
1973: 5

7131 results in total (search time: 0 ms)
81.
The choice of the best interpolation algorithm for data gathered at a finite number of locations has been a persistently relevant topic. Typical papers take a single data set, a single set of data points, and a handful of algorithms. The process treats a subset I of the data points as known, builds the interpolant with each algorithm, applies it to the points of another subset C, and evaluates the MAE (mean absolute error), the RMSE (root mean square error), or some other metric over those points. The smaller these statistics, the better the algorithm is deemed to be, so a deterministic ranking of the methods (without a confidence level) can be derived from them. Ties between methods are usually not considered. In this article a complete protocol is proposed in order to build, with modest additional effort, a ranking with a confidence level. To illustrate this point, the results of two tests are shown. In the first, a simple Monte Carlo experiment was devised using irregularly distributed points taken from a reference DEM (digital elevation model) in raster format. Different metrics led to different rankings, suggesting that the choice of the metric used to define the ‘best interpolation algorithm’ involves a trade-off. The second experiment used mean daily radiation data from an international interpolation comparison exercise and RMSE as the metric of success. Only five simple interpolation methods were employed. The ranking obtained with this protocol correctly anticipated the first and second places, which were afterwards confirmed using independent control data.
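As a rough illustration of the kind of protocol described in the abstract, the sketch below runs repeated random splits into a "known" subset I and a control subset C, scores three SciPy interpolators by RMSE on C, and reports how often each method attains each rank. The data, the choice of interpolators and the rank-frequency summary are assumptions made for the example, not the authors' exact procedure.

```python
# Minimal sketch: Monte Carlo cross-validation ranking of interpolation
# methods, with rank frequencies as a rough confidence estimate.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Hypothetical scattered elevation samples (x, y, z); in the paper these
# come from a reference DEM in raster format.
pts = rng.uniform(0, 100, size=(500, 2))
z = np.sin(pts[:, 0] / 10) + np.cos(pts[:, 1] / 15) + rng.normal(0, 0.05, 500)

methods = ["nearest", "linear", "cubic"]   # stand-ins for the compared algorithms

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

rank_counts = {m: np.zeros(len(methods), dtype=int) for m in methods}
for _ in range(200):                         # Monte Carlo repetitions
    idx = rng.permutation(len(z))
    known, ctrl = idx[:400], idx[400:]       # subsets I and C
    scores = {}
    for m in methods:
        est = griddata(pts[known], z[known], pts[ctrl], method=m)
        ok = ~np.isnan(est)                  # linear/cubic leave hull exterior as NaN
        scores[m] = rmse(est[ok], z[ctrl][ok])
    for rank, m in enumerate(sorted(scores, key=scores.get)):
        rank_counts[m][rank] += 1

for m in methods:                            # empirical probability of each rank
    print(m, rank_counts[m] / rank_counts[m].sum())
```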
82.
83.
Relational learning algorithms mine complex databases for interesting patterns. Usually, the search space of patterns grows very quickly as data size increases, making it impractical to solve important problems. In this work we present the design of a relational learning system that takes advantage of graphics processing units (GPUs) to perform the most time-consuming function of the learner, rule coverage. To evaluate performance, we use four applications: a widely used relational learning benchmark for predicting carcinogenesis in rodents, an application in chemoinformatics, an application in opinion mining, and an application in mining health record data. We compare results obtained with a single CPU and with multiple CPUs on a multicore host against the GPU version. Results show that the GPU version of the learner is up to eight times faster than the best CPU version.
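The following toy sketch is meant only to show why rule coverage is an attractive target for GPU acceleration: counting the examples covered by a candidate rule is an embarrassingly parallel scan over the data. The propositional table, the rule representation and the NumPy vectorization below are illustrative assumptions; the system described in the abstract works on relational data and offloads this computation to the GPU.

```python
# Toy coverage computation: each candidate rule is a conjunction of literals,
# and its coverage is the count of examples satisfying all of them.
import numpy as np

rng = np.random.default_rng(1)
examples = rng.integers(0, 5, size=(1_000_000, 8))   # hypothetical feature table
labels = rng.integers(0, 2, size=1_000_000)

# A rule as a list of (attribute_index, required_value) literals -- assumed form.
rule = [(0, 3), (4, 1), (7, 2)]

def coverage(rule, examples, labels):
    mask = np.ones(len(examples), dtype=bool)
    for attr, val in rule:                 # one vectorized pass per literal
        mask &= examples[:, attr] == val   # evaluated over all examples at once
    return int(mask.sum()), int((mask & (labels == 1)).sum())

covered, positives_covered = coverage(rule, examples, labels)
print(covered, positives_covered)
```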
84.
Deduplication is the task of identifying the entities in a data set that refer to the same real-world object. Over the last decades, this problem has been investigated extensively and many techniques have been proposed to improve the efficiency and effectiveness of deduplication algorithms. As data sets become larger, such algorithms may run into critical bottlenecks regarding memory usage and execution time. In this context, cloud computing environments have been used for scaling out data quality algorithms. In this paper, we investigate the efficacy of different machine learning techniques for scaling out virtual clusters for the execution of deduplication algorithms under predefined time restrictions. We also propose specific heuristics (Best Performing Allocation, Probabilistic Best Performing Allocation, Tunable Allocation, Adaptive Allocation and Sliced Training Data) which, together with the machine learning techniques, are able to tune the virtual cluster estimations as demands fluctuate over time. The experiments we have carried out using data sets of multiple scales have provided many insights regarding the adequacy of the considered machine learning algorithms and proposed heuristics for tackling cloud computing provisioning.
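A minimal sketch of the underlying idea, not of the paper's heuristics: fit a regression model on past runs mapping workload size and cluster size to execution time, then choose the smallest virtual cluster whose predicted runtime meets the deadline. The training history, the random forest model and the linear search below are assumptions made for the example.

```python
# Learn (workload size, number of VMs) -> runtime from past runs, then pick
# the smallest cluster predicted to finish within the time restriction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical history: [records_to_deduplicate, num_vms] -> observed runtime (s)
X = np.array([[1e6, 2], [1e6, 4], [5e6, 4], [5e6, 8], [1e7, 8], [1e7, 16]])
y = np.array([900.0, 480.0, 2100.0, 1150.0, 2300.0, 1250.0])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def vms_for_deadline(records, deadline_s, max_vms=64):
    """Smallest cluster size whose predicted runtime meets the deadline."""
    for vms in range(1, max_vms + 1):
        predicted = model.predict([[records, vms]])[0]
        if predicted <= deadline_s:
            return vms, predicted
    return max_vms, model.predict([[records, max_vms]])[0]

print(vms_for_deadline(records=8e6, deadline_s=1500))
```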
85.
In the context of fault detection and isolation of linear parameter-varying systems, a challenging task arises when the dynamics and the available measurements render the model unobservable, which invalidates the use of standard set-valued observers. Two results are obtained in this paper: first, using a left-coprime factorization, one can achieve set-valued estimates with ultimately bounded hyper-volume and convergence dependent on the slowest unobservable mode; second, by rewriting the set-valued observer equations and taking advantage of a coprime factorization, it is possible to obtain a low-complexity fault detection and isolation method. Performance is assessed through simulation, illustrating, in particular, the detection time for various types of faults. Copyright © 2017 John Wiley & Sons, Ltd.
86.
Nowadays, the prevailing use of networks based on traditional centralized management systems is reflected in a fast increase in management costs. The growth in the number of network devices and services reinforces the need to distribute management responsibilities across the network devices. In this approach, each device executes common network management functionalities, becoming part of the overall network management platform. In this paper, we present a Unified Distributed Network Management (UDNM) framework that provides a unified (wired and wireless) management solution, in which further network services can take part in this infrastructure, e.g., flow monitoring, accurate routing decisions, distributed policy dissemination, etc. This framework is divided into two main components: (A) situation awareness, which sets up initial information through bootstrapping, discovery, the fault-management process and the exchange of management information; and (B) an Autonomic Decision System (ADS) that performs distributed decisions in the network with incomplete information. We deploy the UDNM framework in a testbed spanning two cities (≈250 km apart), different standards (IEEE 802.3, IEEE 802.11 and IEEE 802.16e) and network technologies, such as a wired virtual grid, wireless ad-hoc gateways, and ad-hoc mobile access devices. The UDNM framework integrates management functionalities into the managed devices, proving to be a lightweight and responsive framework. The performance analysis shows that the UDNM framework is feasible for unifying device management functionalities and for making accurate decisions on top of a real network.
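To make the two-component architecture more concrete, the sketch below shows a device-resident management agent that merges neighbour reports (situation awareness) and makes a local decision from an incomplete view (a stand-in for the ADS). All message fields, names and the decision rule are hypothetical; the UDNM framework itself is not reproduced here.

```python
# Rough sketch of "each device runs management functions": merge whatever
# information neighbours share, then decide with a partial, possibly stale view.
from dataclasses import dataclass, field

@dataclass
class MgmtReport:                       # state advertised by a neighbour
    device_id: str
    load: float                         # e.g. fraction of link capacity in use
    reachable: bool

@dataclass
class ManagementAgent:
    device_id: str
    neighbours: dict = field(default_factory=dict)   # partial local view

    def on_report(self, report: MgmtReport):
        """Situation awareness: merge incoming management information."""
        self.neighbours[report.device_id] = report

    def choose_next_hop(self):
        """Local decision with incomplete information: pick the least loaded
        neighbour currently believed to be reachable, if any."""
        candidates = [r for r in self.neighbours.values() if r.reachable]
        if not candidates:
            return None                 # no usable knowledge yet
        return min(candidates, key=lambda r: r.load).device_id

agent = ManagementAgent("gw-1")
agent.on_report(MgmtReport("gw-2", load=0.7, reachable=True))
agent.on_report(MgmtReport("gw-3", load=0.2, reachable=True))
print(agent.choose_next_hop())          # -> "gw-3"
```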
87.
The adoption of quality assurance methods based on software process improvement models has been regarded as an important source of variability in software productivity. Some companies perceive their implementation as prohibitively costly, whereas some authors see in their use a way to comply with software development patterns and standards, produce economic value and improve corporate performance. In this paper, we investigate the relationship between quality maturity levels and labor productivity, using a data set of 687 Brazilian software firms. We study the relationship between labor productivity, measured as the annual gross revenue per worker, and quality maturity levels appraised from 2006 to 2012 according to two distinct software process improvement models: MPS.BR and CMMI. We perform independent statistical tests using appraisals carried out under each of these models, thereby obtaining a data set with as many observations as possible, in order to seek strong support for our research. We first show that MPS.BR- and CMMI-appraised quality maturity levels are correlated, but we find no statistical evidence that they are related to higher labor productivity or productivity growth. On the contrary, we present evidence suggesting that average labor productivity is higher in software companies without appraised quality levels. Moreover, our analyses suggest that companies with appraised quality maturity levels are more or less productive depending on factors such as their business nature, main origin of capital and maintained quality level.
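The kind of comparison reported in the abstract can be illustrated as follows: compare revenue-per-worker distributions of firms with and without an appraised maturity level. The data below are synthetic and the choice of a Mann-Whitney U test is an assumption made for the sketch, not necessarily the test used in the paper.

```python
# Compare labor productivity (revenue per worker) between two groups of firms.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical annual gross revenue per worker (in thousands)
appraised = rng.lognormal(mean=4.6, sigma=0.5, size=120)
not_appraised = rng.lognormal(mean=4.7, sigma=0.6, size=400)

u, p = stats.mannwhitneyu(appraised, not_appraised, alternative="two-sided")
print(f"median appraised   = {np.median(appraised):8.1f}")
print(f"median unappraised = {np.median(not_appraised):8.1f}")
print(f"Mann-Whitney U = {u:.0f}, p-value = {p:.3f}")
```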
88.
In today’s global competitive environment, the need for continuous improvement is a matter of considerable importance within manufacturing enterprises. To this end, project managers, and managers in general, design and assess different projects with the purpose of achieving efficient processes, reducing costs and waste, increasing product and service quality, developing new products and services, enhancing customer relationship management, optimising enterprise resources, and so on. However, it is well known that managing enterprise resources so as to accomplish the effective completion of projects is a complex task. Furthermore, it has been recognised that the way staff actually understand the purpose of a project, the way they perform the different project activities, and the extent to which they are able to influence project design and assessment are key factors influencing the success of a project. This paper presents a systemic methodology, based on program logic models and system dynamics, for designing and assessing projects more effectively and efficiently, with the aim of facilitating a clear understanding of the needs, purposes, goals, activities and tasks of a project among its stakeholders towards achieving success.
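For readers unfamiliar with system dynamics, the sketch below integrates a generic stock-and-flow project model (remaining work, completed work, rework) with a simple Euler loop. The structure and parameters are invented for illustration and are not the model proposed in the paper.

```python
# Generic system dynamics sketch: Euler integration of a stock-and-flow model.
remaining_work = 100.0      # stock: tasks left to do
completed_work = 0.0        # stock: tasks finished and accepted
staff, productivity = 5, 0.8          # tasks per person per week
error_fraction, dt = 0.15, 1.0        # share of work sent back as rework; step = 1 week

week = 0
while remaining_work > 0.5 and week < 120:
    completion_rate = min(staff * productivity, remaining_work / dt)  # flow out of the stock
    rework_rate = completion_rate * error_fraction                    # flow back into it
    remaining_work += (rework_rate - completion_rate) * dt
    completed_work += (completion_rate - rework_rate) * dt
    week += 1

print(f"project finishes around week {week}, work accepted: {completed_work:.1f}")
```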
89.
With the digitization of the TV signal and the current market growth of connected TVs, the authors envision the appearance of accessibility barriers for visually impaired persons. The paper addresses the following hypotheses: (1) visually impaired users want to extend their TV usage to explore new TV features; (2) TV applications show lower conformance with accessibility guidelines than their desktop versions. Additionally, the authors wanted to assess whether guideline conformance reflected the real TV accessibility problems experienced by users. The methods used in this study included surveys aimed at characterizing the interest of the visually impaired population in the use of TV, and specifically of Web applications on TV; an automated accessibility evaluation comparing the TV and desktop versions of the same Web applications, to understand their conformance with accessibility guidelines; and a user study in which participants with visual impairments were asked to perform tasks on both versions. From the survey, we confirmed that people with visual disabilities are interested in extra features on their TV. Results from the automated accessibility evaluation show that the TV applications are at a significantly better level of conformance with accessibility guidelines. The user study illustrated that users were unable to complete any task using the TV versions of the applications. The results of these studies demonstrate that the new features that come with connected TVs still have a long way to go before they are accessible to all. Furthermore, they lead us to concur with other works that automated evaluations are not enough to assess the accessibility of a Web page.
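As a toy example of what an automated accessibility evaluation checks, the snippet below flags two common guideline failures, images without alternative text and form fields without an associated label, using BeautifulSoup. The study relies on a full guideline checker; this fragment only conveys the flavour of such rules.

```python
# Flag two common accessibility failures in an HTML fragment.
from bs4 import BeautifulSoup

html = """
<form><input type="text" name="q"><label for="mail">e-mail</label>
<input id="mail" type="email"></form>
<img src="logo.png"><img src="banner.png" alt="station banner">
"""

soup = BeautifulSoup(html, "html.parser")

imgs_without_alt = [img for img in soup.find_all("img") if not img.get("alt")]
labelled_ids = {lab.get("for") for lab in soup.find_all("label")}
unlabelled_inputs = [i for i in soup.find_all("input") if i.get("id") not in labelled_ids]

print(f"{len(imgs_without_alt)} image(s) missing alt text")
print(f"{len(unlabelled_inputs)} form field(s) without an associated label")
```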
90.
This paper addresses the problem of stabilizing an eye-in-hand system to a desired equilibrium point; the system consists of a single camera mounted on a rigid body free to move on SE(3). It is assumed that there is a collection of landmarks fixed in the environment and that the image coordinates of those landmarks are provided to the system by an on-board CCD camera. The proposed method addresses not only the problem of stabilization but also that of maintaining feature visibility along the system’s trajectory. The resulting solution consists of a feedback control law based on the current and desired image coordinates and on reconstructed attitude and depth-ratio information, which guarantees that (i) the desired equilibrium point is an almost global attractor; (ii) a set of necessary conditions for feature visibility holds along the system’s trajectories; and (iii) the image of a predefined feature point is kept inside the camera’s field of view.
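For context, the snippet below implements the classical image-based visual servoing law v = -λ L⁺ (s − s*), which drives current image coordinates toward desired ones. It is not the controller proposed in the paper, which additionally exploits reconstructed attitude and depth-ratio information and provides almost-global convergence and visibility guarantees; the feature points and depths here are hypothetical.

```python
# Classical IBVS step: camera twist from the pseudo-inverse of the stacked
# interaction matrices of the observed point features.
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, depths, gain=0.5):
    """Camera twist (v, w) driving current features s toward desired s_star."""
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(s, depths)])
    error = (np.asarray(s) - np.asarray(s_star)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Four hypothetical point features (normalized coordinates) and rough depths.
s      = [(0.10, 0.05), (-0.12, 0.07), (0.08, -0.11), (-0.09, -0.06)]
s_star = [(0.05, 0.05), (-0.05, 0.05), (0.05, -0.05), (-0.05, -0.05)]
print(ibvs_velocity(s, s_star, depths=[1.0, 1.1, 0.9, 1.0]))
```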