931.
In plant phenotyping, there is demand for high-throughput, non-destructive systems that can accurately analyse plant traits by measuring features such as plant volume, leaf area, and stem length. Existing vision-based systems either focus on speed, using 2D imaging that is consequently inaccurate, or on accuracy, using time-consuming 3D methods. In this paper, we present a computer-vision system for seedling phenotyping that combines the best of both approaches by utilizing a fast three-dimensional (3D) reconstruction method. We developed image-processing methods for the identification and segmentation of plant organs (stem and leaf) from the 3D plant model. Measurements of plant features such as plant volume, leaf area, and stem length are estimated from these plant segments. We evaluate the accuracy of our system by comparing its measurements with ground-truth measurements obtained destructively by hand. The results indicate that the proposed system is very promising.
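The abstract does not state how the trait measurements are computed; as a hedged illustration only, features such as surface area and volume can be estimated from a closed triangle mesh of the reconstructed plant. The function names and the vertex/face layout below are assumptions, not the authors' implementation:

```python
import numpy as np

def mesh_surface_area(verts, faces):
    # Sum of triangle areas: |cross(b - a, c - a)| / 2 per face.
    a, b, c = (verts[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

def mesh_volume(verts, faces):
    # Sum of signed tetrahedron volumes against the origin.
    # Assumes a closed mesh with consistently oriented faces.
    a, b, c = (verts[faces[:, i]] for i in range(3))
    return abs(np.einsum('ij,ij->i', a, np.cross(b, c)).sum()) / 6.0
```

On a consistently oriented closed mesh, leaf area would instead be the summed area of the faces belonging to a single segmented leaf.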
932.
Applications in industry have often grown and improved over many years. As their performance demands increase, they too need to benefit from the availability of multi-core processors. However, reimplementation from scratch, and even restructuring, of these industrial applications is very expensive, often due to high certification effort. A strategy for systematic parallelization of legacy code is therefore needed. We present a parallelization approach for hard real-time systems that ensures high reuse of legacy code and preserves timing analysability. To show its applicability, we apply it to the core algorithm of an avionics application as well as to the control program of a large construction machine. We create models of the legacy programs that expose the potential for parallelism, optimize them, and change the source code accordingly. The parallelized applications are placed on a predictable multi-core processor with up to 18 cores. For evaluation, we compare the worst-case execution times and their speedups. Furthermore, we analyse limitations that arise during the parallelization process.
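The abstract compares worst-case execution times and speedups across core counts. As a generic back-of-the-envelope model (Amdahl's law, not the authors' timing analysis), the attainable speedup is bounded by the fraction of the WCET that must remain serial:

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Upper bound on speedup when `serial_fraction` of the
    worst-case execution time cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)
```

For example, with 10% of the WCET serial, even the 18-core platform mentioned above could not exceed a speedup of about 6.7 under this model.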
933.
In probabilistic planning problems, which are usually modeled as Markov Decision Processes (MDPs), it is often difficult or impossible to obtain an accurate estimate of the state transition probabilities. This limitation can be overcome by modeling these problems as Markov Decision Processes with imprecise probabilities (MDP-IPs). Robust LAO* and Robust LRTDP are efficient algorithms for solving a special class of MDP-IPs in which the probabilities lie in given intervals, known as Bounded-Parameter Stochastic Shortest-Path MDPs (BSSP-MDPs). However, they do not make clear what assumptions must hold to find a robust solution (the best policy under the worst model). In this paper, we propose a new efficient algorithm for BSSP-MDPs, called Robust ILAO*, which outperforms Robust LAO* and Robust LRTDP, considered the state of the art in robust probabilistic planning. We also define the assumptions required to ensure a robust solution and prove that the Robust ILAO* algorithm converges to optimal values if the initial value of every state is admissible.
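To make the "worst model" idea concrete, here is a minimal sketch of the robust Bellman backup for interval transition probabilities: the adversary picks a distribution inside the per-successor bounds that maximizes expected cost. This is the standard greedy solution for interval ambiguity sets; the function names are illustrative, not from the paper:

```python
def worst_case_expectation(values, p_low, p_high):
    """Adversarial distribution within [p_low, p_high] (componentwise,
    summing to 1) that maximizes sum(p * values)."""
    # Start every successor at its lower bound, then spend the remaining
    # probability mass on the highest-value successors first.
    p = list(p_low)
    budget = 1.0 - sum(p_low)
    for i in sorted(range(len(values)), key=lambda i: -values[i]):
        give = min(p_high[i] - p_low[i], budget)
        p[i] += give
        budget -= give
    return sum(pi * v for pi, v in zip(p, values))

def robust_backup(cost, succ_values, p_low, p_high):
    # Worst-model backup for a cost-minimizing shortest-path problem:
    # immediate cost plus adversarial expected successor value.
    return cost + worst_case_expectation(succ_values, p_low, p_high)
```

A robust planner applies this backup in place of the usual expectation when updating state values.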
934.
Cloud computing systems handle large volumes of data using almost unlimited computational resources, while spatial data warehouses (SDWs) are multidimensional databases that store huge volumes of both spatial and conventional data. Cloud computing environments are considered adequate to host voluminous databases, process analytical workloads, and deliver database as a service, while spatial online analytical processing (spatial OLAP) queries issued over SDWs are intrinsically analytical. However, hosting an SDW in the cloud and processing spatial OLAP queries over such a database pose new challenges. In this article, we introduce the novel concepts of cloud SDW and spatial OLAP as a service, and then detail the design of novel schemas for cloud SDWs and for spatial OLAP query processing over them. Furthermore, we evaluate the performance of spatial OLAP query processing in cloud SDWs using our own query processor aided by a cloud spatial index. Moreover, we describe the cloud spatial bitmap index, which improves the performance of spatial OLAP queries in cloud SDWs, and assess it through an experimental evaluation. The results of our experiments reveal that this index reduced query response times by 58.20% to 98.89%.
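The paper's cloud spatial bitmap index is not specified in the abstract; as a hedged illustration of the general bitmap-index idea it builds on, the toy below keeps one bitmap per distinct attribute value (a Python int used as a bitset) and answers membership predicates with bitwise OR. Class and method names are invented for this sketch:

```python
class BitmapIndex:
    """Toy bitmap index: one bitmap per distinct attribute value;
    bitwise AND/OR of bitmaps answers OLAP-style predicates."""

    def __init__(self, column):
        self.bitmaps = {}
        for row, value in enumerate(column):
            # Set bit `row` in the bitmap of this value.
            self.bitmaps[value] = self.bitmaps.get(value, 0) | (1 << row)

    def rows_matching(self, *values):
        # OR of the selected bitmaps -> row ids satisfying "attr IN values".
        bits = 0
        for v in values:
            bits |= self.bitmaps.get(v, 0)
        return [r for r in range(bits.bit_length()) if bits >> r & 1]
```

A spatial variant additionally pre-tests candidate objects against spatial predicates before the final refinement step.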
935.
In the context of fault detection and isolation of linear parameter‐varying systems, a challenging situation arises when the dynamics and the available measurements render the model unobservable, which invalidates the use of standard set‐valued observers. Two results are obtained in this paper: first, using a left‐coprime factorization, one can achieve set‐valued estimates with ultimately bounded hyper‐volume and convergence dependent on the slowest unobservable mode; second, by rewriting the set‐valued observer equations and taking advantage of a coprime factorization, it is possible to obtain a low‐complexity fault detection and isolation method. Performance is assessed through simulation, illustrating, in particular, the detection time for various types of faults. Copyright © 2017 John Wiley & Sons, Ltd.
936.
Nowadays, the prevailing use of networks based on traditional centralized management systems is reflected in rapidly increasing management costs. The growth in the number of network devices and services reinforces the need to distribute management responsibilities across the network devices. In this approach, each device executes common network-management functionalities and becomes part of the overall network-management platform. In this paper, we present a Unified Distributed Network Management (UDNM) framework that provides a unified (wired and wireless) management solution on top of which further network services can run, e.g., flow monitoring, accurate routing decisions, and distributed policy dissemination. The framework is divided into two main components: (A) situation awareness, which establishes initial information through bootstrapping, discovery, a fault-management process, and the exchange of management information; and (B) an Autonomic Decision System (ADS), which makes distributed decisions in the network under incomplete information. We deploy the UDNM framework in a testbed spanning two cities (≈250 km apart), different standards (IEEE 802.3, IEEE 802.11, and IEEE 802.16e), and various network technologies, such as a wired virtual grid, wireless ad-hoc gateways, and ad-hoc mobile access devices. The UDNM framework integrates management functionalities into the managed devices, proving to be a lightweight and responsive framework. The performance analysis shows that the UDNM framework is feasible for unifying device-management functionalities and for making accurate decisions on top of a real network.
937.
This paper presents a real-time framework that combines depth data and infrared laser speckle pattern (ILSP) images, captured from a Kinect device, for static hand-gesture recognition to interact with CAVE applications. At system startup, background removal and hand-position detection are performed using only the depth map. Tracking then uses the hand positions of the previous frames to seek the hand centroid of the current one. The obtained point serves as the seed for a region-growing algorithm that performs hand segmentation in the depth map. The result is a mask used for hand segmentation in the ILSP frame sequence. Next, we apply motion restrictions for gesture spotting in order to mark each image as "Gesture" or "Non-Gesture". The ILSP counterparts of the frames labeled "Gesture" are enhanced using mask subtraction, contrast stretching, median filtering, and histogram equalization. The result is the input for feature extraction using the scale-invariant feature transform (SIFT), bag-of-visual-words construction, and classification by a multi-class support vector machine (SVM) classifier. Finally, we build a grammar based on the hand-gesture classes to convert the classification results into control commands for the CAVE application. The tests and comparisons performed show that the implemented plugin is an efficient solution. We achieve state-of-the-art recognition accuracy as well as efficient object manipulation in a virtual scene visualized in the CAVE.
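The bag-of-visual-words step above quantizes each local descriptor against a learned vocabulary and summarizes an image as a word histogram, which the SVM then classifies. The sketch below shows only this quantization step with a fixed toy vocabulary (SIFT extraction and the SVM are omitted; all names are illustrative):

```python
import numpy as np

def quantize(descriptors, vocabulary):
    # Assign each descriptor to its nearest visual word (Euclidean).
    d2 = ((descriptors[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def bovw_histogram(descriptors, vocabulary):
    # Normalized histogram of visual-word occurrences for one image.
    words = quantize(descriptors, vocabulary)
    h = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return h / h.sum()
```

In a full pipeline the vocabulary would be learned by clustering (e.g. k-means) over training descriptors, and the histograms would be fed to the multi-class SVM.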
938.
The performance of state-of-the-art speaker verification in uncontrolled environments is affected by different variabilities. Short-duration variability is very common in these scenarios and causes speaker-verification performance to degrade quickly as the duration of verification utterances decreases. Linear discriminant analysis (LDA) is the most common session-variability compensation algorithm; nevertheless, it presents some shortcomings when trained with insufficient data. In this paper we introduce two methods for session-variability compensation that deal with short-length utterances in i-vector space. The first method incorporates the short-duration variability information into the within-class variance estimation process. The second compensates the session and short-duration variabilities in two different spaces with LDA algorithms (2S-LDA). First, we analyze the behavior of the within- and between-class scatters in the first proposed method. Then, both proposed methods are evaluated on telephone sessions from NIST SRE-08 for different durations of the evaluation utterances: full (2.5 min on average), 20, 15, 10, and 5 s. The 2S-LDA method obtains good results under the different short-length utterance conditions, with an average relative EER improvement of 1.58% compared to the best baseline (WCCN[LDA]). Finally, we applied the 2S-LDA method to speaker verification in reverberant environments, using different reverberant conditions from the Reverb Challenge 2013, obtaining improvements of 8.96% and 23% under matched and mismatched reverberant conditions, respectively.
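For readers unfamiliar with the LDA building block used above, here is a minimal numpy sketch of classic LDA on feature vectors: it computes within- and between-class scatter matrices and returns the directions maximizing their ratio. This is the textbook formulation, not the paper's 2S-LDA variant, and the function name is invented:

```python
import numpy as np

def lda_directions(X, y, n_dirs=1):
    """Directions maximizing between-class over within-class scatter
    (leading eigenvectors of pinv(Sw) @ Sb)."""
    X = np.asarray(X, float)
    y = np.asarray(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    return vecs.real[:, order[:n_dirs]]
```

The paper's first method would modify how Sw is estimated (folding in short-duration variability); 2S-LDA would apply two such projections in different spaces.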
939.
On the constraints violation in forward dynamics of multibody systems
It is known that the dynamic equations of motion for constrained mechanical multibody systems are frequently formulated using the Newton–Euler approach, augmented with the acceleration constraint equations. This formulation results in a mixed set of differential and algebraic equations, which are solved to predict the dynamic behavior of general multibody systems. The classical solution of the equations of motion is highly prone to constraint violations because the position and velocity constraint equations are not enforced. In this work, a general and comprehensive methodology to eliminate constraint violations at the position and velocity levels is offered. The basic idea of the described approach is to add corrective terms to the position and velocity vectors so as to satisfy the corresponding kinematic constraint equations. These corrective terms are evaluated as a function of the Moore–Penrose generalized inverse of the Jacobian matrix and of the kinematic constraint equations. The described methodology is embedded in the standard method of solving the equations of motion based on the technique of Lagrange multipliers. Finally, the effectiveness of the methodology is demonstrated through the dynamic modeling and simulation of different planar and spatial multibody systems. The outcomes in terms of constraint violations at the position and velocity levels, conservation of total energy, and computational efficiency are analyzed and compared with those obtained with the standard Lagrange multipliers method, the Baumgarte stabilization method, the augmented Lagrangian formulation, the index-1 augmented Lagrangian formulation, and the coordinate partitioning method.
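The position-level correction described above can be sketched in a few lines: iterate q ← q − J⁺Φ(q), where J⁺ is the Moore–Penrose pseudoinverse of the constraint Jacobian, until the constraint residual vanishes. The example system (a point constrained to the unit circle) and the function names are illustrative, not taken from the paper:

```python
import numpy as np

def correct_positions(q, phi, jac, tol=1e-10, max_iter=20):
    """Project coordinates back onto the constraint manifold:
    q <- q - pinv(J(q)) @ Phi(q), repeated until ||Phi|| < tol."""
    q = np.asarray(q, float)
    for _ in range(max_iter):
        res = np.atleast_1d(phi(q))
        if np.linalg.norm(res) < tol:
            break
        q = q - np.linalg.pinv(np.atleast_2d(jac(q))) @ res
    return q

# Example: point mass constrained to the unit circle.
phi = lambda q: q[0] ** 2 + q[1] ** 2 - 1.0       # Phi(q) = x^2 + y^2 - 1
jac = lambda q: np.array([2 * q[0], 2 * q[1]])    # J = dPhi/dq
```

The velocity-level correction is analogous, with the residual J(q)·q̇ replacing Φ(q).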
940.
In this work, a default revision mechanism is introduced into speculative computation to manage incomplete information. The default revision is supported by a method for generating default constraints based on Bayesian networks. The method enables the generation of an initial set of defaults, which is used to produce the most likely scenarios during the computation, represented by active processes. As facts arrive, the Bayesian network is used to derive new defaults. The objective of this new dynamic mechanism is to keep the active processes coherent with the arrived facts. This is achieved by changing the initial set of default constraints during the reasoning process of speculative computation. A practical example in clinical decision support is described.
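A minimal sketch of the default-generation idea, under the assumption (not stated in the abstract) that a default is simply the most probable value of a variable given the currently observed evidence in the network's conditional probability table:

```python
def most_likely_default(cpt, evidence):
    """Pick the most probable value of a variable given the observed
    parent configuration -- used as the speculative default constraint.

    `cpt` maps an evidence tuple to {value: probability}; both the data
    layout and the function name are assumptions for this sketch."""
    dist = cpt[evidence]
    return max(dist, key=dist.get)
```

As new facts arrive, re-querying with the updated evidence tuple yields revised defaults, keeping the active speculative processes coherent with the observations.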