131.
We present a process to automatically generate three-dimensional mesh representations of the complex, arborized cell membrane surface of cortical neurons (the principal information-processing cells of the brain) from nonuniform morphological measurements. Starting from manually sampled morphological points (3D points and diameters) from neurons in a brain slice preparation, we construct a polygonal mesh representation that realistically represents the continuous membrane surface, closely matching the original experimental data. A mapping between the original morphological points and the newly generated mesh enables simulations of electrophysiological activity to be visualized on this new membrane representation. We compare the new mesh representation with the state of the art and present a series of use cases and applications of this technique to visualize simulations of single neurons and networks of multiple neurons.
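The abstract does not spell out how the polygonal surface is built from the sampled points. As a rough illustration only, the Python sketch below joins consecutive morphological samples (a 3D point plus a diameter) with truncated cones; the function names, the ring resolution, and the omission of branch points and soma handling are assumptions, not the authors' method.

```python
import numpy as np

def ring(center, axis, radius, n_seg=12):
    """Vertices of a circle of given radius around `center`, perpendicular to `axis`."""
    axis = axis / np.linalg.norm(axis)
    # pick any vector not parallel to the axis to build an orthonormal frame
    helper = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    angles = np.linspace(0.0, 2.0 * np.pi, n_seg, endpoint=False)
    return np.array([center + radius * (np.cos(a) * u + np.sin(a) * v) for a in angles])

def frustum_mesh(points, diameters, n_seg=12):
    """Join consecutive morphological samples (point, diameter) with truncated cones."""
    points = np.asarray(points, dtype=float)
    radii = np.asarray(diameters, dtype=float) / 2.0
    vertices, faces = [], []
    for i in range(len(points) - 1):
        axis = points[i + 1] - points[i]
        r0 = ring(points[i], axis, radii[i], n_seg)
        r1 = ring(points[i + 1], axis, radii[i + 1], n_seg)
        base = len(vertices)
        vertices.extend(r0); vertices.extend(r1)
        for k in range(n_seg):
            a, b = base + k, base + (k + 1) % n_seg
            c, d = a + n_seg, b + n_seg
            faces.append((a, b, c))   # two triangles per quad of the strip
            faces.append((b, d, c))
    return np.array(vertices), np.array(faces)

# toy dendrite branch: three samples with tapering diameter
verts, tris = frustum_mesh([(0, 0, 0), (0, 0, 5), (0, 2, 9)], [2.0, 1.5, 1.0])
print(verts.shape, tris.shape)   # (48, 3) (48, 3)
```

A real pipeline would additionally have to merge the frusta at branch points and smooth the joints so that the resulting membrane mesh stays watertight and close to the measured data.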
132.
Motion is a key feature for a wide class of computer vision approaches to action recognition. In this article, we show how to define bio-inspired features for action recognition. To do so, we start from a well-established bio-inspired motion model of cortical areas V1 and MT. The primary visual cortex, designated as V1, is the first cortical area encountered in visual stream processing, and the early responses of V1 cells consist of tiled sets of selective spatiotemporal filters. The second cortical area of interest in this article is area MT, where MT cells pool incoming information from V1 according to the shape and characteristics of their receptive fields. To go beyond the classical models, and following the observations of Xiao et al. [61], we propose here to model different surround geometries for the receptive fields of MT cells. We then define the so-called bio-inspired features associated with an input video, based on the average activity of MT cells. Finally, we show how these features can be used in a standard classification method to perform action recognition. Results are given for the Weizmann and KTH databases. Interestingly, we show that the diversity of motion representation at the MT level (different surround geometries) is a major advantage for action recognition. On the Weizmann database, the inclusion of different MT surround geometries improved the recognition rate from 63.01 ± 2.07% up to 99.26 ± 1.66% in the best case. Similarly, on the KTH database, the recognition rate was significantly improved by the inclusion of different MT surround geometries (from 47.82 ± 2.71% up to 92.44 ± 0.01% in the best case). We also discuss the limitations of the current approach, which are closely related to the input video duration. These promising results encourage us to further develop bio-inspired models incorporating other brain mechanisms and cortical areas in order to deal with more complex videos.
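The V1/MT pipeline is only described at a high level above, so the sketch below is a loose, hypothetical illustration rather than the authors' model: V1 is approximated by rectified, direction-selective spatiotemporal gradients, MT by a single excitatory-center / inhibitory-surround spatial pooling, and the feature vector is the average MT activity per preferred direction. All function names and parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def v1_responses(video, n_orient=8):
    """Crude V1 stage: direction-selective responses built from the spatial
    gradient along each preferred orientation times the temporal derivative,
    half-wave rectified."""
    t_deriv = np.gradient(video, axis=0)
    gy = np.gradient(video, axis=1)
    gx = np.gradient(video, axis=2)
    responses = []
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        directional = np.cos(theta) * gx + np.sin(theta) * gy
        responses.append(np.maximum(directional * t_deriv, 0.0))
    return responses  # list of (T, H, W) arrays, one per preferred direction

def mt_features(video, n_orient=8, sigma_center=2.0, sigma_surround=6.0):
    """Crude MT stage: center-surround spatial pooling of V1 activity,
    averaged over space and time to give one value per direction."""
    feats = []
    for r in v1_responses(video, n_orient):
        center = gaussian_filter(r, sigma=(0, sigma_center, sigma_center))
        surround = gaussian_filter(r, sigma=(0, sigma_surround, sigma_surround))
        pooled = np.maximum(center - 0.5 * surround, 0.0)  # excitatory center, inhibitory surround
        feats.append(pooled.mean())
    return np.array(feats)

# toy video: a bright bar drifting to the right
video = np.zeros((16, 32, 32))
for t in range(16):
    video[t, :, 2 + t: 5 + t] = 1.0
print(mt_features(video))  # 8-dimensional bio-inspired descriptor
```

The resulting fixed-length descriptor could then be fed to any standard classifier (for example an SVM), which is the role the bio-inspired features play in the abstract.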
133.
In this paper, we study the sensitivity of centrality metrics as a key metric of social networks to support visual reasoning. As centrality represents the prestige or importance of a node in a network, its sensitivity represents the importance of the relationships between this node and all other nodes in the network. We have derived an analytical solution that extracts the sensitivity as the derivative of centrality with respect to degree for two centrality metrics based on feedback and random walks. We show that these sensitivities are good indicators of the distribution of centrality in the network, and of how changes are expected to propagate if we introduce changes to the network. These metrics also help us simplify a complex network in a way that retains the main structural properties and results in trustworthy, readable diagrams. Sensitivity is also a key concept for uncertainty analysis of social networks, and we show how our approach may help analysts gain insight into the robustness of key network metrics. Through a number of examples, we illustrate the need for measuring sensitivity, and the impact it has on the visualization of and interaction with social and other scale-free networks.
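The paper derives the sensitivity analytically; as a hypothetical numerical stand-in, the sketch below estimates it by finite differences for PageRank (one random-walk-based centrality), perturbing the weight of a single edge as a proxy for a small change in degree. The function name, the choice of networkx, and the karate-club example are assumptions for illustration only.

```python
import networkx as nx

def centrality_sensitivity(G, u, v, eps=1e-3):
    """Finite-difference estimate of d(centrality)/d(weight of edge u-v) for every node."""
    base = nx.pagerank(G, weight="weight")
    H = G.copy()
    w = H[u][v].get("weight", 1.0)
    H[u][v]["weight"] = w + eps          # slightly strengthen one relationship
    perturbed = nx.pagerank(H, weight="weight")
    return {n: (perturbed[n] - base[n]) / eps for n in G}

G = nx.karate_club_graph()
nx.set_edge_attributes(G, 1.0, "weight")
sens = centrality_sensitivity(G, 0, 1)
top = sorted(sens, key=lambda n: abs(sens[n]), reverse=True)[:5]
print({n: round(sens[n], 4) for n in top})  # nodes whose centrality reacts most
```

Nodes with large (positive or negative) sensitivity are the ones whose centrality would shift most if that relationship changed, which is the kind of information the visual-reasoning use case relies on.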
134.
This work presents a study of RTP multiplexing schemes, which are compared with the normal use of RTP in terms of experienced quality. Bandwidth saving, latency, and packet loss for the different options are studied, and tests with Voice over IP (VoIP) traffic are carried out in order to compare the quality obtained using different implementations of the router buffer. Voice quality is calculated using the ITU R-factor, which is a widely accepted quality estimator. The tests show the bandwidth savings of multiplexing, and also the importance of packet size for certain buffers, as latency and packet loss may be affected. The improvement in the customer's experience is measured, showing that the use of multiplexing can be attractive in some scenarios, such as an enterprise with different offices connected via the Internet. The system is also tested using different numbers of samples per packet, and the distribution of the flows into different tunnels is found to be an important factor in achieving an optimal perceived quality for each kind of buffer. Grouping all the flows into a single tunnel will not always be the best solution, as increasing the number of flows does not improve bandwidth efficiency indefinitely. If the buffer penalizes big packets, it is better to group the flows into a number of tunnels. The router's processing capacity also has to be taken into account, as the limit of packets per second it can manage must not be exceeded. The obtained results show that multiplexing is a good way to improve the customer's experience of VoIP in scenarios where many RTP flows share the same path.
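As a back-of-the-envelope illustration of where the bandwidth saving comes from, the sketch below compares the per-interval traffic of N native RTP flows against the same flows multiplexed into one tunnel. The header sizes, the 2-byte per-flow mini-header, and the payload figures are assumptions for illustration; they do not reproduce the multiplexing scheme or the numbers evaluated in the paper.

```python
# Assumed header sizes (bytes); the real savings depend on the multiplexing
# scheme and any header compression used, which the abstract does not detail.
IP_UDP_RTP = 20 + 8 + 12      # native RTP: one full IP/UDP/RTP stack per packet
MUX_HEADER = 2                # assumed per-flow mini-header inside the tunnel

def bandwidth(payload, n_flows, samples_per_packet, multiplexed):
    """Bytes sent per packetization interval for n_flows VoIP flows."""
    voice = payload * samples_per_packet
    if multiplexed:
        # one shared IP/UDP/RTP tunnel header plus a small header per muxed flow
        return IP_UDP_RTP + n_flows * (MUX_HEADER + voice)
    return n_flows * (IP_UDP_RTP + voice)

n, payload, spp = 20, 10, 2   # 20 flows, 10-byte voice samples, 2 samples per packet
native = bandwidth(payload, n, spp, multiplexed=False)
muxed = bandwidth(payload, n, spp, multiplexed=True)
print(f"native {native} B, multiplexed {muxed} B, saving {100 * (1 - muxed / native):.1f}%")
```

The same toy model also shows the trade-off discussed above: multiplexing more flows produces larger packets, so if the buffer penalizes big packets it may pay to spread the flows over several tunnels.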
135.
In this paper, a new approach to off-line signature verification is proposed, based on two-class classifiers and an ensemble of expert decisions. Different methods for extracting sets of local and global features from the target sample are detailed. A normalization-by-confidence voting method is also used in order to decrease the final equal error rate (EER). Each set of features is processed by a single expert and, in the other proposed approach, the decisions of the individual classifiers are combined using weighted votes. Experimental results are given using a subcorpus of the large MCYT signature database for random and skilled forgeries. The results show that the weighted combination significantly outperforms the individual classifiers. The best EERs obtained were 6.3% in the case of skilled forgeries and 2.31% in the case of random forgeries.
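To make the combination step concrete, here is a minimal sketch of a confidence-weighted vote over per-expert decisions. The ±1 encoding, the expert weights, and the confidence values are hypothetical; the paper's exact normalization and weighting are not given in the abstract.

```python
import numpy as np

def weighted_vote(decisions, confidences, weights):
    """Combine per-expert accept/reject decisions (+1 genuine, -1 forgery)
    using confidence-scaled weighted votes; returns final decision and score."""
    decisions = np.asarray(decisions, dtype=float)
    confidences = np.asarray(confidences, dtype=float)
    weights = np.asarray(weights, dtype=float)
    score = np.sum(weights * confidences * decisions) / np.sum(weights)
    return (+1 if score >= 0 else -1), score

# three hypothetical experts, e.g. trained on different local/global feature sets
decision, score = weighted_vote(decisions=[+1, -1, +1],
                                confidences=[0.9, 0.6, 0.7],
                                weights=[1.0, 0.5, 0.8])
print(decision, round(score, 3))   # 1 0.504
```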
136.
Fuzzy rule-based classification systems (FRBCSs) are known for their ability to deal with low-quality data and to obtain good results in such scenarios. However, their application to problems with missing data is uncommon, even though in real-life data mining the available information is frequently incomplete because of missing values in attributes. Several schemes have been studied to overcome the drawbacks produced by missing values in data mining tasks; one of the best known is based on preprocessing and is known as imputation. In this work we focus on FRBCSs, and 14 different approaches to the treatment of missing attribute values are presented and analyzed. The analysis involves three different methods, among which we distinguish between Mamdani and TSK models. From the obtained results, the convenience of using imputation methods for FRBCSs with missing values is established. The analysis suggests that each type behaves differently, and that the use of certain missing-value imputation methods can improve the accuracy obtained by these methods. Thus, the use of particular imputation methods, conditioned on the type of FRBCS, is required.
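The abstract does not say which of the 14 imputation approaches performs best, so the sketch below only illustrates the general preprocessing idea with the simplest common baseline, mean imputation; it is not presented as the paper's recommended method, and the function name is an assumption.

```python
import numpy as np

def impute_mean(X):
    """Baseline imputation: replace each missing value (NaN) by the mean of the
    observed values of that attribute, before training or querying the classifier."""
    X = np.array(X, dtype=float)
    col_means = np.nanmean(X, axis=0)            # per-attribute mean over observed values
    missing = np.isnan(X)
    X[missing] = np.take(col_means, np.nonzero(missing)[1])
    return X

# toy dataset with missing attribute values, imputed before feeding an FRBCS
X = [[5.1, np.nan, 1.4],
     [4.9, 3.0, np.nan],
     [6.2, 3.4, 5.4]]
print(impute_mean(X))
```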
137.
Adaptive anisotropic refinement of finite element meshes allows one to reduce the computational effort required to achieve a specified accuracy in the solution of a PDE problem. We present a new approach to adaptive refinement and demonstrate that it allows one to construct algorithms which generate very flexible and efficient anisotropically refined meshes, even improving the convergence order compared to adaptive isotropic refinement when the problem permits.
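The abstract does not describe the refinement criterion itself; purely as a generic illustration of anisotropic marking, the sketch below chooses a refinement direction per element from hypothetical directional error indicators. The tolerance, the anisotropy ratio, and the string labels are invented for the example and are not the authors' algorithm.

```python
def mark_refinement(directional_errors, tol, anisotropy=4.0):
    """Decide, per element, whether to refine isotropically or only in the
    direction whose error indicator dominates."""
    marks = []
    for eta_x, eta_y in directional_errors:
        if max(eta_x, eta_y) < tol:
            marks.append("keep")
        elif eta_x > anisotropy * eta_y:
            marks.append("refine-x")      # halve the element in x only
        elif eta_y > anisotropy * eta_x:
            marks.append("refine-y")      # halve the element in y only
        else:
            marks.append("refine-xy")     # fall back to isotropic refinement
    return marks

# toy indicators, e.g. from a boundary-layer solution that varies mostly in y
errors = [(0.001, 0.002), (0.003, 0.09), (0.04, 0.05), (0.0005, 0.0004)]
print(mark_refinement(errors, tol=0.01))
# ['keep', 'refine-y', 'refine-xy', 'keep']
```

The payoff of such direction-aware marking is exactly the point of the abstract: strongly anisotropic solution features can be resolved with far fewer elements than isotropic refinement would need.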
138.
Model-based testing is focused on testing techniques which rely on the use of models. The diversity of systems and software to be tested implies the need for research on a variety of models and methods for test automation. We briefly review this research area and introduce several papers selected from the 22nd International Conference on Testing Software and Systems (ICTSS).
139.
Membrane Computing is a discipline aiming to abstract formal computing models, called membrane systems or P systems, from the structure and functioning of living cells, as well as from the cooperation of cells in tissues, organs, and other higher-order structures. This framework provides polynomial-time solutions to NP-complete problems by trading space for time, but its efficient simulation poses challenges in three different aspects: the intrinsic massive parallelism of P systems, the exponential computational workspace, and their non-intensive floating-point nature. In this paper, we analyze the simulation of a family of recognizer P systems with active membranes that solves the Satisfiability problem in linear time on different instances of Graphics Processing Units (GPUs). For efficient handling of the exponential workspace created by the P-system computation, we enable different data policies to increase memory bandwidth and exploit data locality through tiling and dynamic queues. The parallelism inherent to the target P system is also managed to demonstrate that GPUs offer a valid alternative for high-performance computing at a considerably lower cost. Furthermore, scalability is demonstrated up to the largest problem size we were able to run, also considering the new hardware generation from Nvidia, Fermi, for a total speed-up exceeding four orders of magnitude when running our simulations on a Tesla S2050 server.
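The GPU data policies themselves cannot be reconstructed from the abstract, but the space-for-time trade that makes the workspace exponential can be illustrated. The toy Python sketch below mimics, sequentially, what membrane division does for SAT: one membrane per truth assignment, each checking the formula independently. It is a conceptual illustration only, not the simulator described in the paper.

```python
from itertools import product

def p_system_style_sat(clauses, n_vars):
    """Toy, sequential illustration of how a P system with active membranes trades
    space for time on SAT: membrane division yields one membrane per truth
    assignment (exponential workspace), then every membrane checks the clauses."""
    # generation phase: n_vars division steps produce 2**n_vars membranes
    membranes = [dict(zip(range(1, n_vars + 1), bits))
                 for bits in product([False, True], repeat=n_vars)]
    # checking phase: each membrane evaluates the formula on its own assignment
    def satisfied(assignment):
        return all(any(assignment[abs(l)] == (l > 0) for l in clause)
                   for clause in clauses)
    return any(satisfied(m) for m in membranes)

# (x1 or not x2) and (not x1 or x3) and (x2 or x3), literals as signed integers
clauses = [[1, -2], [-1, 3], [2, 3]]
print(p_system_style_sat(clauses, n_vars=3))  # True
```

On a GPU, the checking phase maps naturally onto one thread (or thread block) per membrane, which is where the massive parallelism and the memory-layout concerns mentioned above come in.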
140.
We discuss how standard Cost-Benefit Analysis (CBA) should be modified in order to take risk (and uncertainty) into account. We propose different approaches used in finance (Value at Risk, Conditional Value at Risk, Downside Risk Measures, and the Efficiency Ratio) as useful tools for modelling the impact of risk in project evaluation. After introducing the concepts, we show how they could be used in CBA and provide some simple examples to illustrate how such concepts can be applied to evaluate the desirability of a new infrastructure project.
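As a simple, hypothetical illustration of how such risk measures attach to a CBA, the sketch below estimates the Value at Risk and Conditional Value at Risk of a project's net present value by Monte Carlo. The cash-flow distribution, discount rate, and confidence level are invented for the example and are not taken from the paper.

```python
import numpy as np

def npv_var_cvar(cash_flow_draws, rate=0.05, alpha=0.95):
    """Monte Carlo estimate of the mean NPV, Value at Risk and Conditional Value
    at Risk of a project. `cash_flow_draws` has shape (n_scenarios, n_years)."""
    years = np.arange(cash_flow_draws.shape[1])
    npv = (cash_flow_draws / (1.0 + rate) ** years).sum(axis=1)  # discounted sum per scenario
    var = -np.quantile(npv, 1.0 - alpha)      # loss not exceeded with probability alpha
    cvar = -npv[npv <= -var].mean()           # expected loss in the worst (1 - alpha) tail
    return npv.mean(), var, cvar

rng = np.random.default_rng(0)
# hypothetical project: a 100-unit investment in year 0, uncertain benefits in years 1-5
benefits = rng.normal(25.0, 10.0, size=(10_000, 5))
draws = np.column_stack([np.full(10_000, -100.0), benefits])
mean_npv, var95, cvar95 = npv_var_cvar(draws)
print(f"E[NPV] = {mean_npv:.1f}, VaR(95%) = {var95:.1f}, CVaR(95%) = {cvar95:.1f}")
```

A project with a positive expected NPV but a large CVaR may still be undesirable, which is precisely the kind of trade-off these finance-style measures make visible in a CBA.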