991.
Dense stereo algorithms are able to estimate disparities at all pixels, including untextured regions. Typically these disparities are evaluated at integer disparity steps, and a subsequent sub-pixel interpolation often fails to propagate smoothness constraints on a sub-pixel level. We propose to increase the sub-pixel accuracy in low-textured regions in four possible ways: First, we present an analysis that shows the benefit of evaluating the disparity space at fractional disparities. Second, we introduce a new disparity smoothing algorithm that preserves depth discontinuities and enforces smoothness on a sub-pixel level. Third, we present a novel stereo constraint (the gravitational constraint) that assumes sorted disparity values in the vertical direction and guides global algorithms to reduce false matches, especially in low-textured regions. Finally, we show how image sequence analysis improves stereo accuracy without explicitly performing tracking. Our goal in this work is to obtain an accurate 3D reconstruction; large-scale 3D reconstruction will benefit heavily from these sub-pixel refinements. Results based on semi-global matching, obtained with the above-mentioned algorithmic extensions, are shown for the Middlebury stereo ground-truth data sets. The presented improvements, called ImproveSubPix, turn out to be one of the top-performing algorithms when the set is evaluated on a sub-pixel level, while being computationally efficient. Additional results are presented for urban scenes. The four improvements are independent of the underlying type of stereo algorithm.
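As a point of reference for the fractional-disparity discussion, the sketch below shows the standard parabola-fit sub-pixel interpolation typically applied after an integer disparity search; it is a generic baseline, not the paper's disparity smoothing algorithm or gravitational constraint.

```python
import numpy as np

def subpixel_disparity(cost, d):
    """Refine an integer disparity d by fitting a parabola through the
    matching costs at d-1, d, d+1 (generic baseline, assuming a 1-D cost
    profile per pixel; not the paper's sub-pixel smoothing algorithm)."""
    c_m, c_0, c_p = cost[d - 1], cost[d], cost[d + 1]
    denom = c_m - 2.0 * c_0 + c_p
    if denom <= 0.0:                     # flat or non-convex profile: keep d
        return float(d)
    offset = 0.5 * (c_m - c_p) / denom   # lies in (-0.5, 0.5) at a minimum
    return d + offset

costs = np.array([9.0, 4.0, 3.0, 5.0, 8.0])  # minimum at integer d = 2
print(subpixel_disparity(costs, 2))          # ~1.83, shifted toward d = 1
```

In untextured regions the cost profile is nearly flat, which is exactly where such a per-pixel fit breaks down and sub-pixel smoothness constraints like the ones proposed above become necessary.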
992.
Shift work situations occur in almost all safety-critical organizations, and the investigations of catastrophes like Chernobyl, the Exxon Valdez, and the Gol/Legacy mid-air collision indicated that shift work information exchange played an important role in the evolution of the situation before the accidents. Inadequate communications during shift changeovers challenged operators' work in the moments that preceded these accidents, because they received inadequate information about the current situation. Our research focuses on the information exchange activities (verbal, written, and nonverbal) of nuclear power plant control operators during shift changeovers. Our aim is to investigate how verbal exchanges and other representations enable operator crews to share information regarding the events that occurred in the previous shift to achieve adequate situation awareness. Our findings indicated the importance and richness of the information exchange during the shift changeover process to update and validate individual and collective situation awareness, showing that adequately shared information enables ad hoc configurations of regulation loops and a safer use of simplified strategies that can be understood and validated by other operators, reducing the occurrence of cognitive overload and contributing to the construction of a common cognitive ground that enhances system resilience. © 2010 Wiley Periodicals, Inc.
993.
Mass-Spring Models (MSMs) are used to simulate the mechanical behavior of deformable bodies such as soft tissues in medical applications. Although they are fast to compute, they lack accuracy, and their design still remains a great challenge. The major difficulties in building realistic MSMs lie in the spring stiffness estimation and the topology identification. In this work, the mechanical behavior of MSMs under tensile loads is analyzed before studying the spring stiffness estimation. In particular, the qualitative and quantitative analysis performed on the behavior of cubical MSMs shows that they have a nonlinear response similar to hyperelastic material models. According to this behavior, a new method for spring stiffness estimation valid for linear and nonlinear material models is proposed. This method adjusts the stress-strain and compressibility curves to a given reference behavior. The accuracy of the MSMs designed with this method is tested against reference soft-tissue simulations based on the nonlinear Finite Element Method (FEM). The obtained results show that MSMs can be designed to realistically model the behavior of hyperelastic materials such as soft tissues and can become an interesting alternative to other approaches such as nonlinear FEM.
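For context, here is a minimal sketch of how an MSM is typically advanced in time (semi-implicit Euler, linear springs). The per-spring stiffnesses `k` are assumed to be given; estimating them from a reference material behavior is precisely the problem the paper addresses.

```python
import numpy as np

def msm_step(x, v, springs, k, rest, m=1.0, dt=1e-3, damping=0.1):
    """One semi-implicit Euler step of a mass-spring model.
    x, v    : (n, 3) positions and velocities of the point masses
    springs : list of (i, j) index pairs defining the topology
    k, rest : per-spring stiffness and rest length (assumed known here)
    """
    f = np.zeros_like(x)
    for s, (i, j) in enumerate(springs):
        d = x[j] - x[i]
        length = np.linalg.norm(d)
        if length > 1e-12:
            fs = k[s] * (length - rest[s]) * (d / length)  # Hooke's law
            f[i] += fs
            f[j] -= fs
    f -= damping * v              # simple viscous damping
    v = v + dt * f / m            # update velocities first,
    x = x + dt * v                # then positions (semi-implicit Euler)
    return x, v
```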
994.
Many flow visualization techniques, especially integration-based methods, are problematic when the measured data exhibit noise and discretization issues. This is particularly the case for flow-sensitive phase-contrast magnetic resonance imaging (PC-MRI) data sets, which record not only anatomic information but also time-varying flow information. We propose a novel approach for the visualization of such data sets using integration-based methods. Our ideas are based upon finite-time Lyapunov exponents (FTLE) and enable identification of vessel boundaries in the data as regions of high separation. This allows us to correctly restrict integration-based visualization to blood vessels. We validate our technique by comparing our approach to existing anatomy-based methods, and we address the benefits and limitations of using FTLE to restrict flow. We also discuss the importance of parameters, i.e., advection length and data resolution, in establishing a well-defined vessel boundary. We extract appropriate flow lines and surfaces that enable the visualization of blood flow within the vessels. We further enhance the visualization by analyzing flow behavior in the seeded region and generating simplified depictions.
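The FTLE measures how strongly neighboring particles separate under advection: given a flow map (the final positions of particles seeded on a grid and advected for time T), one forms the flow-map gradient F, the Cauchy-Green tensor C = FᵀF, and FTLE = ln(sqrt(λmax(C))) / |T|. A minimal 2-D sketch, assuming the flow map has already been computed by particle integration:

```python
import numpy as np

def ftle_field(flow_map, T, h):
    """FTLE from a sampled 2-D flow map.
    flow_map : (ny, nx, 2) array; flow_map[i, j] is the end position of the
               particle seeded at grid node (i, j) after advection time T
    h        : seed-grid spacing
    """
    ny, nx, _ = flow_map.shape
    out = np.zeros((ny, nx))
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            # Flow-map gradient by central differences
            dphi_dx = (flow_map[i, j + 1] - flow_map[i, j - 1]) / (2 * h)
            dphi_dy = (flow_map[i + 1, j] - flow_map[i - 1, j]) / (2 * h)
            F = np.column_stack((dphi_dx, dphi_dy))
            C = F.T @ F                          # right Cauchy-Green tensor
            lam_max = np.linalg.eigvalsh(C)[-1]  # largest eigenvalue
            out[i, j] = np.log(np.sqrt(lam_max)) / abs(T)
    return out
```

In the approach described above, vessel boundaries then appear as ridges of high FTLE, since particles on opposite sides of a wall separate quickly.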
995.
The recent popularity of digital cameras has posed a new problem: how to efficiently store and retrieve the very large number of digital photos captured and chaotically stored in multiple locations without any annotation. This paper proposes an infrastructure, called PhotoGeo, which aims to help users with people annotation, event annotation, and the storage and retrieval of personal digital photos. To achieve this, PhotoGeo uses new algorithms that make it possible to annotate photos with the key metadata that facilitate their retrieval: the people shown in the photo (who); where it was captured (where); the date and time of capture (when); and the event that was captured. The paper concludes with a detailed evaluation of these algorithms.
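A hypothetical sketch of the kind of record such an infrastructure might attach to each photo; the field names are illustrative and not taken from the paper.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class PhotoMetadata:
    """Illustrative (hypothetical) PhotoGeo-style annotation record."""
    path: str                                        # storage location
    people: List[str] = field(default_factory=list)  # who is in the photo
    latitude: Optional[float] = None                 # where it was captured
    longitude: Optional[float] = None
    taken_at: Optional[datetime] = None              # when it was captured
    event: Optional[str] = None                      # which event was captured
```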
996.
To make media resources a prime citizen on the Web, we have to go beyond simply replicating digital media files. The Web is based on hyperlinks between Web resources, and that includes hyperlinking out of resources (e.g., from a word or an image within a Web page) as well as hyperlinking into resources (e.g., fragment URIs into Web pages). To turn video and audio into hypervideo and hyperaudio, we need to enable hyperlinking into and out of them. The W3C Media Fragments Working Group is taking on this challenge, furthering W3C's mission to lead the World Wide Web to its full potential by developing a Media Fragment protocol and guidelines that ensure the long-term growth of the Web. The major contribution of this paper is the introduction of Media Fragments as a media-format-independent, standard means of addressing media resources using URIs. Moreover, we explain how the HTTP protocol can be used and extended to serve Media Fragments and what the impact is for current Web-enabled media formats.
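For concreteness, the W3C Media Fragments URI 1.0 specification addresses sub-parts of a media resource through the URI fragment, e.g. `#t=10,20` for a temporal range in seconds and `#xywh=160,120,320,240` for a spatial region. The sketch below parses the temporal dimension in a simplified way; the full specification also covers npt/clock time formats and the track and id dimensions.

```python
from urllib.parse import urlparse, parse_qs

def temporal_range(uri):
    """Extract a (start, end) temporal range from a Media Fragment URI.
    Simplified sketch: only plain seconds are handled, not npt/clock forms."""
    params = parse_qs(urlparse(uri).fragment)  # fragment uses key=value pairs
    if "t" not in params:
        return None
    parts = params["t"][0].split(",")
    start = float(parts[0]) if parts[0] else 0.0
    end = float(parts[1]) if len(parts) > 1 else None
    return start, end

print(temporal_range("http://example.com/video.mp4#t=10,20"))  # (10.0, 20.0)
```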
997.
Today middleware is much more powerful, more reliable, and faster than it used to be. Nevertheless, for the application developer, the complexity of using middleware platforms has increased accordingly. The volume and variety of application contexts that current middleware technologies have to support require that developers be able to anticipate the widest possible range of execution environments, desired and undesired effects of different programming strategies, handling procedures for runtime errors, and so on. This paper shows how a generic framework designed to evaluate the usability of notations (the Cognitive Dimensions of Notations Framework, or CDN) has been instantiated and used to analyze the cognitive challenges involved in adapting middleware platforms. This human-centric perspective allowed us to achieve novel results compared to existing middleware evaluation research, which is typically centered on system performance metrics. The focus of our study is on the process of adapting middleware implementations, rather than on the end product of this activity. Our main contributions are twofold. First, we describe a qualitative CDN-based method to analyze the cognitive effort made by programmers while adapting middleware implementations. Second, we show how two platforms designed for flexibility have been compared, suggesting that certain programming language design features might be particularly helpful for developers.
998.
This work presents a study of RTP multiplexing schemes, which are compared with the normal use of RTP in terms of the quality experienced by users. Bandwidth saving, latency, and packet loss for the different options are studied, and tests with Voice over IP (VoIP) traffic are carried out in order to compare the quality obtained using different implementations of the router buffer. Voice quality is calculated using the ITU R-factor, a widely accepted quality estimator. The tests show the bandwidth savings of multiplexing and also the importance of packet size for certain buffers, as latency and packet loss may be affected. The improvement in customer experience is measured, showing that multiplexing can be attractive in some scenarios, such as an enterprise with different offices connected via the Internet. The system is also tested using different numbers of samples per packet, and the distribution of the flows into different tunnels is found to be an important factor in achieving an optimal perceived quality for each kind of buffer. Grouping all the flows into a single tunnel will not always be the best solution, as increasing the number of flows does not improve bandwidth efficiency indefinitely. If the buffer penalizes big packets, it is better to group the flows into a number of tunnels. The router's processing capacity has to be taken into account too, as the limit of packets per second it can manage must not be exceeded. The obtained results show that multiplexing is a good way to improve the customer experience of VoIP in scenarios where many RTP flows share the same path.
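A back-of-the-envelope sketch of why multiplexing saves bandwidth and why the saving flattens as flows are added: the shared IP/UDP header is amortized over more voice samples, but the per-sample overhead remains. The header sizes are the standard IPv4/UDP/RTP values; the 2-byte demultiplexing header is a hypothetical placeholder for whatever the concrete scheme uses.

```python
# Standard header sizes in bytes (no header compression assumed)
IP, UDP, RTP = 20, 8, 12

def native_bytes(payload, n_flows):
    """n_flows voice samples, each sent as its own RTP packet."""
    return n_flows * (IP + UDP + RTP + payload)

def muxed_bytes(payload, n_flows, mux_hdr=2):
    """n_flows samples share one IP/UDP header inside a tunnel;
    mux_hdr is a hypothetical per-sample demultiplexing header."""
    return IP + UDP + n_flows * (mux_hdr + RTP + payload)

payload = 20  # e.g. 20 ms of G.729 audio
for n in (1, 5, 20):
    saving = 1 - muxed_bytes(payload, n) / native_bytes(payload, n)
    print(f"{n:2d} flows: {100 * saving:.1f} % saved")
```

With these illustrative numbers, a single flow actually loses a little bandwidth, the saving grows quickly for the first few flows, and it then flattens toward its asymptote, consistent with the observation above that adding flows does not improve efficiency indefinitely.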
999.
In this paper, a new approach to off-line signature verification is proposed, based on two-class classifiers and an ensemble of expert decisions. Different methods to extract sets of local and global features from the target sample are detailed. A normalization-by-confidence voting method is also used in order to decrease the final equal error rate (EER). In one approach, each set of features is processed by a single expert; in the other proposed approach, the decisions of the individual classifiers are combined using weighted votes. Experimental results are given using a subcorpus of the large MCYT signature database for random and skilled forgeries. The results show that the weighted combination significantly outperforms the individual classifiers. The best EERs obtained were 6.3% in the case of skilled forgeries and 2.31% in the case of random forgeries.
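A minimal sketch of the weighted-vote combination stage; the decisions and weights are arbitrary placeholders, and the paper's actual weighting and normalization-by-confidence scheme is not reproduced here.

```python
import numpy as np

def weighted_vote(decisions, weights):
    """Combine binary expert decisions (+1 genuine, -1 forgery) by a
    weighted vote; weights might, e.g., reflect each expert's validation
    accuracy (an assumption, not the paper's exact scheme)."""
    decisions = np.asarray(decisions, dtype=float)
    weights = np.asarray(weights, dtype=float)
    score = np.dot(weights, decisions) / weights.sum()  # in [-1, 1]
    return (1 if score >= 0 else -1), score

label, score = weighted_vote([+1, -1, +1], [0.5, 0.2, 0.3])
print(label, score)  # -> 1, score ~0.6
```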
1000.
Fuzzy rule-based classification systems (FRBCSs) are known for their ability to deal with low-quality data and to obtain good results in such scenarios. However, their application to problems with missing data is uncommon, even though in real-life data mining information is frequently incomplete due to missing attribute values. Several schemes have been studied to overcome the drawbacks produced by missing values in data mining tasks; one of the best known is based on preprocessing, commonly known as imputation. In this work, we focus on FRBCSs, presenting and analyzing 14 different approaches to the treatment of missing attribute values. The analysis involves three different methods, among which we distinguish between Mamdani and TSK models. From the obtained results, the convenience of using imputation methods for FRBCSs with missing values is established. The analysis suggests that each type of FRBCS behaves differently, and that the use of particular imputation methods can improve the accuracy obtained. Thus, the choice of imputation method should be conditioned on the type of FRBCS.
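As a simple illustration of imputation as a preprocessing step, the sketch below replaces each missing value with its attribute mean; mean imputation is one common representative of the family of treatments analyzed above, not necessarily one of the paper's 14 specific approaches.

```python
import numpy as np

def mean_impute(X):
    """Replace NaNs in each column (attribute) with that column's mean."""
    X = np.array(X, dtype=float)
    col_means = np.nanmean(X, axis=0)   # per-attribute means, NaN-aware
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_means[cols]
    return X

X = [[1.0, 2.0],
     [np.nan, 4.0],
     [3.0, np.nan]]
print(mean_impute(X))  # [[1. 2.] [2. 4.] [3. 3.]]
```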