991.
A new detection algorithm is proposed for recognizing lane markings on night-time expressways. The image is preprocessed with an improved median filter and a four-direction-template logarithmic Prewitt edge detector based on optical-density differences, yielding a binarization threshold T. Restricting processing to a region of interest (ROI) and to the left and right lane lines greatly reduces the computational load; false lanes are rejected using the known lane-line width, and a temporary-trajectory strategy suppresses interference from lighting changes and vehicles ahead, finally producing complete, clear lane lines. Experiments show that the lane detection method is fast, stable and accurate.
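The abstract's improved four-direction template is not reproduced in the paper's abstract; the following is a minimal sketch of the classical two-direction Prewitt step plus thresholding, assuming a grayscale image as a NumPy array. Kernel choice and threshold are illustrative, not the authors' values.

```python
import numpy as np

def prewitt_edges(img):
    """Apply horizontal and vertical Prewitt kernels and return the
    gradient magnitude. A plain two-direction sketch; the paper uses an
    improved four-direction, optical-density-based template."""
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    padded = np.pad(img.astype(float), 1, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 3, x:x + 3]
            gx = np.sum(patch * kx)  # horizontal gradient
            gy = np.sum(patch * ky)  # vertical gradient
            out[y, x] = np.hypot(gx, gy)
    return out

def binarize(edge_map, threshold):
    """Threshold the edge map to a binary lane-candidate mask."""
    return (edge_map >= threshold).astype(np.uint8)
```

In the paper's pipeline the threshold T is derived from the preprocessing itself; here it would simply be passed to `binarize`.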
992.
Microsoft Office software is widely used in daily work. Our organization makes extensive use of Office software in engineering projects for data processing and exchange; analyzing and mining this raw data with appropriate methods can greatly shorten design and test cycles and reduce costs. Because the temporary data produced by projects is often stored in different formats across various office applications, and its volume is huge, a program that can control the Office suite is needed to collect and output the raw information. This paper presents an implementation that uses the C++-based Qt framework together with COM and ODBC to acquire and process the data and output it to a database or an XML document.
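The paper's Qt/COM/ODBC pipeline cannot be reproduced portably here; as a stand-in, this sketch shows only the final step the abstract describes, serializing collected rows to an XML document, using the standard library. The tag names `records`/`record` are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

def rows_to_xml(rows, root_tag="records", row_tag="record"):
    """Serialize a list of dict rows (e.g. values pulled from Office
    documents) into an XML document string. Tag names are illustrative."""
    root = ET.Element(root_tag)
    for row in rows:
        rec = ET.SubElement(root, row_tag)
        for key, value in row.items():
            field = ET.SubElement(rec, key)  # one element per field
            field.text = str(value)
    return ET.tostring(root, encoding="unicode")
```

In the real system the `rows` would come from COM automation of Office and ODBC queries rather than in-memory dicts.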
993.
Risk management is becoming increasingly important for railway companies in order to safeguard their passengers and employees while improving safety and reducing maintenance costs. However, in many circumstances the application of probabilistic risk analysis tools may not give satisfactory results, because the risk data are incomplete or carry a high level of uncertainty. This article presents the development of a risk management system for railway risk analysis using a fuzzy reasoning approach and a fuzzy analytical hierarchy decision-making process. In the system, the fuzzy reasoning approach (FRA) is employed to estimate the risk level of each hazardous event in terms of failure frequency, consequence severity and consequence probability, which allows imprecise or approximate information to be used in the risk analysis. The fuzzy analytical hierarchy process (fuzzy-AHP) technique is then incorporated into the risk model, exploiting its strength in determining the relative importance of risk contributions so that the assessment can progress from the hazardous-event level to the hazard-group level and finally to the railway-system level. The system can evaluate both qualitative and quantitative risk data and information associated with a railway system effectively and efficiently, providing railway risk analysts, managers and engineers with a method and tool to improve their safety management of railway systems and set safety standards. A case study on the risk assessment of shunting at Hammersmith depot illustrates the application of the proposed system.
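A toy illustration of fuzzy reasoning over the three inputs the abstract names (failure frequency, consequence severity, consequence probability). The membership shapes, the two rules, and the weighted-average defuzzification are all invented for this sketch; the paper's actual rule base is not given in the abstract.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_level(freq, severity, prob):
    """Crude fuzzy-reasoning sketch: each input in [0, 1] is fuzzified
    into 'low'/'high', rule strengths are combined with min, and the
    result is defuzzified as a weighted average of rule outputs
    (0 = low risk, 1 = high risk). Shapes and rules are invented here."""
    low = lambda x: tri(x, -1.0, 0.0, 1.0)
    high = lambda x: tri(x, 0.0, 1.0, 2.0)
    rules = [
        (min(low(freq), low(severity), low(prob)), 0.0),    # all low -> low risk
        (min(high(freq), high(severity), high(prob)), 1.0), # all high -> high risk
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.5
```

A full FRA/fuzzy-AHP system would add many more rules and then weight hazardous events by AHP-derived importances.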
994.
Productive wetland systems at land-water interfaces that provide unique ecosystem services are challenging to study because of water dynamics, complex surface cover and constrained field access. We applied object-based image analysis and supervised classification to four 32-m Beijing-1 microsatellite images to examine broad-scale surface cover composition and its change during the November 2007 - March 2008 low-water season at Poyang Lake, the largest freshwater lake-wetland system in China (>4000 km²). We propose a novel method for semi-automated selection of training objects in this heterogeneous landscape using extreme values of spectral indices (SIs) estimated from satellite data. Dynamics of the major wetland cover types (Water, Mudflat, Vegetation and Sand) were investigated both as transitions among primary classes based on maximum membership value, and as changes in memberships to all classes even where the primary class did not change. Fuzzy classification accuracy was evaluated as match frequencies between the classification outcome and (a) the best reference candidate class (MAX function) and (b) any acceptable reference class (RIGHT function). MAX-based accuracy was relatively high for Vegetation (≥90%), Water (≥82%), Mudflat (≥76%) and the smallest-area class, Sand (≥75%), in all scenes; these scores improved with the RIGHT function to 87-100%. Classification uncertainty, assessed as the proportion of fuzzy object area within a class at a given fuzzy threshold value, was highest for all classes in November 2007, and consistently higher for Mudflat than for other classes in all scenes. Vegetation was the dominant class in all scenes, occupying 41.2-49.3% of the study area. Object memberships to Vegetation mostly declined from November 2007 to February 2008 and increased substantially only in February-March 2008, possibly reflecting growing-season conditions and grazing. The spatial extent of Water both declined and increased during the study period, reflecting precipitation and hydrological events. The "fuzziest" Mudflat class was involved in the major detected transitions among classes and declined in classification accuracy by March 2008, representing a key target for finer-scale research. Future work should introduce Vegetation sub-classes reflecting differences in phenology and alternative methods to discriminate Mudflat from other classes. Results can be used to guide field sampling and top-down landscape analyses in this wetland.
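The semi-automated training-object selection idea, taking objects at the extreme tails of a spectral-index distribution as confident class exemplars, can be sketched as follows. The index (NDVI) and the percentile cut-offs are illustrative assumptions; the paper uses several spectral indices and its own selection rules.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index (small epsilon avoids 0/0)."""
    return (nir - red) / (nir + red + 1e-9)

def extreme_training_mask(index, low_pct=5, high_pct=95):
    """Pick candidate training pixels/objects from the tails of a
    spectral-index distribution: values below the low percentile
    (e.g. water, for NDVI) and above the high percentile (dense
    vegetation). Percentile cut-offs here are illustrative."""
    lo = np.percentile(index, low_pct)
    hi = np.percentile(index, high_pct)
    return index <= lo, index >= hi
```

The two masks would seed supervised classification; mid-range values are left for the classifier to decide.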
995.
The approach of using primarily satellite observations to estimate ecosystem gross primary production (GPP), without resorting to interpolation of many surface observations, has recently shown promising results. Previous work has shown that the remote-sensing-based greenness and radiation (GR) model can give accurate GPP estimates in crops. However, the feasibility of applying and calibrating the model in other ecosystems remains unknown. Using the enhanced vegetation index (EVI) derived from Moderate Resolution Imaging Spectroradiometer (MODIS) images and surface-based estimates of photosynthetically active radiation (PAR), we analyze the GR model for estimating monthly GPP using flux measurements at fifteen sites, representing a wide range of ecosystems with various canopy structures and climate characteristics. Results demonstrate that the GR model provides better GPP estimates than the temperature and greenness (TG) model for the overall data, classified into non-forest (NF), deciduous forest (DF) and evergreen forest (EF) sites. Calibration of the GR model shows reasonable results for all sites, with a root mean square error of 47.18 g C/m²/month. The different coefficients obtained for the three plant functional types indicate shifts in the relative importance of the factors that determine monthly vegetation GPP. The analysis is the first to show the potential of the GR model for estimating GPP across biomes, while also pointing to the need for further consideration in future operational applications.
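The calibration the abstract describes amounts to fitting model coefficients against flux-tower GPP. This sketch assumes the simplest greenness-and-radiation form, GPP ≈ a·(EVI·PAR) + b, fitted by ordinary least squares; the paper's exact GR formulation and its per-biome coefficients are not reproduced here.

```python
import numpy as np

def fit_gr(evi, par, gpp):
    """Fit GPP = a * (EVI * PAR) + b by ordinary least squares.
    The functional form is an assumption standing in for the GR model."""
    x = np.asarray(evi) * np.asarray(par)
    A = np.column_stack([x, np.ones_like(x)])  # design matrix [x, 1]
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(gpp), rcond=None)
    return a, b

def predict_gr(evi, par, a, b):
    """Predict monthly GPP from EVI and PAR with fitted coefficients."""
    return a * np.asarray(evi) * np.asarray(par) + b
```

Per the abstract, such a fit would be repeated separately for NF, DF and EF sites, yielding different coefficients per plant functional type.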
996.
This paper treats integral multi-commodity flow through a network. To enhance the quality of service (QoS) of channels, it is necessary to minimize delay and congestion. End-to-end delay and bandwidth consumption across channels are interdependent and lead to very complex mathematical formulations. To address this problem, a multi-commodity flow model is introduced that minimizes delay and congestion in a single model. The flow through the network, such as packets, must also take integral values. A model covering these requirements is NP-hard, yet transmission strategies must be found in real time. To this end, we develop a cooperative algorithm combining traditional mathematical programming (path enumeration) with a meta-heuristic (a genetic algorithm). To find integral solutions satisfying node demands, we generalize a hybrid genetic algorithm to assign the integral commodities where they are needed. In this hybrid algorithm, we use a feasible encoding and preserve the feasibility of chromosomes across iterations. On a set of random networks, we show that the proposed algorithm yields reasonable results within few iterations. Because the algorithm accommodates a wide range of objective functions in terms of delay and congestion, it can find routes with high QoS for each commodity. These properties make the presented model and algorithm applicable to a variety of settings in computer networks and transportation systems, decreasing congestion and increasing channel utilization.
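The path-enumeration half of the hybrid scheme can be sketched with a plain depth-first search over simple paths; the genetic-algorithm half (choosing integral flows over the enumerated paths) is not reproduced. The adjacency-dict representation is an assumption for illustration.

```python
def enumerate_paths(adj, src, dst):
    """Enumerate all simple paths from src to dst in a directed graph
    given as an adjacency dict {node: [successors]}. This is the
    'traditional mathematical programming' ingredient the paper pairs
    with a genetic algorithm; exponential in the worst case."""
    paths, stack = [], [(src, [src])]
    while stack:
        node, path = stack.pop()
        if node == dst:
            paths.append(path)
            continue
        for nxt in adj.get(node, []):
            if nxt not in path:  # keep paths simple (no repeated nodes)
                stack.append((nxt, path + [nxt]))
    return paths
```

A GA chromosome would then encode, per commodity, which enumerated path carries how many integral units of flow.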
997.
Query matching over XML streams is challenging when the volume of queried stream data is huge and data arrives continuously. In this paper, the Syntactic Twig-Query Matching (STQM) method is proposed to process queries on an XML stream and return query results continuously and immediately. STQM matches twig queries on the XML stream syntactically, using a lexical analyzer and a parser built by our lexical-rule and grammar-rule generators from the user's queries and the document schema, respectively. For query matching, the lexical analyzer scans the incoming XML stream while the parser recognizes XML structures, retrieving every twig-query result from the stream. Moreover, STQM obtains query results without a post-processing phase to exclude false positives, which are common in many streaming query methods. Experimental results show that STQM matches twig queries efficiently and scales well both in queried data size and in the branch degree of the twig query. The proposed method takes less execution time than a sequence-based approach, which is widely accepted as a proper solution to XML stream querying.
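The core idea, recognizing query matches while scanning the stream rather than building a DOM, can be illustrated with a toy tag-stack matcher for a simple root-to-leaf path. This is a stand-in for STQM's generated lexer/parser: it handles no attributes, namespaces, self-closing tags, or branching (twig) queries.

```python
import re

def match_path_stream(xml_text, path):
    """Count matches of a root-to-leaf path query like 'a/b/c' while
    scanning the XML text once, keeping only a stack of open tags.
    Toy sketch: assumes well-formed markup without self-closing tags."""
    target = path.split("/")
    stack, hits = [], 0
    for slash, name in re.findall(r"<(/?)(\w+)", xml_text):
        if slash:            # closing tag: pop the matching open tag
            if stack:
                stack.pop()
        else:                # opening tag: push and test the query
            stack.append(name)
            if stack == target:
                hits += 1
    return hits
```

STQM generalizes this single hard-coded path to full twig patterns via generated grammar rules, and emits results rather than counts.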
998.
Schema integration aims to create a mediated schema as a unified representation of existing heterogeneous sources sharing a common application domain. These sources have been increasingly written in XML due to its versatility and expressive power. Unfortunately, they often use different elements and structures to express the same concepts and relations, thus causing substantial semantic and structural conflicts. Such a challenge impedes the creation of high-quality mediated schemas and has not been adequately addressed by existing integration methods. In this paper, we propose a novel method, named XINTOR, for automating the integration of heterogeneous schemas. Given a set of XML sources and a set of correspondences between the source schemas, our method aims to create a complete and minimal mediated schema: it completely captures all of the concepts and relations in the sources without duplication, provided that the concepts do not overlap. Our contributions are fourfold. First, we resolve structural conflicts inherent in the source schemas. Second, we introduce a new statistics-based measure, called path cohesion, for selecting concepts and relations to be part of the mediated schema; path cohesion is computed statistically from multiple path-quality dimensions such as average path length and path frequency. Third, we resolve semantic conflicts by augmenting the semantics of similar concepts with context-dependent information. Finally, we propose a novel double-layered mediated schema to retain a wider range of concepts and relations than existing mediated schemas, which are at best either complete or minimal, but not both. Experiments on both real and synthetic datasets show that XINTOR outperforms existing methods with respect to (i) mediated-schema quality, using precision, recall, F-measure, and schema minimality; and (ii) execution performance, based on execution time and scale-up behavior.
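Path cohesion is described only as a statistic over path-quality dimensions such as path frequency and average path length; the concrete combining formula below (relative frequency divided by path length) is invented for illustration and is not XINTOR's actual definition.

```python
from collections import Counter

def path_cohesion(paths):
    """Score each distinct schema path (a tuple of tags) by relative
    frequency divided by its length: frequent, short paths score high.
    An illustrative stand-in for XINTOR's statistics-based measure,
    which weights several path-quality dimensions."""
    counts = Counter(paths)
    total = sum(counts.values())
    return {p: (c / total) / len(p) for p, c in counts.items()}
```

In the mediated-schema construction, higher-scoring paths would be preferred when choosing which concepts and relations to keep.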
999.
XQuery is a query and functional programming language that is designed for querying the data in XML documents. This paper addresses how to efficiently query encrypted XML documents using XQuery, the key point being how to eliminate redundant decryption so as to accelerate the querying process. We propose a processing model that can automatically translate XQuery statements for encrypted XML documents. The implementation and experimental results demonstrate the practicality of the proposed model.
1000.
Cluster validity indices are used to validate clustering results and to find the set of clusters that best fits the natural partitions of a given data set. Most previous validity indices depend considerably on the number of data objects in clusters, on cluster centroids and on average values, and tend to ignore small clusters and clusters of low density. Two cluster validity indices are proposed for efficient validation of partitions containing clusters that differ widely in size and density. The first proposed index exploits a compactness measure and a separation measure; the second is based on an overlap measure and a separation measure. The compactness and overlap measures are calculated from a few data objects of a cluster, while the separation measure uses all data objects. The compactness measure is calculated only from data objects of a cluster that are far enough from the cluster centroids, while the overlap measure is calculated from data objects that are near enough to one or more other clusters. A good partition is expected to have a low degree of overlap, a larger separation distance and high compactness. The maximum value of the ratio of compactness to separation and the minimum value of the ratio of overlap to separation indicate the optimal partition. Testing both proposed indices on artificial data sets and three well-known real data sets showed their effectiveness and reliability.
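A minimal sketch of the compactness-versus-separation idea: compactness from only the boundary objects of each cluster, separation from centroid distances. The 30% boundary fraction and the "lower spread-to-separation ratio is better" convention are assumptions for this sketch; the paper's own definitions and sign conventions differ.

```python
import math

def compactness(cluster, centroid, frac=0.3):
    """Mean distance of the `frac` farthest-from-centroid members; the
    proposed indices likewise use only a few boundary objects, not all
    points. The fraction is illustrative."""
    d = sorted(math.dist(p, centroid) for p in cluster)
    k = max(1, int(len(d) * frac))
    return sum(d[-k:]) / k

def separation(centroids):
    """Minimum pairwise distance between cluster centroids."""
    return min(math.dist(a, b)
               for i, a in enumerate(centroids) for b in centroids[i + 1:])

def validity_ratio(clusters, centroids):
    """Mean boundary spread divided by separation: lower means tighter,
    better-separated clusters under this sketch's convention."""
    comp = sum(compactness(c, m)
               for c, m in zip(clusters, centroids)) / len(clusters)
    return comp / separation(centroids)
```

Comparing the ratio across candidate partitions (e.g. different numbers of clusters) would then select the best one.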