A total of 5,200 results were found (search time: 20 ms).
61.

Mapping vulnerability to Saltwater Intrusion (SWI) in coastal aquifers is studied in this paper using the GALDIT framework, with the novelty of transforming the concept of vulnerability indexing into risk indexing. GALDIT is the acronym of six data layers that are combined to express the vulnerability of freshwater aquifers to the intrusion of saltwater. It is a scoring system of prescribed rates, which account for local variations, and prescribed weights, which account for the relative importance of each data layer; both suffer from subjectivity. A further novelty of the paper is to learn the rate values with fuzzy logic and the weight values with catastrophe theory, implemented together as the Fuzzy-Catastrophe Scheme (FCS). The GALDIT data layers are divided into two groups, Passive Vulnerability Indices (PVI) and Active Vulnerability Indices (AVI), whose sum is the Total Vulnerability Index (TVI), equivalent to GALDIT. Two additional data layers (pumping and water-table decline) are introduced to serve as the Risk Actuation Index (RAI). The product of TVI and RAI yields the Risk Index (RI). The paper applies these new concepts to a study area subject to groundwater decline and a possible saltwater intrusion problem. The results provide a proof of concept for PVI, AVI, RAI and RI by studying their correlation with groundwater quality samples using the fraction of saltwater (fsea), Groundwater Quality Indices (GQI) and the Piper diagram. Significant correlations between the appropriate values are found, providing new insight into the study area.
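The index arithmetic described above (weighted layer rates, TVI = PVI + AVI, RI = TVI × RAI) can be sketched as follows; all layer rates and weights here are hypothetical placeholders, not values learned by the paper's Fuzzy-Catastrophe Scheme.

```python
# Illustrative sketch of the indexing arithmetic; the rates and weights
# below are made-up placeholders, not the study's learned FCS values.

def weighted_index(rates, weights):
    """Weighted average of layer rates, GALDIT-style scoring."""
    return sum(r * w for r, w in zip(rates, weights)) / sum(weights)

# Passive layers -- hypothetical rates (0-10 scale) and weights
pvi = weighted_index(rates=[7.5, 5.0, 2.5], weights=[1, 3, 4])
# Active layers -- hypothetical values
avi = weighted_index(rates=[10.0, 7.5, 5.0], weights=[4, 4, 2])

tvi = pvi + avi                            # Total Vulnerability Index
rai = weighted_index([5.0, 7.5], [1, 1])   # Risk Actuation Index (pumping, decline)
ri = tvi * rai                             # Risk Index = TVI x RAI
```

The same scoring template applies per grid cell of the study area, producing a risk map rather than a single scalar.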

  Similar articles
62.

In this paper, a new fuzzy group decision-making methodology which determines and incorporates the negotiation powers of decision makers is developed. The proposed method is based on a combination of interval type-2 fuzzy sets and a multi-criteria decision-making (MCDM) model, namely TOPSIS. To examine the applicability of the proposed methodology, it is used for finding the best scenario for allocating water and reclaimed wastewater to the domestic, agricultural, and industrial water sectors and restoring groundwater quantity and quality in the Varamin region, located in the Tehran metropolitan area in Iran. The results show that the selected scenario leads to an acceptable groundwater conservation level over a long-term planning horizon. Although the capital cost of this scenario, which restores groundwater over the 34-year planning horizon, is high, it is determined to be the best allocation scenario. This scenario also entails the second-lowest pumping cost, due to less water being allocated from the groundwater. To evaluate the results of the proposed methodology, they are compared with those obtained using well-known interval type-2 decision-making approaches, including arithmetic-based, TOPSIS-based, and likelihood-based comparison methods. The Spearman correlation coefficient shows that the obtained results generally concur with those of the other methods. It is also concluded that the proposed methodology gives more reasonable results by calculating and considering the negotiation powers of decision makers in an extended TOPSIS-based group decision-making model.
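The crisp TOPSIS core that the methodology extends can be sketched as follows; the decision matrix, weights and criterion directions are invented for illustration, and the paper's interval type-2 fuzzy extension and negotiation powers are omitted.

```python
import math

# Minimal crisp TOPSIS sketch. The paper wraps this core in interval type-2
# fuzzy sets and negotiation powers; the matrix and weights below are made up.

def topsis(matrix, weights, benefit):
    # vector-normalize each criterion column, then apply weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix))
             for j in range(len(weights))]
    v = [[weights[j] * row[j] / norms[j] for j in range(len(weights))]
         for row in matrix]
    # ideal / anti-ideal points depend on whether a criterion is a benefit
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)
        d_neg = math.dist(row, anti)
        scores.append(d_neg / (d_pos + d_neg))  # closeness coefficient
    return scores

# 3 hypothetical allocation scenarios x 3 criteria (cost, conservation, reliability)
m = [[4.0, 7.0, 6.0], [6.0, 9.0, 7.0], [5.0, 5.0, 8.0]]
scores = topsis(m, weights=[0.3, 0.5, 0.2], benefit=[False, True, True])
best = max(range(len(scores)), key=scores.__getitem__)
```

The closeness coefficient ranks scenarios; the fuzzy extension replaces the crisp entries with interval type-2 membership functions before this ranking step.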

  Similar articles
63.
Automation of deburring and cleaning of castings is desirable for many reasons, chief among them dangerous working conditions, difficulty in finding workers for cleaning sections, and improved profitability. A suitable robot cell capable of using different tools, such as cup grinders, disc grinders and rotary files, is the solution. The robot should be equipped with sensors in order to keep the quality of the cleaned surface at an acceptable level. Although using sensors simplifies both the programming and the quality control, other problems still need to be solved. These involve the selection of machining data, e.g. the feed rate and the grinding force in a force-controlled operation, based on parameters such as tool type (e.g. a disc grinder) and geometry. In order to decrease the programming time, a process model for disc grinders has been developed. This article investigates this process model and pays attention to problems such as wavy or burned surfaces and the effect of the robot's repetition accuracy on the results obtained. Many aspects treated in this article are quite general and can be applied to other types of grinding operations.  Similar articles
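A process model of this kind can be caricatured as a feed-rate selection rule that respects a surface-burn limit; every constant, the burn-limit table and the function below are invented for illustration and are not from the article.

```python
# Hypothetical sketch of the kind of process model discussed: choose a feed
# rate from tool type and normal force so that power density stays below a
# burn threshold. All constants are illustrative, not from the article.

BURN_LIMIT = {"cup": 40.0, "disc": 55.0, "rotary_file": 30.0}  # W/mm^2, made up

def feed_rate(tool, force_n, k_removal=0.8, width_mm=10.0):
    """Feed rate (mm/s) capped so the contact power density avoids burning."""
    # assumed model: power density ~ k_removal * force * feed / width
    cap = BURN_LIMIT[tool] * width_mm / (k_removal * force_n)
    return min(cap, 25.0)            # also respect an assumed robot speed limit

f = feed_rate("disc", force_n=30.0)
```

Such a rule lets the programmer pick machining data from the tool and force setpoint instead of trial runs, which is the time saving the article targets.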
64.
In persistent homology, the persistence barcode encodes pairs of simplices that mark the birth and death of homology classes. Persistence barcodes depend on the ordering of the simplices (called a filter) of the given simplicial complex. In this paper, we define the notion of a “minimal” barcode in terms of entropy. Starting from a given filtration of a simplicial complex K, we detail an algorithm for computing a “proper” filter (a total ordering of the simplices that preserves the partial ordering imposed by the filtration while achieving a persistence barcode with small entropy), by way of the computation, and subsequent modification, of maximum matchings on subgraphs of the Hasse diagram associated to K. Examples demonstrating the utility of computing such a proper ordering on the simplices are given.  Similar articles
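The entropy in which "minimal" is measured can be illustrated directly: the persistence entropy of a barcode is the Shannon entropy of its normalized interval lengths. The barcode below is a made-up example, not one from the paper.

```python
import math

# Persistence entropy of a barcode: Shannon entropy of the interval lengths
# normalized to a probability distribution. Example intervals are made up.

def persistence_entropy(barcode):
    lengths = [d - b for b, d in barcode if d > b]
    total = sum(lengths)
    probs = [l / total for l in lengths]
    return -sum(p * math.log2(p) for p in probs)

bars = [(0.0, 4.0), (1.0, 3.0), (2.0, 4.0)]   # three finite (birth, death) bars
h = persistence_entropy(bars)
```

Two filters of the same complex can yield barcodes with different entropies; the paper's algorithm searches among filtration-compatible total orders for one with small entropy.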
65.
66.
Wireless Multimedia Sensor Networks (WMSNs) consist of networks of interconnected devices that retrieve multimedia content, such as video, audio, acoustic, and scalar data, from the environment. The goal of these networks is optimized delivery of multimedia content based on quality-of-service (QoS) parameters such as delay, jitter and distortion. In multimedia communications each packet has a strict playout deadline, so late-arriving packets and lost packets are treated equally. It is a challenging task to guarantee soft delay deadlines along with energy minimization in resource-constrained, high-data-rate WMSNs. The conventional layered approach does not provide an optimal solution for guaranteeing soft delay deadlines, due to the large amount of overhead involved at each layer. The cross-layer approach is fast gaining popularity due to its ability to exploit the interdependence between different layers to guarantee QoS constraints such as latency, distortion, reliability, throughput and error rate. This paper presents a channel utilization and delay aware routing (CUDAR) protocol for WMSNs. The protocol is based on a cross-layer approach and provides soft end-to-end delay guarantees along with efficient utilization of resources. Extensive simulation analysis shows that CUDAR provides better delay guarantees than existing protocols and consequently reduces jitter and distortion in WMSN communication.  Similar articles
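A CUDAR-style forwarding decision might look like the sketch below; the cost ordering (utilization first, delay as tie-breaker) and the data shapes are assumptions made for illustration, not the protocol's actual specification.

```python
# Hypothetical sketch of a channel-utilization- and delay-aware next-hop
# choice: among neighbors whose link delay fits the remaining playout
# budget, prefer the least-utilized channel. Not the protocol's actual rule.

def pick_next_hop(neighbors, delay_budget_ms):
    """neighbors: list of (node_id, est_delay_ms, channel_utilization in [0, 1])."""
    feasible = [n for n in neighbors if n[1] <= delay_budget_ms]
    if not feasible:
        return None                  # packet would miss its playout deadline
    # prefer low channel utilization; break ties with lower delay
    return min(feasible, key=lambda n: (n[2], n[1]))[0]

hop = pick_next_hop([("a", 40, 0.9), ("b", 25, 0.3), ("c", 60, 0.1)], 50)
```

Treating deadline misses as losses (returning `None` and dropping early) is what makes late packets equivalent to lost ones in the energy accounting.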
67.
A bipartite state is classical with respect to party A if and only if party A can perform nondisruptive local state identification (NDLID) by a projective measurement. Motivated by this, we introduce a class of quantum correlation measures for an arbitrary bipartite state. The measures utilize the general Schatten p-norm to quantify the amount of departure from the necessary and sufficient condition of classicality of correlations provided by the concept of NDLID. We show that for the case of the Hilbert–Schmidt norm, i.e., \(p=2\), a closed formula is available for an arbitrary bipartite state. The reliability of the proposed measures is checked from the information-theoretic perspective. The monotonicity behavior of these measures under LOCC is also exemplified. The results reveal that for general pure bipartite states these measures have an upper bound which is an entanglement monotone in its own right. This enables us to introduce a new measure of entanglement, for a general bipartite state, by the convex roof construction. Some examples and comparisons with other quantum correlation measures are also provided.  Similar articles
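The \(p=2\) case admits a direct computation: the Hilbert–Schmidt distance between a two-qubit state and its version dephased on party A under one fixed projective measurement. The paper's measure involves an optimization over A's measurements; this fixed computational-basis example only illustrates the Schatten-2 norm arithmetic on a made-up state.

```python
import math

# Sketch of the p = 2 (Hilbert-Schmidt) departure from classicality for a
# two-qubit state, dephased on party A in a *fixed* computational basis.
# The paper's measure optimizes over A's projective measurements; this
# example state and basis are illustrative only.

def hs_norm(m):
    """Schatten 2-norm (Frobenius norm) of a real matrix."""
    return math.sqrt(sum(x * x for row in m for x in row))

def dephase_A(rho):
    """Zero out coherences between A's basis states (4x4 matrix, A x B order)."""
    out = [row[:] for row in rho]
    for i in range(4):
        for j in range(4):
            if (i // 2) != (j // 2):     # entries linking different A blocks
                out[i][j] = 0.0
    return out

# rho = |+><+|_A tensor |0><0|_B  (all entries real)
rho = [[0.5, 0.0, 0.5, 0.0],
       [0.0, 0.0, 0.0, 0.0],
       [0.5, 0.0, 0.5, 0.0],
       [0.0, 0.0, 0.0, 0.0]]
deph = dephase_A(rho)
diff = [[rho[i][j] - deph[i][j] for j in range(4)] for i in range(4)]
d = hs_norm(diff)   # nonzero: rho is disturbed by this particular measurement
```

For this product state the x-basis measurement on A would be nondisruptive, so the optimized measure vanishes; the fixed-basis value above is only an upper bound on it.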
68.
In this paper, we present a faster-than-real-time implementation of a class of dense stereo vision algorithms on a low-power massively parallel SIMD architecture, the CSX700. With two cores, each with 96 Processing Elements, this SIMD architecture provides a peak computation power of 96 GFLOPS while consuming only 9 Watts, making it an excellent candidate for embedded computing applications. Exploiting the full features of this architecture, we have developed schemes for an efficient parallel implementation with a minimum of overhead. For the sum of squared differences (SSD) algorithm on VGA (640 × 480) images with disparity ranges of 16 and 32, we achieve 179 and 94 frames per second (fps), respectively. For HDTV (1,280 × 720) images with disparity ranges of 16 and 32, we achieve 67 and 35 fps, respectively. We have also implemented more accurate, and hence more computationally expensive, variants of the SSD, and for most cases, particularly for VGA images, we have achieved faster-than-real-time performance. Our results clearly demonstrate that, with careful parallelization schemes, the CSX architecture can provide excellent performance and flexibility for various embedded vision applications.  Similar articles
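The SSD matching at the heart of these kernels can be sketched in scalar form; the scanlines, window size and disparity range below are tiny toy values, not the paper's VGA/HDTV settings or its CSX700 parallelization.

```python
# Minimal SSD block matching on one rectified scanline pair. Toy data and
# window sizes; the paper parallelizes this kernel across 2 x 96 PEs.

def ssd_disparity(left, right, x, half_win, max_disp):
    """Disparity at column x of the left scanline, by minimum SSD cost."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - half_win - d < 0:
            break                    # window would fall off the right image
        cost = sum((left[x + k] - right[x + k - d]) ** 2
                   for k in range(-half_win, half_win + 1))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

right_row = [10, 10, 50, 90, 50, 10, 10, 10, 10, 10]
left_row  = [10, 10, 10, 10, 50, 90, 50, 10, 10, 10]  # same edge, shifted by 2
disp = ssd_disparity(left_row, right_row, x=5, half_win=1, max_disp=4)
```

The inner cost loop is data-parallel over pixels and disparities, which is what makes SIMD mapping effective.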
69.
Dysfluency and stuttering are breaks or interruptions of normal speech, such as repetitions, prolongations, interjections of syllables, sounds, words or phrases, and involuntary silent pauses or blocks in communication. Stuttering assessment through manual classification of speech dysfluencies is subjective, inconsistent, time-consuming and prone to error. This paper proposes an objective evaluation of speech dysfluencies based on the wavelet packet transform with sample entropy features. Dysfluent speech signals are decomposed into six levels using the wavelet packet transform. Sample entropy (SampEn) features are extracted at every level of decomposition and used to characterize the speech dysfluencies (stuttered events). Three different classifiers, k-nearest neighbor (kNN), a linear discriminant analysis (LDA) based classifier and the support vector machine (SVM), are used to investigate the performance of the sample entropy features for the classification of speech dysfluencies. A 10-fold cross-validation method is used to test the reliability of the classifier results. The effect of different wavelet families on classification performance is also examined. Experimental results demonstrate that the proposed features and classification algorithms give a very promising classification accuracy of 96.67% with a standard deviation of 0.37, and that the proposed method can help speech-language pathologists classify speech dysfluencies.  Similar articles
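The SampEn feature itself can be sketched in a few lines; this plain O(N²) version runs on a short made-up periodic signal rather than on wavelet packet coefficients of speech, and the `m` and `r` settings are illustrative defaults.

```python
import math

# Plain O(N^2) sample entropy: -log of the ratio of (m+1)-length to m-length
# template matches within tolerance r. Toy signal; the paper applies this to
# wavelet packet sub-band coefficients of dysfluent speech.

def sample_entropy(x, m=2, r=0.2):
    def count_matches(k):
        templates = [x[i:i + k] for i in range(len(x) - k + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

sig = [0.0, 0.5, 1.0, 0.5, 0.0, 0.5, 1.0, 0.5, 0.0, 0.5]  # regular pattern
h = sample_entropy(sig, m=2, r=0.2)
```

Regular signals give low SampEn and irregular ones give high SampEn, which is the property that separates fluent from dysfluent sub-bands.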
70.
In this paper, we propose a source localization algorithm based on spatial sparsity and a sparse Fast Fourier Transform (FFT)-based feature extraction method. We represent the sound source positions as a sparse vector by discretely segmenting the space with a circular grid. The location vector is related to the microphone measurements through a linear equation, which can be estimated at each microphone. For this linear dimensionality reduction, we utilize Compressive Sensing (CS) and a two-level FFT-based feature extraction method that combines two sets of audio signal features, covering both the short-time and long-time properties of the signal. The proposed feature extraction method leads to a sparse representation of the audio signals, and as a result a significant reduction in their dimensionality is achieved. In comparison to state-of-the-art methods, the proposed method improves accuracy while reducing complexity in some cases.  Similar articles
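The spatial-sparsity model (a sparse location vector on a circular grid, observed through a linear dictionary) can be illustrated with a toy one-source recovery; the cosine "steering" columns, grid size and single matching-pursuit step below are invented stand-ins for the paper's measurement model.

```python
import math

# Toy spatial-sparsity sketch: G candidate grid cells, a linear model
# y = D s with s sparse, and one matching-pursuit step to find the active
# cell. The cosine dictionary is a simplified, hypothetical steering model.

G, M = 8, 16                         # grid cells, measurement length

def column(g):
    """Hypothetical unit-norm dictionary column for grid cell g."""
    v = [math.cos(2 * math.pi * (g + 1) * t / M) for t in range(M)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

D = [column(g) for g in range(G)]
true_cell = 3
y = [2.0 * x for x in D[true_cell]]  # one active source with gain 2

# matching-pursuit step: pick the cell whose column best correlates with y
corr = [abs(sum(a * b for a, b in zip(D[g], y))) for g in range(G)]
est = max(range(G), key=corr.__getitem__)
```

With more sources or noise, a full sparse solver (e.g. iterated matching pursuit or an l1 minimization) replaces the single correlation step.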
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号