991.
This paper presents a new text document classification framework that uses the Support Vector Machine (SVM) approach in the training phase and the Euclidean distance function in the classification phase, coined Euclidean-SVM. The SVM constructs a classifier by generating a decision surface, namely the optimal separating hyperplane, to partition different categories of data points in the vector space. The concept of the optimal separating hyperplane can be generalized to non-linearly separable cases by introducing kernel functions that map the data points from the input space into a high-dimensional feature space in which they can be separated by a linear hyperplane. As a consequence, the choice of kernel function has a high impact on the classification accuracy of the SVM. Besides the kernel function, the value of the soft-margin parameter C is another critical component in determining the performance of the SVM classifier. Hence, one critical problem of the conventional SVM classification framework is the need to determine an appropriate kernel function and an appropriate value of C for each dataset, whose characteristics vary, in order to guarantee high classifier accuracy. In this paper, we introduce a distance-measurement technique that uses the Euclidean distance function to replace the optimal separating hyperplane as the classification decision function of the SVM. In our approach, the support vectors for each category are identified from the training data points during the training phase using the SVM. In the classification phase, when a new data point is mapped into the original vector space, the average distances between the new data point and the support vectors of the different categories are measured using the Euclidean distance function. The classification decision is made based on the category of support vectors with the lowest average distance to the new data point, which makes the decision independent of the quality of the hyperplane formed by the particular kernel function and soft-margin parameter. We tested our proposed framework on several text datasets. The experimental results show that the accuracy of the Euclidean-SVM text classifier is largely insensitive to the choice of kernel function and soft-margin parameter C.
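The Euclidean-SVM idea described above can be sketched as follows. This is a minimal illustration using scikit-learn's `SVC` as a stand-in trainer (the paper does not specify a library); the toy data and function names are assumptions for illustration only.

```python
# Sketch of Euclidean-SVM: train an SVM only to identify support vectors,
# then classify by average Euclidean distance to each class's support vectors.
import numpy as np
from sklearn.svm import SVC

def train_euclidean_svm(X, y, kernel="rbf", C=1.0):
    """Train an SVM and group its support vectors by class."""
    svm = SVC(kernel=kernel, C=C).fit(X, y)
    sv_by_class = {}
    # svm.n_support_ gives the number of support vectors per class, in the
    # order of svm.classes_; svm.support_vectors_ is stacked accordingly.
    start = 0
    for label, n in zip(svm.classes_, svm.n_support_):
        sv_by_class[label] = svm.support_vectors_[start:start + n]
        start += n
    return sv_by_class

def classify(x, sv_by_class):
    """Assign x to the class whose support vectors have the lowest
    average Euclidean distance to x."""
    avg_dist = {
        label: np.mean(np.linalg.norm(svs - x, axis=1))
        for label, svs in sv_by_class.items()
    }
    return min(avg_dist, key=avg_dist.get)

# Toy usage: two well-separated clusters.
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 5.2], [4.9, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])
model = train_euclidean_svm(X, y)
print(classify(np.array([0.1, 0.1]), model))  # expected: 0
```

Note how the kernel and C affect only which points become support vectors; the final decision uses plain Euclidean distance, which is the source of the claimed insensitivity.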
992.
Embedded systems are now widely used in ubiquitous computing environments, including digital set-top boxes, mobile phones, and USNs (Ubiquitous Sensor Networks). Security has grown in significance because it must be embedded in all of these systems. Many researchers have made efforts to verify the integrity of application binaries downloaded to embedded systems; existing solutions fall into hardware-based and software-based methods. In this research, a software-based approach is employed. Unlike the existing work (Seshadri et al., Proc. IEEE Symposium on Security and Privacy, 2004; Seshadri et al., Proc. Symposium on Operating Systems Principles, 2005) based on the standardized model (TTAS.KO-11.0054, 2006) published in Korea, there is no extra verifier, and the verification function is performed within the target system itself. Contrary to the previous schemes (Jung et al., 2008; Lee et al., LNCS, vol. 4808, pp. 346–355, 2007), the verification result is stored in a single validation check bit in the i-node structure, instead of storing a signature value for each application binary file, in order to reduce run-time execution overhead. Consequently, the proposed scheme is more efficient: it dramatically reduces storage overhead, and computationally it performs one hash computation on initial execution and thereafter compares only the single validation check bit, instead of running signature and hash algorithms for every application binary. Furthermore, in cases where the i-node structure or file data changes frequently, the scheme provides far more effective verification performance than the previous schemes.
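The hash-once-then-check-bit behavior described above can be sketched in user-space Python. This is a hypothetical model of the scheme's fast/slow paths; a real implementation would hook into the kernel's exec path and store the bit in i-node metadata, and all names here are illustrative assumptions.

```python
# Toy model of the validation-check-bit scheme: hash a binary once on first
# execution, set the bit, and invalidate the bit whenever the file changes.
import hashlib

TRUSTED_HASHES = set()   # hashes registered at install time
validation_bit = {}      # per-file stand-in for the i-node check bit

def register(path, data):
    TRUSTED_HASHES.add(hashlib.sha256(data).hexdigest())
    validation_bit[path] = 0   # not yet verified

def invalidate(path):
    """Called whenever the file's data or i-node changes."""
    validation_bit[path] = 0

def may_execute(path, data):
    # Fast path: already verified since the last modification -> one bit check.
    if validation_bit.get(path) == 1:
        return True
    # Slow path: one hash computation on first execution.
    if hashlib.sha256(data).hexdigest() in TRUSTED_HASHES:
        validation_bit[path] = 1
        return True
    return False
```

The efficiency claim corresponds to the fast path: after the first execution, every subsequent launch costs a single bit comparison rather than a hash or signature verification.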
993.
This paper describes a scheme for proactive human search for a designated person in an unexplored indoor environment, without human operation or intervention. For human identification with prior information, a new approach is proposed that is robust to the illumination and distance variations of indoor environments. In addition, a substantial exploration method with an octree structure, suitable for path planning in an office configuration, is employed. All of these functionalities are integrated in a message- and component-based architecture for efficient integration and control of the system. The approach is demonstrated by successfully performing human search in the challenging robot mission of the 2009 Robot Grand Challenge Contest.
994.
In the blogosphere, there exist posts relevant to a particular subject and blogs that show interest in that subject. In this paper, we define such a set of posts and blogs as a blog community and propose a method for extracting the blog community associated with a particular subject. The proposed method is based on the idea that blogs that have performed actions (e.g., read, comment, trackback, scrap) on the posts of a particular subject are the ones with interest in the subject, and that the posts that have received actions from such blogs are the ones that contain the subject. The method starts with a small number of manually selected seed posts containing the subject. It then selects the blogs that have performed more than a threshold number of actions on the seed posts, and the posts that have received more than a threshold number of actions from those blogs. Repeating these two steps gradually expands the blog community. This paper presents various techniques to improve the accuracy of the proposed method. The experimental results show that it achieves a higher level of accuracy than methods proposed in prior research. The paper also discusses business applications of the extracted community, such as target marketing, market monitoring, improving search results, finding power bloggers, and revitalizing the blogosphere.
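The iterative expansion described above can be sketched as a fixed-point loop over an action log. The `actions` format, thresholds, and function names below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of the blog-community expansion: alternate between selecting blogs
# that act on community posts and posts that receive actions from community
# blogs, until the community stops growing.
from collections import Counter

def extract_community(actions, seed_posts, blog_min=2, post_min=2, rounds=10):
    """actions: iterable of (blog, post) pairs, one per action."""
    posts, blogs = set(seed_posts), set()
    for _ in range(rounds):
        # Blogs that acted on community posts at least blog_min times.
        blog_counts = Counter(b for b, p in actions if p in posts)
        new_blogs = {b for b, n in blog_counts.items() if n >= blog_min}
        # Posts receiving at least post_min actions from community blogs.
        post_counts = Counter(p for b, p in actions if b in new_blogs)
        new_posts = posts | {p for p, n in post_counts.items() if n >= post_min}
        if new_blogs == blogs and new_posts == posts:
            break  # fixed point reached: community no longer grows
        blogs, posts = new_blogs, new_posts
    return blogs, posts

# Toy usage: b1 and b2 interact with the seed subject; b3/p9 are unrelated.
actions = [("b1", "p1"), ("b1", "p2"), ("b2", "p1"),
           ("b2", "p2"), ("b2", "p3"), ("b3", "p9")]
community = extract_community(actions, ["p1"], blog_min=1, post_min=2)
```

The thresholds control precision versus recall: raising them keeps the community small and on-topic, lowering them pulls in more peripheral blogs and posts.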
995.
Object recognition is a well-studied but extremely challenging field. We present a novel approach to feature construction for object detection called Evolution-COnstructed features (ECO features). Most current approaches rely on human experts to construct features for object recognition. ECO features are constructed automatically by employing a standard genetic algorithm to discover multiple series of transforms that are highly discriminative. Using ECO features provides several advantages over other object detection algorithms: no need for a human expert to build feature sets or tune their parameters, the ability to generate specialized feature sets for different objects, no restriction to particular types of image sources, and the ability to find both global and local feature types. Our experiments show that ECO features compete well against state-of-the-art object recognition algorithms.
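The genetic search over transform series can be illustrated with a deliberately tiny sketch. The transform set, 1-D signals, and separability-based fitness below are simplified placeholders for illustration, not the paper's actual operator set or evaluation.

```python
# Toy sketch of evolving a short series of transforms (the ECO-features idea):
# a genome is a list of transform names; fitness rewards class separability.
import random
import numpy as np

TRANSFORMS = {
    "blur":   lambda a: (a + np.roll(a, 1)) / 2.0,
    "grad":   lambda a: np.abs(np.diff(a, append=a[-1])),
    "square": lambda a: a ** 2,
    "norm":   lambda a: (a - a.mean()) / (a.std() + 1e-9),
}

def apply_series(genome, signal):
    for name in genome:
        signal = TRANSFORMS[name](signal)
    return signal

def fitness(genome, class_a, class_b):
    """Score a transform series by how well it separates two toy classes."""
    fa = [apply_series(genome, s).mean() for s in class_a]
    fb = [apply_series(genome, s).mean() for s in class_b]
    spread = np.std(fa + fb) + 1e-9
    return abs(np.mean(fa) - np.mean(fb)) / spread

def evolve(class_a, class_b, pop=20, length=3, gens=15):
    names = list(TRANSFORMS)
    population = [[random.choice(names) for _ in range(length)]
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=lambda g: -fitness(g, class_a, class_b))
        parents = scored[: pop // 2]          # truncation selection
        children = []
        for g in parents:
            child = g[:]                      # copy, then point-mutate
            child[random.randrange(length)] = random.choice(names)
            children.append(child)
        population = parents + children
    return max(population, key=lambda g: fitness(g, class_a, class_b))
```

The point of the sketch is the search structure: no human picks the transforms or their order; selection pressure on discriminability does.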
996.
Accurate depth estimation is a challenging yet essential step in converting a 2D image sequence to a 3D stereo sequence. We present a novel approach to constructing a temporally coherent depth map for each image in a sequence; the quality of the estimated depth is high enough for the purpose of 2D-to-3D stereo conversion. Our approach first combines the video sequence into a panoramic image. A user can scribble on this single panoramic image to specify depth information, and the depth is then propagated to the remainder of the panoramic image. This depth map is then remapped to the original sequence and used as the initial guess for each individual depth map in the sequence. Our approach greatly simplifies the user interaction required to assign depth and allows relatively free camera movement during the generation of the panoramic image. We demonstrate the effectiveness of our method by showing stereo-converted sequences with various camera motions.
997.
The brevity of mobile phone messages makes it difficult to distinguish the lexical patterns that identify spam. This paper proposes a novel approach to spam classification for extremely short messages using not only lexical features, which reflect the content of a message, but also new stylistic features, which indicate the manner in which the message is written. Experiments on two mobile phone message collections in two different languages show that the approach significantly outperforms previous content-based approaches, regardless of language.
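The lexical-plus-stylistic split described above can be sketched as two feature extractors feeding one vector. The specific stylistic features below (length and character-class ratios) are illustrative assumptions, not the paper's exact feature set.

```python
# Sketch of combining lexical features (what the message says) with
# stylistic features (how it is written) for short-message spam filtering.
import re
from collections import Counter

def lexical_features(msg):
    """Bag-of-words counts reflecting the content of the message."""
    return Counter(re.findall(r"[\w']+", msg.lower()))

def stylistic_features(msg):
    """Shape-of-writing features that survive even very short texts."""
    n = max(len(msg), 1)
    return {
        "length": len(msg),
        "digit_ratio": sum(c.isdigit() for c in msg) / n,
        "upper_ratio": sum(c.isupper() for c in msg) / n,
        "punct_ratio": sum(c in "!?$%" for c in msg) / n,
    }

def features(msg):
    """One merged feature dict, with prefixes separating the two families."""
    f = {f"w:{w}": c for w, c in lexical_features(msg).items()}
    f.update({f"s:{k}": v for k, v in stylistic_features(msg).items()})
    return f
```

Stylistic ratios are attractive for short messages precisely because they do not depend on any particular word appearing, so they remain informative when the lexical evidence is only a handful of tokens.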
998.
Feature selection plays an important role in pattern recognition systems. In this study, we explore the problem of selecting effective heart rate variability (HRV) features for recognizing congestive heart failure (CHF) based on mutual information (MI). The MI-based greedy feature selection approach proposed by Battiti is adopted, with the mutual information conditioned on the first-selected feature used as the selection criterion. A uniform distribution assumption is used to reduce the computational load, and a logarithmic exponent weighting is added to model the relative importance of the MI with respect to the number of already-selected features. The CHF recognition system contains a feature extractor that generates 50 features in four categories from the input HRV sequences. The proposed feature selector, termed UCMIFS, then selects the most effective features for the succeeding support vector machine (SVM) classifier. Prior to feature selection, the 50 features produced a high accuracy of 96.38%, confirming the representativeness of the original feature set. The UCMIFS selector is demonstrated to be superior to other MI-based feature selectors, including MIFS-U, CMIFS, and mRMR. Compared with other outstanding selectors published in the literature, UCMIFS achieves up to 97.59% accuracy in recognizing CHF using only 15 features. The results demonstrate the advantage of the recruited features in characterizing HRV sequences for CHF recognition. The UCMIFS selector further improves the efficiency of the recognition system, with substantially lower feature dimensionality and an elevated recognition rate.
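The greedy MI-based selection family that UCMIFS belongs to can be sketched with Battiti's original MIFS criterion (relevance minus weighted redundancy). This is a simplified baseline, not the paper's UCMIFS variant: the conditioning on the first-selected feature and the logarithmic exponent weighting are omitted, and the estimators used here are assumptions.

```python
# Sketch of Battiti-style greedy MI feature selection (MIFS):
# repeatedly pick the feature maximizing I(f; y) - beta * sum I(f; selected).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def mifs(X, y, k, beta=0.5, bins=8):
    n_features = X.shape[1]
    # Relevance: MI between each feature and the class label.
    relevance = mutual_info_classif(X, y, random_state=0)
    # Discretize features so pairwise MI between them can be estimated.
    Xd = np.stack(
        [np.digitize(X[:, j], np.histogram_bin_edges(X[:, j], bins))
         for j in range(n_features)], axis=1)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            # Redundancy: MI between candidate and already-selected features.
            redundancy = sum(mutual_info_score(Xd[:, j], Xd[:, s])
                             for s in selected)
            score = relevance[j] - beta * redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

The redundancy penalty is what distinguishes this family from ranking by relevance alone: a feature highly correlated with an already-selected one is demoted even if its own MI with the label is high.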
999.
1000.
This paper addresses the issue of data governance in a cloud-based storage system. To achieve fine-grained access control over outsourced data, we propose Self-Healing Attribute-based Privacy-Aware Data Sharing in Cloud (SAPDS). The proposed system delegates the key distribution and management process to a cloud server without leaking any confidential information. It enables the data owner to restrict the access of users with whom data has been shared. User revocation is achieved by changing just one attribute associated with the decryption policy, instead of modifying the entire access control policy. Authorized users can update their decryption keys after each user revocation, making the system self-healing, without ever interacting with the data owner. Computational analysis of the proposed system shows that the data owner can revoke n′ users with complexity O(n′), while legitimate users can update their decryption keys with complexity O(1).
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号