991.
Social media services such as YouTube and Flickr have become online necessities for millions of users worldwide. Social media are online services that enable users to share content, opinions, and perspectives and to communicate with other users. Social media place an emphasis on the shared experience between users, which we call co-experience. However, the online characteristics of social media increase the psychological distance between users, which in turn decreases the quality of co-experience. Hence, the goal of this study is to theoretically model and empirically verify the antecedents and user experience-based consequences of psychological distance in a social media-enhanced real-time streaming video service. To reduce psychological distance, we introduce two system elements: inhabited space (the degree of being situated in context and in a meaningful place) and isomorph effects (the degree of preserving the structure of a user's actions). We constructed a social media-enhanced real-time streaming video service prototype and conducted a field experiment with actual social media users. The prototype, which streamed a live baseball game, enabled users to view the game simultaneously from remote locations and to interact with each other through cheering tools. The results indicate that inhabited space and isomorph effects reduce the psychological distance between users, which in turn enhances co-experience. The paper ends with theoretical and practical implications of the study.
992.
Although supplier involvement in new product development (NPD) projects has become increasingly important for strengthening a firm's competitive position, few studies have investigated the impact of changing research and development (R&D) workloads in NPD. Based on the level of design-related characteristics, including design-related communication and design-related nature, we identify four types of supplier–manufacturer relationships: sequential mode, passive supplier involvement, active supplier involvement, and strategic development. Unlike qualitative and survey-oriented research, this study proposes a system dynamics model for the quantitative exploration of workload impacts on R&D-system equilibrium under different supplier–manufacturer relationships. We evaluate experimentally the NPD performance of these supplier–manufacturer relationship configurations under workload impacts and provide managerial insights.
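The stock-and-flow logic at the heart of such a system dynamics model can be sketched minimally: a backlog of R&D tasks (a stock) is fed by an incoming workload flow and drained by a capacity-limited completion flow, and the system reaches equilibrium only when capacity covers the inflow. The function and parameter names below are illustrative, not the paper's actual model:

```python
def simulate_rd_backlog(inflow, capacity, steps=100, dt=1.0):
    """Minimal stock-and-flow sketch: an R&D task backlog (stock) fed by
    incoming workload and drained by a capacity-limited completion rate."""
    backlog, history = 0.0, []
    for t in range(steps):
        # Completion cannot exceed capacity, nor empty more than the backlog.
        completion = min(backlog / dt, capacity)
        backlog += (inflow(t) - completion) * dt  # Euler integration of the stock
        history.append(backlog)
    return history
```

With a constant inflow of 5 tasks per period and capacity 10, the backlog settles at an equilibrium; with inflow above capacity, the backlog grows without bound, the kind of workload-driven disequilibrium the model is meant to expose.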
993.
Reliable routing of packets in a Mobile Ad Hoc Network (MANET) has always been a major concern. The open medium and the susceptibility of the nodes to faults make the design of protocols for these networks a challenging task. Faults in these networks, which occur either through the failure of nodes or through reorganization, can lead to packet loss. Such losses degrade the performance of the routing protocols running on them. In this paper, we propose a routing algorithm, named the Learning Automata based Fault-Tolerant Routing Algorithm (LAFTRA), which is capable of routing in the presence of faulty nodes in MANETs using multipath routing. We use the theory of Learning Automata (LA) to optimize the selection of paths, reduce the overhead in the network, and learn about the faulty nodes present in the network. The proposed algorithm can be combined with any existing routing protocol in a MANET. Simulation results for our protocol using network simulator 2 (ns-2) show an increase in packet delivery ratio and a decrease in overhead compared to existing protocols. With nearly 30% faulty nodes in the network, the proposed protocol gains an edge of nearly 2% over FTAR and E2FT, and of more than 10% over AODV, in terms of packet delivery ratio. Under the same conditions, the overhead generated by our protocol is about 1% lower than that of FTAR and nearly 17% lower than that of E2FT.
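A common learning automata scheme for this kind of path selection is the linear reward-inaction (L_RI) update: when a packet is delivered over a path, that path's selection probability is reinforced and the others are proportionally reduced; on failure, probabilities are left unchanged. The sketch below is illustrative of the general L_RI rule, not necessarily LAFTRA's exact update:

```python
def update_path_probabilities(probs, chosen, delivered, reward=0.1):
    """Linear reward-inaction (L_RI) update over candidate paths.

    probs: current selection probabilities (sum to 1); chosen: index of the
    path used; delivered: whether the packet was delivered successfully.
    """
    if not delivered:
        return probs[:]  # inaction: penalties are ignored in L_RI
    new = [p * (1 - reward) for p in probs]          # shrink all paths
    new[chosen] = probs[chosen] + reward * (1 - probs[chosen])  # reward chosen
    return new
```

Repeated application concentrates probability mass on reliable paths, so routes through faulty nodes are gradually selected less often without any explicit fault-detection step.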
994.
This paper presents the implementation of a new text document classification framework that uses the Support Vector Machine (SVM) approach in the training phase and the Euclidean distance function in the classification phase, coined Euclidean-SVM. The SVM constructs a classifier by generating a decision surface, namely the optimal separating hyper-plane, to partition different categories of data points in the vector space. The concept of the optimal separating hyper-plane can be generalized to non-linearly separable cases by introducing kernel functions that map the data points from the input space into a high-dimensional feature space, where they can be separated by a linear hyper-plane. As a result, the choice of kernel function has a high impact on the classification accuracy of the SVM. Besides the kernel function, the value of the soft margin parameter C is another critical component in determining the performance of the SVM classifier. Hence, one of the critical problems of the conventional SVM classification framework is the need to determine the appropriate kernel function and the appropriate value of C for each dataset of varying characteristics in order to guarantee high accuracy of the classifier. In this paper, we introduce a distance measurement technique, using the Euclidean distance function to replace the optimal separating hyper-plane as the classification decision-making function in the SVM. In our approach, the support vectors for each category are identified from the training data points during the training phase using the SVM. In the classification phase, when a new data point is mapped into the original vector space, the average distances between the new data point and the support vectors from the different categories are measured using the Euclidean distance function. The classification decision is made based on the category of support vectors with the lowest average distance to the new data point, which makes the decision independent of the efficacy of the hyper-plane formed by any particular kernel function and soft margin parameter. We tested our proposed framework on several text datasets. The experimental results show that this approach makes the accuracy of the Euclidean-SVM text classifier largely insensitive to the choice of kernel function and soft margin parameter C.
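The decision rule in the classification phase can be sketched directly, assuming the per-category support vectors have already been identified during SVM training. The function and data names below are illustrative:

```python
import math

def euclidean_svm_classify(support_vectors, x):
    """Assign x to the category whose support vectors have the lowest
    average Euclidean distance to x.

    support_vectors: dict mapping category label -> list of support vectors
    (assumed to have been extracted beforehand in the SVM training phase).
    """
    def avg_dist(svs):
        return sum(math.dist(sv, x) for sv in svs) / len(svs)

    return min(support_vectors, key=lambda label: avg_dist(support_vectors[label]))
```

For example, with toy support vectors `{"sports": [(0, 0), (1, 0)], "politics": [(5, 5), (6, 5)]}`, a new point near the origin is assigned to "sports" regardless of which kernel produced those support vectors, which is exactly the kernel-insensitivity the framework claims.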
995.
Currently, embedded systems are widely used in ubiquitous computing environments, including digital set-top boxes, mobile phones, and USNs (Ubiquitous Sensor Networks). Security has grown in significance because it must be embedded in all of these systems. Many researchers have made efforts to verify the integrity of application binaries downloaded to embedded systems. Existing solutions divide into hardware-based and software-based methods; this research takes the software-based approach. Unlike existing work (Seshadri et al., Proc. the IEEE symposium on security and privacy, 2004; Seshadri et al., Proc. the symposium on operating systems principals, 2005) based on the standardized model (TTAS.KO-11.0054, 2006) publicized in Korea, the proposed scheme requires no extra verifier to conduct the verification function in the target system. In contrast to previous schemes (Jung et al., 2008; Lee et al., LNCS, vol. 4808, pp. 346–355, 2007), verification results are stored in one validation check bit, instead of storing a signature value for each application binary file in the i-node structure, in order to reduce run-time execution overhead. Consequently, the proposed scheme is more efficient because it dramatically reduces storage overhead, and in terms of computation it performs one hash algorithm at initial execution and thereafter compares only the one validation check bit, instead of running signature and hash algorithms for every application binary. Furthermore, in cases where there are frequent changes in the i-node structure or file data, the scheme can provide far more effective verification performance than the previous schemes.  
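The hash-once-then-check-bit idea can be sketched minimally: on first execution the binary is hashed and compared against a trusted digest, and on success a single validation bit in the i-node-like entry is set; subsequent executions check only that bit. The class and field names here are illustrative, and a real implementation would clear the bit whenever the i-node or file data changes:

```python
import hashlib

class INodeEntry:
    """Stand-in for an i-node entry holding the trusted digest and the
    1 validation check bit (cleared by the kernel on any file change)."""
    def __init__(self, reference_digest):
        self.reference_digest = reference_digest
        self.valid_bit = 0

def verify_binary(entry, binary_bytes):
    """First execution: hash and compare; afterwards: check the bit only."""
    if entry.valid_bit:
        return True  # fast path: no hashing once verified
    if hashlib.sha256(binary_bytes).hexdigest() == entry.reference_digest:
        entry.valid_bit = 1
        return True
    return False
```

The efficiency claim follows directly: the expensive hash runs once per (unchanged) binary, and every later launch costs a single bit comparison.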
996.
This paper describes a scheme for proactive human search for a designated person in an unexplored indoor environment, without human operation or intervention. For human identification with prior information, we propose a new approach that is robust to illumination and distance variations in the indoor environment. In addition, we employ an exploration method based on an octree structure, suitable for path planning in an office configuration. All of these functionalities are integrated in a message- and component-based architecture for efficient integration and control of the system. The approach is demonstrated by successfully completing the human search task in the challenging robot mission of the 2009 Robot Grand Challenge Contest.
997.
In the blogosphere, there exist posts relevant to a particular subject and blogs that show interest in that subject. In this paper, we define a set of such posts and blogs as a blog community and propose a method for extracting the blog community associated with a particular subject. The proposed method is based on the idea that the blogs that have performed actions (e.g., read, comment, trackback, scrap) on the posts of a particular subject are those with an interest in the subject, and that the posts that have received actions from such blogs are those that contain the subject. The proposed method starts with a small number of manually selected seed posts containing the subject. The method then selects the blogs that have performed more than a threshold number of actions on the seed posts, and the posts that have received more than a threshold number of actions from those blogs. Repeating these two steps gradually expands the blog community. This paper presents various techniques for improving the accuracy of the proposed method. The experimental results show that the proposed method achieves a higher level of accuracy than the methods proposed in prior research. This paper also discusses business applications of the extracted community, such as target marketing, market monitoring, improving search results, finding power bloggers, and revitalization of the blogosphere.
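The two alternating expansion steps can be sketched as a simple fixed-point loop over (blog, post) action pairs. The data representation and thresholds below are illustrative simplifications of the paper's method:

```python
def extract_community(seed_posts, actions, blog_threshold, post_threshold, rounds=10):
    """Iteratively expand a blog community from seed posts.

    actions: set of (blog, post) pairs, each recording one action
    (e.g. a comment, trackback, or scrap) by a blog on a post.
    """
    posts, blogs = set(seed_posts), set()
    for _ in range(rounds):
        # Blogs that acted on enough community posts join the community.
        new_blogs = {b for b in {b for b, _ in actions}
                     if sum((b, p) in actions for p in posts) >= blog_threshold}
        # Posts that received enough actions from community blogs join too.
        new_posts = {p for p in {p for _, p in actions}
                     if sum((b, p) in actions for b in new_blogs) >= post_threshold}
        if new_blogs <= blogs and new_posts <= posts:
            break  # fixed point reached: community stops growing
        blogs |= new_blogs
        posts |= new_posts
    return posts, blogs
```

Starting from a single seed post, each round pulls in the blogs acting on the current posts and the posts those blogs act on, so the community grows until no new members pass the thresholds.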
998.
Object recognition is a well-studied but extremely challenging field. We present a novel approach to feature construction for object detection called Evolution-COnstructed Features (ECO features). Most current approaches rely on human experts to construct features for object recognition. ECO features are constructed automatically by employing a standard genetic algorithm to discover multiple series of transforms that are highly discriminative. Using ECO features provides several advantages over other object detection algorithms: no need for a human expert to build feature sets or tune their parameters, the ability to generate specialized feature sets for different objects, no limitation to certain types of image sources, and the ability to find both global and local feature types. We show in our experiments that ECO features compete well against state-of-the-art object recognition algorithms.
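The core idea of evolving a series of transforms with a standard genetic algorithm can be sketched with a toy transform pool. The transforms here are simple numeric stand-ins for the paper's image transforms (e.g. gradients, blurs, thresholds), and the selection/mutation scheme is a generic illustration, not the paper's exact configuration:

```python
import random

# Hypothetical transform pool; real ECO features compose image transforms.
TRANSFORMS = {
    "inc": lambda v: [x + 1 for x in v],
    "dbl": lambda v: [x * 2 for x in v],
    "neg": lambda v: [-x for x in v],
}

def apply_series(series, v):
    """Apply a genome (an ordered series of transform names) to an input."""
    for name in series:
        v = TRANSFORMS[name](v)
    return v

def evolve(fitness, pop_size=20, length=3, generations=30, seed=0):
    """Minimal genetic algorithm over fixed-length transform series:
    truncation selection plus single-point mutation."""
    rng = random.Random(seed)
    names = list(TRANSFORMS)
    pop = [[rng.choice(names) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the fitter half
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(length)] = rng.choice(names)  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

In the real system, fitness would be the discriminative power of the resulting feature on training images; here any scoring function over a series works.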
999.
Accurate depth estimation is a challenging yet essential step in the conversion of a 2D image sequence to a 3D stereo sequence. We present a novel approach to constructing a temporally coherent depth map for each image in a sequence. The quality of the estimated depth is high enough for the purpose of 2D-to-3D stereo conversion. Our approach first combines the video sequence into a panoramic image. A user can scribble on this single panoramic image to specify depth information. The depth is then propagated to the remainder of the panoramic image. This depth map is then remapped to the original sequence and used as the initial guess for each individual depth map in the sequence. Our approach greatly simplifies the user interaction required for depth assignment and allows for relatively free camera movement during the generation of a panoramic image. We demonstrate the effectiveness of our method by showing stereo-converted sequences with various camera motions.
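The propagation step can be illustrated in its simplest possible form: assigning each unlabeled pixel the depth of its nearest scribbled pixel. This 1D nearest-scribble version is a hypothetical stand-in for the paper's propagation, which would operate on the 2D panorama and respect image edges:

```python
def propagate_depth(width, scribbles):
    """Nearest-scribble depth propagation over a 1D strip of pixels.

    scribbles: dict mapping pixel position -> user-specified depth.
    Each pixel takes the depth of its closest scribbled position.
    """
    positions = sorted(scribbles)
    return [scribbles[min(positions, key=lambda p: abs(p - x))]
            for x in range(width)]
```

Even this crude version shows why sparse scribbles suffice: a handful of labeled positions determines a dense depth assignment everywhere else.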
1000.
The brevity of mobile phone messages makes it difficult to identify spam from lexical patterns alone. This paper proposes a novel approach to spam classification for extremely short messages that uses not only lexical features reflecting the content of a message but also new stylistic features indicating the manner in which the message is written. Experiments on two mobile phone message collections in two different languages show that the approach significantly outperforms previous content-based approaches, regardless of language.
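The combination of lexical and stylistic features can be sketched as a single feature-extraction step. The specific stylistic features chosen here (length, digit ratio, uppercase ratio, exclamation count) are illustrative guesses at the kind of "how it is written" signals the paper describes, not its exact feature set:

```python
import re

def extract_features(msg):
    """Build a feature dict combining lexical token-presence features
    with stylistic features describing how the message is written."""
    tokens = re.findall(r"\w+", msg.lower())
    features = {f"tok:{t}": 1 for t in tokens}          # lexical features
    n = max(len(msg), 1)
    features["sty:length"] = len(msg)                    # stylistic features
    features["sty:digit_ratio"] = sum(c.isdigit() for c in msg) / n
    features["sty:upper_ratio"] = sum(c.isupper() for c in msg) / n
    features["sty:exclaims"] = msg.count("!")
    return features
```

A short spam message like "WIN $1000 now!!" yields few lexical tokens, but its high digit ratio, uppercase ratio, and exclamation count give a classifier stylistic evidence that survives the brevity of the text.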