961.
Complex queries are widely used in current Web applications. They express highly specific information needs, but simply aggregating the meanings of primitive visual concepts does not perform well. To facilitate image search for complex queries, we propose a new image reranking scheme based on concept relevance estimation, which consists of Concept-Query and Concept-Image probabilistic models. Each model comprises visual, web and text relevance estimation. Our approach computes a weighted sum of the underlying relevance scores to obtain a new ranking list. To capture Web semantic context, we incorporate concepts by leveraging lexical and corpus-dependent knowledge, such as WordNet and Wikipedia, together with co-occurrence statistics of tags in our Flickr corpus. Experimental results show that our scheme significantly outperforms existing state-of-the-art approaches.
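A minimal sketch of the weighted-sum score fusion described above. The weight values, the three score channels (visual, web, text) as dictionary fields, and the sample data are illustrative assumptions, not the paper's actual parameters.

```python
# Minimal sketch of weighted-sum relevance fusion for reranking.
# Weights, field names, and sample data are assumptions for illustration.

def fuse_scores(images, weights=(0.5, 0.3, 0.2)):
    """Return images reranked by a weighted sum of relevance scores.

    `images` is a list of dicts with 'visual', 'web', and 'text' scores in [0, 1].
    """
    w_visual, w_web, w_text = weights
    for img in images:
        img["fused"] = (w_visual * img["visual"]
                        + w_web * img["web"]
                        + w_text * img["text"])
    # Higher fused score ranks earlier in the new list.
    return sorted(images, key=lambda img: img["fused"], reverse=True)

if __name__ == "__main__":
    candidates = [
        {"id": "img1", "visual": 0.9, "web": 0.2, "text": 0.4},
        {"id": "img2", "visual": 0.6, "web": 0.8, "text": 0.7},
    ]
    for img in fuse_scores(candidates):
        print(img["id"], round(img["fused"], 3))
```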
962.
Recently, uncertain graph data management and mining techniques have attracted significant interest and research effort due to potential applications such as protein interaction networks and social networks. As a fundamental problem, subgraph similarity all-matching is widely applied in exploratory data analysis; its purpose is to find all similarity occurrences of the query graph in a large data graph. Numerous algorithms and pruning methods have been developed for subgraph matching over a certain graph. However, insufficient effort has been devoted to subgraph similarity all-matching over an uncertain data graph, which is quite challenging due to high computation costs. In this paper, we define the problem of subgraph similarity maximal all-matching over a large uncertain data graph and propose a framework to solve it. To further improve efficiency, several speed-up techniques are proposed, such as partial graph evaluation, vertex pruning, calculation model transformation, incremental evaluation and probability upper-bound filtering. Finally, comprehensive experiments are conducted on real graph data to test the performance of our framework and optimization methods. The results verify that our solutions outperform the basic approach by orders of magnitude in efficiency.
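A simplified sketch of the probability upper-bound filtering idea: assuming each edge of the uncertain graph exists independently with a known probability, a partial embedding can be pruned when even an optimistic bound on its final existence probability falls below the threshold. The bound, threshold, and data layout here are assumptions for illustration, not the paper's exact model.

```python
# Simplified sketch of probability upper-bound filtering over an uncertain graph.
# Assumes independent edge existence probabilities; bound and threshold are illustrative.
from math import prod

def upper_bound(matched_edge_probs, remaining_edge_max_probs):
    """Optimistic bound on the existence probability of a complete embedding:
    the probability of the already-matched edges times the best candidate
    probability for every edge still to be matched."""
    return prod(matched_edge_probs) * prod(remaining_edge_max_probs)

def can_prune(matched_edge_probs, remaining_edge_max_probs, threshold):
    """Prune the partial embedding if even the optimistic bound misses the threshold."""
    return upper_bound(matched_edge_probs, remaining_edge_max_probs) < threshold

if __name__ == "__main__":
    # Two edges matched; the two remaining edges' best candidates have prob 0.9 and 0.7.
    print(can_prune([0.4, 0.5], [0.9, 0.7], threshold=0.2))  # True: 0.126 < 0.2
```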
963.
Social media influence analysis, sometimes also called authority detection, aims to rank users based on their influence scores in social media. Existing approaches to social influence analysis usually focus on developing effective algorithms to quantify users' influence scores. They rarely consider a person's expertise level, which is arguably important to influence measures. In this paper, we propose a computational approach to measuring the correlation between expertise and social media influence, and we take a new perspective on social media influence by incorporating expertise into influence analysis. We carefully constructed a large dataset of 13,684 Chinese celebrities from Sina Weibo (literally "Sina microblogging"). We found a strong correlation between expertise levels and social media influence scores. Our analysis gives a good explanation of the phenomenon of "top across-domain influencers". In addition, different expertise levels show distinct influence patterns: (1) high-expertise celebrities have stronger influence on the "audience" in their expertise domains; (2) expertise appears to be more important than relevance and participation for social media influence; and (3) the audiences of top-expertise celebrities are more likely to forward tweets from high-expertise celebrities on topics outside their expertise domains.
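A small sketch of measuring the correlation between expertise levels and influence scores. Spearman rank correlation is used here as one reasonable choice; the paper's exact statistic is not stated above, and the sample data are hypothetical.

```python
# Sketch: rank correlation between expertise levels and influence scores.
# Spearman's rho and the sample data are assumptions for illustration.
from scipy.stats import spearmanr

expertise_levels = [1, 2, 2, 3, 3, 4, 5, 5]                   # hypothetical ordinal expertise ratings
influence_scores = [0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 0.8, 0.9]   # hypothetical influence scores

rho, p_value = spearmanr(expertise_levels, influence_scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```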
964.
Use case modeling is a popular technique for documenting the functional requirements of software systems. Refactoring is the process of improving the structure of a software artifact without changing its intended behavior. Refactoring, first introduced for source code, has been extended to use case models. Antipatterns are low-quality solutions to commonly occurring design problems. The presence of antipatterns in a use case model is likely to propagate defects to other software artifacts, so detecting and refactoring antipatterns in use case models is crucial for ensuring the overall quality of a software system. Model transformation can greatly ease several software development activities, including model refactoring. In this paper, a model transformation approach is proposed for improving the quality of use case models. Model transformations that detect antipattern instances in a given use case model and refactor them appropriately are defined and implemented. The practicability of the approach is demonstrated by applying it to a case study of a biodiversity database system. The results show that model transformations can efficiently improve the quality of use case models while saving time and effort.
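A minimal sketch of automated antipattern detection on a use case model encoded as plain data. The model encoding and the chosen antipattern (a use case with no associated actor) are illustrative assumptions; the paper defines its own antipattern catalogue and uses model transformations rather than this ad-hoc check.

```python
# Sketch: detecting a simple antipattern in a use case model encoded as plain data.
# The encoding and the "use case with no associated actor" antipattern are
# illustrative assumptions, not the paper's catalogue or metamodel.

def find_unassociated_use_cases(model):
    """Return names of use cases that no actor is associated with."""
    associated = {assoc["use_case"] for assoc in model["associations"]}
    return [uc for uc in model["use_cases"] if uc not in associated]

if __name__ == "__main__":
    model = {
        "actors": ["Researcher", "Curator"],
        "use_cases": ["Record Species", "Export Dataset", "Audit Log"],
        "associations": [
            {"actor": "Researcher", "use_case": "Record Species"},
            {"actor": "Curator", "use_case": "Export Dataset"},
        ],
    }
    print(find_unassociated_use_cases(model))  # ['Audit Log']
```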
965.
Utilising the methodology of content analysis, this study takes a multidisciplinary approach to defining public e-procurement. Aspects of e-procurement drawn from information systems, supply chain management, electronic commerce/electronic government, and public procurement are discussed to arrive at an integrated definition of public e-procurement. E-procurement assimilation is then defined, and its impact on procurement efficiency is evaluated. Following confirmatory factor analysis in structural equation modelling, a dimensional-level analysis using ANOVA is undertaken for three forms of e-procurement technology: e-tendering, e-catalogue management systems, and e-marketplaces. The results show a positive and significant impact of the assimilation process on procurement efficiency.
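A small sketch of a dimensional-level comparison across the three e-procurement technologies named above. The efficiency ratings are made up for illustration, and scipy's one-way ANOVA stands in for the paper's exact analysis.

```python
# Sketch: one-way ANOVA comparing procurement-efficiency scores across the three
# e-procurement technologies. Scores are hypothetical; f_oneway is a stand-in
# for the paper's exact analysis.
from scipy.stats import f_oneway

e_tendering   = [3.8, 4.1, 3.9, 4.3, 4.0]   # hypothetical efficiency ratings
e_catalogue   = [3.2, 3.5, 3.4, 3.6, 3.3]
e_marketplace = [4.4, 4.6, 4.2, 4.5, 4.7]

f_stat, p_value = f_oneway(e_tendering, e_catalogue, e_marketplace)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```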
966.
Participation is the cornerstone of any community. Promoting, understanding and properly managing participation not only keeps the community sustainable but also allows personalized services to be provided to its members and managers. This article presents a case study in which student participation in a course community was motivated using two different extrinsic mechanisms and mediated by a software platform. The results were compared with a baseline community of the same course, in which participation was not motivated by external means. The analysis of these results indicates that managing a partially virtual course community requires the introduction of monitoring services, community managers and extrinsic mechanisms to motivate participation. These findings allow community managers to improve their capability for promoting participation and keeping the community sustainable. The findings also raise several implications that should be considered in the design of software supporting this kind of community, when managing the participation of its members.
967.
Crowdsourcing is currently attracting much attention from organisations for its competitive advantages over traditional work structures in utilising skills and labour and, especially, in harvesting expertise and innovation. Prior research suggests that the decision to crowdsource cannot simply be based on perceived advantages; rather, multiple factors should be considered. However, a structured account and integration of the most important decision factors is still lacking. This research fills the gap by providing a systematic literature review of the decision to crowdsource. Our results identify nine factors and sixteen sub-factors influencing this decision. These factors are structured into a decision framework covering task, people, management, and environmental factors. Based on this framework, we give several recommendations for managers making the crowdsourcing decision.
968.
The growing need for location-based services motivates the moving k nearest neighbor query (MkNN), which requires finding the k nearest neighbors of a moving query point continuously. In most existing solutions, data objects are abstracted as points. However, many real-world data objects, such as roads, rivers or pipelines, are more reasonably modeled as line segments or polyline segments. In this paper, we present the LV*-Diagram to handle MkNN queries over line segment data objects. The LV*-Diagram dynamically constructs a safe region: the query results remain unchanged as long as the query point stays in the safe region, and hence the computation cost of the server is greatly reduced. Experimental results show that our approach significantly outperforms the baseline method w.r.t. CPU load, I/O, and communication costs.
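A sketch of the distance computation underlying a kNN query over line segment data: the k segments nearest to a query point, using point-to-segment distance. This only illustrates the basic geometry; it does not implement the LV*-Diagram or its safe regions, and the sample road segments are hypothetical.

```python
# Sketch: k nearest line segments to a query point via point-to-segment distance.
# Illustrates the distance step of an MkNN query over segment data only;
# it does not implement the LV*-Diagram or safe regions.
import math
import heapq

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment ab (all 2-D tuples)."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:                # degenerate segment
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))              # clamp the projection to the segment
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def knn_segments(query, segments, k):
    """Return the k segments nearest to the query point."""
    return heapq.nsmallest(
        k, segments, key=lambda s: point_segment_distance(query, s[0], s[1]))

if __name__ == "__main__":
    roads = [((0, 0), (0, 5)), ((2, 2), (6, 2)), ((5, 5), (9, 9))]
    print(knn_segments((1, 1), roads, k=2))
```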
969.
This work introduces and establishes a new model for cache management in which clients express preferences for how long they are willing to wait and how much obsolescence they are willing to tolerate. The cache uses these preferences to decide on the entrance and exit of objects to and from its storage, and to select the best copy of a requested object among all available copies (fresh, cached, remote). We introduce three replacement policies, each of which evicts objects based on ongoing scores that combine users' preferences with other object properties such as size, obsolescence rate and popularity. Each replacement algorithm follows a different strategy: (a) an optimal solution that uses a dynamic programming approach to find the best objects to keep; (b) another optimal solution that uses a branch-and-bound approach to find the worst objects to throw out; and (c) an algorithm that uses a heuristic approach to efficiently select the objects to be evicted. Using these replacement algorithms, the cache keeps the objects best suited to users' preferences and discards the others. We compare our proposed algorithms to the Least-Recently-Used algorithm and provide evidence of their advantages: better service to the cache's users with less burden on network resources and reduced workloads on origin servers.
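A sketch in the spirit of the heuristic policy (c): evict the lowest-scoring objects until enough space is freed, where the score combines user preferences with size, obsolescence rate and popularity. The scoring formula, weights and object fields are assumptions for illustration, not the paper's actual score.

```python
# Sketch of score-based eviction in the spirit of the heuristic policy (c).
# The scoring formula, weights and object fields are illustrative assumptions.

def eviction_score(obj, prefs):
    """Lower score = better eviction candidate.
    Popular, small, slowly-aging objects score high and tend to be kept."""
    freshness_penalty = obj["obsolescence_rate"] * prefs["staleness_tolerance"]
    return (prefs["weight_popularity"] * obj["popularity"]
            - prefs["weight_size"] * obj["size_kb"]
            - freshness_penalty)

def evict(cache, prefs, space_needed_kb):
    """Evict lowest-scoring objects until enough space is freed; return the rest."""
    freed, victims = 0, []
    for obj in sorted(cache, key=lambda o: eviction_score(o, prefs)):
        if freed >= space_needed_kb:
            break
        victims.append(obj)
        freed += obj["size_kb"]
    return [o for o in cache if o not in victims]

if __name__ == "__main__":
    prefs = {"staleness_tolerance": 0.5, "weight_popularity": 2.0, "weight_size": 0.01}
    cache = [
        {"id": "a", "popularity": 0.9, "size_kb": 120, "obsolescence_rate": 0.1},
        {"id": "b", "popularity": 0.2, "size_kb": 300, "obsolescence_rate": 0.8},
    ]
    print([o["id"] for o in evict(cache, prefs, space_needed_kb=250)])  # ['a']
```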
970.
Knowledge collaboration (KC) is an important strategic measure for improving knowledge management, focusing not only on the efficiency of knowledge cooperation but also on adding value to intellectual and social capital. In virtual teams, many factors, such as a team's network characteristics, collaborative culture, and individual collaborative intention, affect the performance of KC. By discussing the nature of KC, this paper argues that KC performance can be measured from two aspects: effectiveness of collaboration and efficiency of cooperation. Effectiveness of collaboration is measured through value added, and efficiency of cooperation is measured through accuracy and timeliness. The paper then discusses the factors affecting the performance of KC in terms of network characteristics, individual attributes and team attributes. The results show that network characteristics, individual attributes and team attributes in virtual teams have significant impacts on the performance of KC.