181.
Predicting defect-prone software modules using support vector machines
Effective prediction of defect-prone software modules can enable software developers to focus quality-assurance activities and allocate effort and resources more efficiently. Support vector machines (SVMs) have been successfully applied to both classification and regression problems in many applications. This paper evaluates the capability of SVMs to predict defect-prone software modules and compares their prediction performance against eight statistical and machine-learning models on four NASA datasets. The results indicate that the prediction performance of SVMs is generally better than, or at least competitive with, that of the compared models.
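As a rough illustration of the approach, the sketch below trains an RBF-kernel SVM classifier on module-level metrics. The synthetic dataset, feature semantics, and parameters are illustrative stand-ins, not the paper's experimental setup (which used four NASA datasets):

```python
# Minimal sketch of SVM-based defect prediction, assuming scikit-learn.
# Features and labels are synthetic stand-ins for real module metrics.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 4))        # e.g. size/complexity metrics (hypothetical columns)
y = (X[:, 1] + rng.normal(0, 0.2, 200) > 0.6).astype(int)   # 1 = defect-prone

# RBF-kernel SVM; feature scaling matters because SVMs are distance-based.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```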
182.
Parametric software cost estimation models are based on mathematical relations, obtained from the study of historical software project databases, that are intended to estimate the effort and time required to develop a software product. Those databases often integrate data from projects of a heterogeneous nature, which makes it difficult to obtain a single, reasonably reliable parametric model across the range of diverging project sizes and characteristics. One solution proposed elsewhere is the use of segmented models, in which several models are combined into a single one, each contributing to the estimate depending on the characteristics of the inputs. However, segmented models raise a second problem: the assignment of concrete projects to segments or clusters is subject to a degree of fuzziness, i.e. a given project can be considered to belong to several segments to different degrees. This paper reports a first exploration of a solution to both problems together, using a segmented model based on fuzzy clusters of the project space. Fuzzy clustering yields a different mathematical model for each cluster and allows the items of a project database to contribute to more than one cluster, while preserving constant-time execution of the estimation process. The results of an evaluation of a concrete model using the ISBSG 8 project database are reported, yielding better goodness-of-fit figures than its crisp counterpart.
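A minimal sketch of the idea follows, assuming a one-dimensional project space (size) and linear per-cluster models; the paper's actual model and the ISBSG data are not reproduced here:

```python
# Sketch of a fuzzy-clustered segmented estimation model: fuzzy c-means
# memberships over project size, one weighted linear model per cluster, and
# estimates blended by membership degree. All data are synthetic.
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=100):
    """Plain 1-D fuzzy c-means: returns cluster centers and (n, c) memberships."""
    rng = np.random.default_rng(1)
    u = rng.dirichlet(np.ones(c), size=len(x))          # random initial memberships
    for _ in range(iters):
        centers = (u**m).T @ x / (u**m).sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = 1.0 / d ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(0)
size = np.concatenate([rng.uniform(50, 300, 60), rng.uniform(800, 3000, 40)])
effort = np.where(size < 500, 5 + 0.10 * size, -200 + 0.45 * size) \
         + rng.normal(0, 20, size.size)

centers, u = fuzzy_cmeans(size)
models = []
for k in range(u.shape[1]):
    sw = np.sqrt(u[:, k])                               # weighted least squares per cluster
    A = np.vstack([np.ones_like(size), size]).T
    models.append(np.linalg.lstsq(A * sw[:, None], effort * sw, rcond=None)[0])

def estimate(s, m=2.0):
    """Blend the per-cluster models by the project's membership degrees."""
    d = np.abs(s - centers) + 1e-9
    w = 1.0 / d ** (2.0 / (m - 1.0))
    w /= w.sum()
    return sum(w[k] * (b0 + b1 * s) for k, (b0, b1) in enumerate(models))

print(f"estimate(200)  ~ {estimate(200):.0f} effort units")
print(f"estimate(2000) ~ {estimate(2000):.0f} effort units")
```

Because memberships for a new project are a closed-form function of its distance to the fixed cluster centers, estimation stays constant-time, as the abstract notes.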
183.
The use of Source Code Author Profiles (SCAP) represents a new, highly accurate approach to source code authorship identification that, unlike previous methods, is language independent. While accuracy is clearly a crucial requirement of any author identification method, in cases of litigation regarding authorship, plagiarism, and so on, there is also a need to know why a piece of code is attributed to a particular author. What is it about that piece of code that suggests a particular author? Which features in the code make one author more likely than another? In this study, we describe a means of identifying the high-level features that contribute to source code authorship identification, using the SCAP method as a tool. A variety of features are considered for Java and Common Lisp, and the importance of each feature in determining authorship is measured through a sequence of experiments in which we remove one feature at a time. The results show that, for these programs, comments, layout features, and package-related naming influence classification accuracy, whereas user-defined naming, an obviously programmer-related feature, does not appear to influence accuracy. A comparison is also made between the relative feature contributions in programs written in the two languages.
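A minimal sketch of SCAP-style attribution, assuming profiles built from the most frequent byte-level n-grams and compared by the size of their intersection; author names and code snippets are hypothetical, and a feature-ablation experiment as in the study would simply strip, e.g., comments from the input before profiling:

```python
# SCAP-style sketch: an author's profile is the set of the L most frequent
# byte-level n-grams of their code; a sample is attributed to the author whose
# profile shares the most n-grams with the sample's profile.
from collections import Counter

def profile(code: str, n: int = 6, L: int = 500) -> set:
    grams = Counter(code[i:i + n] for i in range(len(code) - n + 1))
    return {g for g, _ in grams.most_common(L)}

def attribute(sample: str, author_profiles: dict) -> str:
    sp = profile(sample)
    return max(author_profiles, key=lambda a: len(sp & author_profiles[a]))

# Hypothetical training data: concatenated code per author.
authors = {"alice": "public void run() { // fast loop\n    int i = 0;\n}",
           "bob":   "(defun run () ;; lisp style\n  (loop))"}
profiles = {a: profile(code) for a, code in authors.items()}
print(attribute("public int go() { // loop\n    int j = 0;\n}", profiles))
```

Because profiles are built from raw bytes rather than parsed tokens, the approach carries over unchanged between languages such as Java and Common Lisp.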
184.
In practice, the development of software-intensive systems involves a series of self-contained phases over a system's lifecycle. Semantic and temporal gaps, which occur among phases and among developer disciplines within and across phases, hinder the ongoing development of a system because of the interdependencies among phases and among disciplines. Such gaps are magnified among systems that are developed at different times by different development teams, which may limit both the reuse of development artifacts and interoperability among the systems. This article discusses such gaps and a systems development process for avoiding them.
185.
We developed and tested a model of consumer-to-consumer (C2C) e-commerce trust. We expected that two kinds of influences, internal (natural propensity to trust [NPT] and perception of web site quality [PWSQ]) and external (others' trust of buyers/sellers [OTBS] and third-party recognition [TPR]), would affect an individual's trust in C2C e-commerce. However, contrary to studies of other types of e-commerce, support was found only for PWSQ and TPR; we therefore discuss possible reasons for this contradiction. We suggest ways to help e-commerce site developers provide a trustworthy atmosphere and identify trustworthy consumers.
186.
Data distribution management (DDM) plays a key role in traffic control for large-scale distributed simulations. In recent years, several solutions have been devised to make DDM more efficient and adaptive to different traffic conditions; examples include the region-based, fixed grid-based, and dynamic grid-based (DGB) schemes, as well as the grid-filtered region-based and agent-based DDM schemes. However, less effort has been directed toward improving the processing performance of DDM techniques. This paper presents a novel DDM scheme called the adaptive dynamic grid-based (ADGB) scheme, which optimizes DDM time through an analysis of matching performance. ADGB uses an advertising scheme in which information about the target cell involved in matching subscribers to publishers is known in advance. An important concept, the distribution rate (DR), is devised: the DR represents the relative processing load and communication load generated at each federate. The DR and the matching performance are used within ADGB to select, throughout the simulation, the advertisement scheme that achieves the maximum gain with acceptable network traffic overhead. Assuming the same worst-case propagation delays, and when the matching probability is high, performance estimation shows that ADGB can achieve a maximum efficiency gain of 66% over the DGB scheme. The novelty of the ADGB scheme is its focus on improving processing performance, an important (and often forgotten) goal of DDM strategies.
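The sketch below illustrates only the grid-based matching that such DDM schemes build on, not ADGB's advertisement selection or distribution-rate computation; the regions, cell size, and federate names are hypothetical:

```python
# Grid-based DDM matching sketch: regions are mapped to fixed grid cells, and a
# publisher matches a subscriber when their regions share at least one cell.
from collections import defaultdict

CELL = 10.0  # grid-cell size in routing-space units (illustrative)

def cells(region):
    """Grid cells overlapped by an axis-aligned region (x0, x1, y0, y1)."""
    x0, x1, y0, y1 = region
    for i in range(int(x0 // CELL), int(x1 // CELL) + 1):
        for j in range(int(y0 // CELL), int(y1 // CELL) + 1):
            yield (i, j)

subscriptions = {"fed1": (0, 15, 0, 15), "fed2": (40, 55, 40, 55)}
publication = (12, 20, 5, 12)            # one publisher's update region

index = defaultdict(set)                 # cell -> federates subscribed there
for fed, region in subscriptions.items():
    for c in cells(region):
        index[c].add(fed)

matched = set()
for c in cells(publication):             # a shared cell means a match
    matched |= index.get(c, set())
print(matched)                           # {'fed1'}
```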
187.
Multicast is a fundamental routing service in wireless mesh networks (WMNs) owing to its many potential applications, such as video conferencing, online games, and webcasts. Researchers have recently proposed using link-quality-based routing metrics to find high-throughput paths for multicast routing. However, the performance of such link-quality-based multicast routing is still limited by severe unfairness. Two major artifacts exist in WMNs: fading, which leads to low-quality links, and interference, which leads to unfair channel allocation in the 802.11 MAC protocol. These artifacts cause the multicast application to behave unfairly with respect to the performance achieved by the multicast receivers.
188.
Effective task scheduling is essential for obtaining high performance in heterogeneous distributed computing systems (HeDCSs). However, finding an effective task schedule in HeDCSs requires the consideration of both the heterogeneity of processors and high interprocessor communication overhead, which results from non-trivial data movement between tasks scheduled on different processors. In this paper, we present a new high-performance scheduling algorithm, called the longest dynamic critical path (LDCP) algorithm, for HeDCSs with a bounded number of processors. The LDCP algorithm is a list-based scheduling algorithm that uses a new attribute to efficiently select tasks for scheduling in HeDCSs. The efficient selection of tasks enables the LDCP algorithm to generate high-quality task schedules in a heterogeneous computing environment. The performance of the LDCP algorithm is compared to two of the best existing scheduling algorithms for HeDCSs: the HEFT and DLS algorithms. The comparison study shows that the LDCP algorithm outperforms the HEFT and DLS algorithms in terms of schedule length and speedup. Moreover, the improvement in performance obtained by the LDCP algorithm over the HEFT and DLS algorithms increases as the inter-task communication cost increases. Therefore, the LDCP algorithm provides a practical solution for scheduling parallel applications with high communication costs in HeDCSs.  相似文献   
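For orientation, here is a generic list-scheduling skeleton in the spirit of HEFT (tasks prioritized by upward rank, each assigned to the processor giving the earliest finish time, without insertion-based slot search); LDCP's own selection attribute is defined in the paper and not reproduced here. The DAG and costs are illustrative:

```python
# List scheduling on heterogeneous processors, sketched after HEFT.
# cost[t][p]: execution time of task t on processor p.
# comm[(t, s)]: transfer time if t and its successor s run on different processors.
def list_schedule(tasks, succ, cost, comm):
    nproc = len(next(iter(cost.values())))
    rank = {}
    def upward(t):                       # classic upward rank with mean exec cost
        if t not in rank:
            w = sum(cost[t]) / nproc
            rank[t] = w + max((comm.get((t, s), 0) + upward(s)
                               for s in succ.get(t, [])), default=0)
        return rank[t]
    order = sorted(tasks, key=upward, reverse=True)   # respects precedence in a DAG

    ready = [0.0] * nproc                # processor available times
    finish, where = {}, {}
    for t in order:
        preds = [d for d in tasks if t in succ.get(d, [])]
        best = None
        for p in range(nproc):           # earliest-finish-time processor selection
            start = max([ready[p]] +
                        [finish[d] + (comm.get((d, t), 0) if where[d] != p else 0)
                         for d in preds])
            f = start + cost[t][p]
            if best is None or f < best[0]:
                best = (f, p)
        finish[t], where[t] = best
        ready[best[1]] = best[0]
    return finish, where

tasks = ["a", "b", "c"]
succ = {"a": ["b", "c"]}
cost = {"a": [2, 3], "b": [4, 2], "c": [3, 3]}
comm = {("a", "b"): 1, ("a", "c"): 2}
print(list_schedule(tasks, succ, cost, comm))
```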
189.
The lack of proper support for multicast services in the Internet has hindered the widespread use of applications that rely on group communication, such as mobile software agents. Although these applications do not require high bandwidth or generate heavy traffic, they need to cooperate in a scalable, fair, and decentralized way. This paper presents GMAC, an overlay network that implements all multicast functionality, including membership management and packet forwarding, in the end systems. GMAC introduces a new approach to providing multicast services for mobile agent platforms in a decentralized way: group members cooperate fairly and minimize protocol overhead, thus achieving great scalability. Simulations comparing GMAC with other approaches, in aspects such as end-to-end group propagation delay, group latency, group bandwidth, protocol overhead, resource utilization, and failure recovery, show that GMAC is a scalable and robust solution for providing multicast services in a decentralized way to mobile software agent platforms with requirements similar to those of MoviLog.
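GMAC's actual membership and forwarding protocol is specified in the paper; the sketch below only illustrates the underlying end-system multicast idea, in which each member forwards a packet to its overlay-tree neighbors except the one the packet arrived from, so no router support is needed:

```python
# End-system multicast sketch: members form an overlay tree and flood packets
# along it; the acyclic tree guarantees each member receives the packet once.
class Member:
    def __init__(self, name):
        self.name, self.neighbors = name, []

    def receive(self, packet, came_from=None):
        print(f"{self.name} got {packet!r}")
        for n in self.neighbors:
            if n is not came_from:           # never echo back along the arrival edge
                n.receive(packet, came_from=self)

def link(a, b):
    a.neighbors.append(b)
    b.neighbors.append(a)

root, m1, m2, m3 = (Member(x) for x in "root m1 m2 m3".split())
link(root, m1); link(root, m2); link(m1, m3)  # overlay tree among end systems
root.receive("hello group")                   # delivered once to every member
```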
190.
The goal of service differentiation is to provide different service quality levels that meet changing system configurations and resource availability and that satisfy the different requirements and expectations of applications and users. In this paper, we investigate the problem of quantitative service differentiation on cluster-based delay-sensitive servers. The goal is to support system-wide service quality optimization with respect to resource allocation while providing proportional fairness to clients. We first propose and promote a square-root proportional differentiation model. Interestingly, both popular delay factors, queueing delay and slowdown, are reciprocally proportional to the allocated resource usage. We formulate quantitative service differentiation as a generalized resource allocation optimization that minimizes system delay, defined as the sum of the weighted delays of client requests. We prove that the optimization-based resource allocation scheme essentially provides square-root proportional service differentiation to clients. We then study service differentiation provisioning in terms of an important relative performance metric, slowdown. We give a closed-form expression for the expected slowdown of a popular heavy-tailed workload model with respect to resource allocation on a server cluster. We design a two-tier resource management framework that integrates a dispatcher-based node partitioning scheme and a server-based adaptive process allocation scheme. We evaluate the resource allocation framework with different models via extensive simulations. Results show that the square-root proportional model provides service differentiation at a minimum cost in system delay, and that the two-tier resource allocation framework can provide fine-grained and predictable service differentiation on cluster-based servers.
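A small numeric check of the square-root property, under the abstract's stated assumption that per-class delay is reciprocally proportional to the allocated resource (d_i = 1/r_i): minimizing the sum of weighted delays subject to a fixed total resource gives, by Lagrange multipliers, allocations proportional to the square roots of the weights, so delays differ by square-root ratios. The weights below are illustrative:

```python
# Minimize sum_i w_i / r_i  subject to  sum_i r_i = R.
# Setting d/dr_i (w_i/r_i + lam*r_i) = 0 gives r_i proportional to sqrt(w_i),
# hence d_i = 1/r_i proportional to 1/sqrt(w_i): square-root differentiation.
import numpy as np

w = np.array([1.0, 4.0, 9.0])            # class weights (illustrative)
R = 1.0                                   # total resource
r = np.sqrt(w) / np.sqrt(w).sum() * R     # optimal closed-form allocation
d = 1.0 / r                               # resulting per-class delays

print("allocations:", r)
print("delay ratios:", d / d[0])          # 1 : 1/2 : 1/3, i.e. 1/sqrt(w) ratios
print("total weighted delay:", (w / r).sum())

# Sanity check: any zero-sum perturbation of the allocation increases the objective.
eps = np.array([0.01, -0.005, -0.005])
assert (w / (r + eps)).sum() > (w / r).sum()
```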