9,554 query results found; search time 46 ms
91.
We developed and tested a model of consumer-to-consumer (C2C) e-commerce trust. We expected that two classes of influence, internal (natural propensity to trust [NPT] and perception of web site quality [PWSQ]) and external (others' trust of buyers/sellers [OTBS] and third-party recognition [TPR]), would affect an individual's trust in C2C e-commerce. However, contrary to studies of other types of e-commerce, support was found only for PWSQ and TPR; we therefore discuss possible reasons for this contradiction. We also suggest ways to help e-commerce site developers provide a trustworthy atmosphere and identify trustworthy consumers.
92.
Data distribution management (DDM) plays a key role in traffic control for large-scale distributed simulations. In recent years, several solutions have been devised to make DDM more efficient and adaptive to different traffic conditions. Examples include the region-based, fixed grid-based, and dynamic grid-based (DGB) schemes, as well as grid-filtered region-based and agent-based DDM schemes. However, less effort has been directed toward improving the processing performance of DDM techniques. This paper presents a novel DDM scheme, the adaptive dynamic grid-based (ADGB) scheme, which optimizes DDM time through the analysis of matching performance. ADGB uses an advertising scheme in which information about the target cell involved in matching subscribers to publishers is known in advance. A key concept, the distribution rate (DR), is devised: the DR represents the relative processing load and communication load generated at each federate. Throughout the simulation, the DR and the matching performance are used to select the advertisement scheme that achieves the maximum gain with acceptable network traffic overhead. Assuming identical worst-case propagation delays, when the matching probability is high the performance estimates show that ADGB can achieve a maximum efficiency gain of 66% over the DGB scheme. The novelty of the ADGB scheme is its focus on processing performance, an important and often overlooked goal of DDM strategies.
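The grid-based matching at the heart of such DDM schemes can be illustrated with a small sketch (the function names below are illustrative, not from the paper): each region is mapped to the set of grid cells it overlaps, and a publisher matches a subscriber when their cell sets intersect. Coarser cells mean fewer cells to compare but more false-positive matches, which is the trade-off adaptive schemes tune.

```python
def cells_for_region(region, cell_size):
    """Map an axis-aligned rectangular region ((x0, y0), (x1, y1))
    to the set of grid cells it overlaps."""
    (x0, y0), (x1, y1) = region
    return {(cx, cy)
            for cx in range(int(x0 // cell_size), int(x1 // cell_size) + 1)
            for cy in range(int(y0 // cell_size), int(y1 // cell_size) + 1)}

def ddm_match(pub_region, sub_region, cell_size):
    """Grid-based DDM matching: regions match when their cell sets intersect."""
    return bool(cells_for_region(pub_region, cell_size) &
                cells_for_region(sub_region, cell_size))
```

Note that cell-set intersection can report a match for regions that merely share a cell without actually overlapping; exact region-based schemes avoid this at higher matching cost.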
93.
Effective task scheduling is essential for obtaining high performance in heterogeneous distributed computing systems (HeDCSs). However, finding an effective task schedule in HeDCSs requires consideration of both processor heterogeneity and the high interprocessor communication overhead that results from non-trivial data movement between tasks scheduled on different processors. In this paper, we present a new high-performance scheduling algorithm, the longest dynamic critical path (LDCP) algorithm, for HeDCSs with a bounded number of processors. The LDCP algorithm is a list-based scheduling algorithm that uses a new attribute to efficiently select tasks for scheduling in HeDCSs. This efficient task selection enables the LDCP algorithm to generate high-quality schedules in a heterogeneous computing environment. The performance of the LDCP algorithm is compared to that of two of the best existing scheduling algorithms for HeDCSs, HEFT and DLS. The comparison shows that LDCP outperforms HEFT and DLS in terms of schedule length and speedup, and that its advantage over them grows as the inter-task communication cost increases. The LDCP algorithm therefore provides a practical solution for scheduling parallel applications with high communication costs in HeDCSs.
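The list-scheduling skeleton shared by LDCP, HEFT, and DLS can be sketched as follows: rank the tasks by a priority attribute, then greedily place each task on the processor that gives it the earliest finish time. The sketch below uses a HEFT-style average-cost upward rank purely as a stand-in priority; the paper's LDCP attribute, which this sketch does not reproduce, would replace that function. All names here are illustrative.

```python
def upward_rank(task, succs, avg_cost, comm, memo):
    """Priority of a task: its average cost plus the costliest path to an exit."""
    if task not in memo:
        memo[task] = avg_cost[task] + max(
            (comm.get((task, s), 0) + upward_rank(s, succs, avg_cost, comm, memo)
             for s in succs.get(task, [])),
            default=0,
        )
    return memo[task]

def list_schedule(succs, cost, comm):
    """succs: DAG as {task: [successors]}; cost[task] = [time on proc 0, time on proc 1];
    comm[(u, v)]: transfer cost when u and v run on different processors.
    Returns {task: (proc, start, finish)}."""
    avg = {t: sum(c) / len(c) for t, c in cost.items()}
    memo = {}
    order = sorted(cost, key=lambda t: -upward_rank(t, succs, avg, comm, memo))
    preds = {t: [u for u, ss in succs.items() if t in ss] for t in cost}
    proc_free = [0.0, 0.0]
    sched = {}
    for t in order:
        best = None
        for p in range(2):
            # Data from a predecessor on another processor arrives after a comm delay.
            ready = max((sched[u][2] + (comm.get((u, t), 0) if sched[u][0] != p else 0)
                         for u in preds[t]), default=0.0)
            start = max(ready, proc_free[p])
            finish = start + cost[t][p]
            if best is None or finish < best[2]:
                best = (p, start, finish)
        sched[t] = best
        proc_free[best[0]] = best[2]
    return sched
```

The greedy earliest-finish placement is what makes heterogeneity matter: a task may prefer a slower-starting processor on which it executes faster.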
94.
The goal of service differentiation is to provide different service quality levels to meet changing system configuration and resource availability and to satisfy the differing requirements and expectations of applications and users. In this paper, we investigate the problem of quantitative service differentiation on cluster-based delay-sensitive servers. The goal is to support system-wide service quality optimization with respect to resource allocation while providing proportional fairness to clients. We first propose and promote a square-root proportional differentiation model. Notably, both popular delay factors, queueing delay and slowdown, are inversely proportional to the allocated resource usage. We formulate quantitative service differentiation as a generalized resource allocation optimization that minimizes system delay, defined as the sum of the weighted delays of client requests, and prove that the optimization-based resource allocation scheme essentially provides square-root proportional service differentiation to clients. We then study service differentiation provisioning in terms of an important relative performance metric, slowdown, giving a closed-form expression for the expected slowdown of a popular heavy-tailed workload model with respect to resource allocation on a server cluster. We design a two-tier resource management framework that integrates a dispatcher-based node partitioning scheme with a server-based adaptive process allocation scheme, and we evaluate the framework with different models via extensive simulations. Results show that the square-root proportional model provides service differentiation at a minimum cost in system delay, and that the two-tier framework can provide fine-grained and predictable service differentiation on cluster-based servers.
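The square-root rule can be motivated in a few lines: if each class's delay is inversely proportional to its allocated share r_i, then minimizing the weighted system delay Σ w_i / r_i subject to Σ r_i = R gives, by Lagrange multipliers, r_i ∝ √w_i, so the resulting delays differentiate in the ratio of square roots of the weights. A toy sketch of that allocation (the function name is illustrative, not the paper's implementation):

```python
from math import sqrt

def sqrt_proportional_shares(weights, total):
    """Split `total` capacity so that sum(w_i / r_i) is minimized:
    the optimum assigns r_i proportional to sqrt(w_i)."""
    norm = sum(sqrt(w) for w in weights)
    return [total * sqrt(w) / norm for w in weights]
```

With weights 1 and 4, the heavier class gets twice (not four times) the share, and its delay is twice lower, i.e., square-root proportional differentiation.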
95.
This paper describes the parallel simulation of sediment dynamics in shallow water. By using a Lagrangian model, the problem is transformed into one in which a large number of independent particles must be tracked, yielding a technique that can be parallelised with high efficiency. We have developed a sediment transport model using three different sediment suspension methods. The first method uses a modified mean for the Poisson distribution function to determine the expected number of suspended particles in each grid cell of the domain across all available processors. The second method determines the number of particles to suspend, with the aid of the Poisson distribution function, only in those grid cells assigned to the local processor. The third method uses a synchronised pseudo-random-number generator to generate identical numbers of suspended particles in all valid grid cells on every processor. Parallel simulation experiments are performed to investigate the efficiency of these three methods, and the parallel performance of the implementations is analysed. We conclude that the second method is best on distributed computing systems (e.g., a Beowulf cluster), whereas the third maintains the best load distribution.
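The second method, drawing Poisson-distributed suspension counts only for locally owned cells, can be sketched as below. This is an illustrative reconstruction, not the paper's code: each processor keeps its own RNG stream and samples counts only for its own cells, so no communication is needed for the suspension step (the small-rate Poisson sampler here is Knuth's multiplicative algorithm).

```python
import math
import random

def suspend_counts(local_cells, rates, seed=0):
    """Draw a Poisson-distributed number of particles to suspend for each
    grid cell owned by this processor. rates[cell] is the Poisson mean."""
    rng = random.Random(seed)
    counts = {}
    for cell in local_cells:
        # Knuth's multiplicative Poisson sampler (adequate for small rates).
        limit, k, p = math.exp(-rates[cell]), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                break
            k += 1
        counts[cell] = k
    return counts
```

The third method would instead seed every processor identically and sample all valid cells, trading redundant sampling for a perfectly synchronised global state.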
96.
Standardisation initiatives (ISO and IEC) address the problem of managing heterogeneous information scattered across organisations by formalising the knowledge related to product technical data. Since the product is the central object on which, along its lifecycle, all enterprise systems, whether inside a single enterprise or across cooperating networked enterprises, hold a specific view, it may be considered active insofar as it participates in decision making by providing knowledge about itself. This paper proposes a novel approach postulating that the product, represented by its technical data, may be considered interoperable per se with the many applications involved in manufacturing enterprises, insofar as it embeds knowledge about itself by storing all its technical data, provided these conform to a common model. The aim of this approach is to formalise all technical data and concepts contributing to the definition of a Product Ontology, embedded into the product itself, making it interoperable with applications and minimising loss of semantics.
97.
An improved method for thin-cloud removal from remote-sensing images based on ENVI secondary development   (Total citations: 2; self-citations: 0; citations by others: 2)
The homomorphic filtering cloud-removal algorithm in common use in recent years can, owing to the limitations of its filter, only remove cloud in low-frequency regions and cannot remove cloud in high-frequency regions. Building on an analysis of the traditional homomorphic filtering algorithm, spatial-domain filtering is therefore introduced: a median filter is applied to the image in order to remove cloud in high-frequency regions. The algorithm is then implemented in the IDL language on the ENVI remote-sensing image-processing platform as a secondary development of ENVI. Analysis of the experimental results shows that the method is effective and facilitates subsequent applications of the remote-sensing data.
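The spatial-domain step can be sketched in a few lines of pure Python: a median filter replaces each interior pixel with the median of its neighbourhood, suppressing small bright high-frequency structures such as residual thin cloud. This is an illustrative stand-in only; the paper's actual implementation is written in IDL on the ENVI platform.

```python
def median_filter(img, k=3):
    """k x k median filter over a 2-D list-of-lists grayscale image.
    Border pixels (within k//2 of the edge) are left unchanged."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            window = sorted(img[y + dy][x + dx]
                            for dy in range(-r, r + 1)
                            for dx in range(-r, r + 1))
            out[y][x] = window[len(window) // 2]
    return out
```

In practice a library routine (e.g. a SciPy median filter) would be used instead of this nested loop; the point is that an isolated bright pixel cannot survive when it is outnumbered in its own window.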
98.
Redesigning Purdue's Online Writing Lab (OWL) presented an opportunity for collaboration among Writing Center and Professional Writing Program members. While the article briefly describes the OWL redesign process, the argument focuses on collaboration and presents a model for sustainable intraprogram collaboration. Following Hawhee, usability research is defined as “invention-in-the-middle,” which offers a model for understanding the research process as part of the infrastructure of new media instruction as described by DeVoss, Cushman, and Grabill. This article offers four stakeholder perspectives on the process of participatory technology design: those of writing center administrators, graduate students, technical writing practitioners, and writing program graduate faculty members. The model asserted here presents a dynamic understanding of expertise and of the fluidity of participants' roles. Collaborative usability research, seen as invention-in-the-middle, contributes both to the long-term sustainability of technological artifacts and to the discursive interactions among the stakeholders whose work supports those artifacts.
99.
The traditional modes of knowledge production and circulation in academia are (slowly but surely) shifting from the hierarchical, top-down systems of print to the distributed, bottom-up systems of the Web. It is in the context of these shifts and the rapid development of Web 2.0 tools and methods that we argue for a concomitant shift in the predominant practices of graduate education in rhetoric, particularly for students of digital rhetoric. In this article, we describe the development of a research network that combines the power of digital networking with the collaborative facilitation offered by communities of practice, and we consider how research networks can be grown and sustained as part of the graduate education of technorhetoricians.
100.
This paper consists of two parts. In the first, more theoretical part, two Wiener systems driven by the same Gaussian noise excitation are considered. For each system, the best linear approximation (BLA) of the output (in the mean-square sense) is calculated, and the residuals, defined as the difference between the actual output and the linearly simulated output, are considered for both outputs. The paper focuses on the study of the linear relations that exist between these residuals. Explicit expressions are given as a function of the dynamic blocks of both systems, generalizing earlier results obtained by Brillinger [Brillinger, D. R. (1977). The identification of a particular nonlinear time series system. Biometrika, 64(3), 509-515] and Billings and Fakhouri [Billings, S. A., & Fakhouri, S. Y. (1982). Identification of systems containing linear dynamic and static nonlinear elements. Automatica, 18(1), 15-26]. Compared to these earlier results, a much wider class of static nonlinear blocks is allowed, and the efficiency of the estimate of the linear approximation between the residuals is considerably improved. In the second, more practical part of the paper, this new theoretical result is used to generate initial estimates for the transfer functions of the dynamic blocks of a Wiener-Hammerstein system. The method is illustrated on experimental data.
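For a static nonlinearity f under zero-mean Gaussian excitation, the gain of its best linear approximation reduces, by Bussgang's theorem, to E[x f(x)] / E[x²], which is easy to check numerically. A minimal Monte-Carlo sketch of that identity (the function name is illustrative; the paper's BLA concerns full dynamic Wiener systems, of which this static case is only the simplest instance):

```python
import random

def bla_gain(f, n=100_000, seed=1):
    """Estimate the best-linear-approximation gain of a static nonlinearity f
    under standard Gaussian input, via Bussgang: E[x f(x)] / E[x^2]."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    num = sum(x * f(x) for x in xs)
    den = sum(x * x for x in xs)
    return num / den
```

For f(x) = x³ the gain tends to E[x⁴] / E[x²] = 3, illustrating that the BLA of a nonlinear block is not simply its small-signal slope.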

Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)   京ICP备09084417号