  Paid full text: 242
  Free: 26
  Free (domestic): 9
Electrical engineering: 31
General: 6
Chemical industry: 10
Metalworking: 2
Machinery and instruments: 8
Building science: 35
Mining engineering: 7
Energy and power: 13
Light industry: 8
Hydraulic engineering: 8
Petroleum and natural gas: 9
Radio and electronics: 8
General industrial technology: 17
Metallurgy: 19
Nuclear technology: 5
Automation technology: 91
  2024: 1
  2023: 7
  2022: 9
  2021: 9
  2020: 10
  2019: 7
  2018: 10
  2017: 12
  2016: 15
  2015: 11
  2014: 21
  2013: 32
  2012: 22
  2011: 14
  2010: 12
  2009: 10
  2008: 16
  2007: 16
  2006: 6
  2005: 11
  2004: 5
  2003: 5
  2002: 3
  2001: 1
  2000: 2
  1999: 3
  1998: 2
  1997: 1
  1996: 1
  1995: 2
  1988: 1
277 results found (search time: 125 ms)
1.
Load testing of applications is an important and costly activity for software provider companies. Classical solutions are very difficult to set up statically, and their cost is prohibitive in terms of both human and hardware resources. Virtualized cloud computing platforms provide new opportunities for stressing an application's scalability, by providing a large range of flexible and less expensive (pay-per-use model) computation units. On the basis of these advantages, load testing solutions could be provided on demand in the cloud. This paper describes a Benchmark-as-a-Service solution that automatically scales the load injection platform and facilitates its setup according to load profiles. Our approach is based on: (i) virtualization of the benchmarking platform to create self-scaling injectors; (ii) online calibration to characterize the injector's capacity and impact on the benched application; and (iii) a provisioning solution to appropriately scale the load injection platform ahead of time. We also report experiments on a benchmark illustrating the benefits of this system in terms of cost and resource reductions. Copyright © 2013 John Wiley & Sons, Ltd.
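The provisioning step described above amounts to sizing the injector fleet from a calibrated per-injector capacity and starting VMs early enough to absorb their boot latency. The sketch below is not the paper's algorithm, only a minimal illustration of that arithmetic; the function names, the 20% headroom, and all numbers are hypothetical.

```python
import math

def injectors_needed(target_rps: float, injector_capacity_rps: float,
                     headroom: float = 0.2) -> int:
    """Injector VMs required to drive target_rps, with a safety headroom
    so no injector is pushed to its calibrated limit."""
    return math.ceil(target_rps * (1.0 + headroom) / injector_capacity_rps)

def provisioning_plan(load_profile, injector_capacity_rps, lead_time_s, step_s):
    """Shift scaling decisions earlier by the VM start-up lead time so the
    capacity is already running when the load profile reaches each level."""
    lead_steps = math.ceil(lead_time_s / step_s)
    return [
        injectors_needed(
            load_profile[min(i + lead_steps, len(load_profile) - 1)],
            injector_capacity_rps,
        )
        for i in range(len(load_profile))
    ]

# Hypothetical ramp to 5000 req/s, injectors calibrated at 800 req/s each,
# VMs taking 120 s to boot, profile sampled every 60 s.
profile = [500, 1000, 2000, 3500, 5000, 5000, 2000]
print(provisioning_plan(profile, 800.0, 120, 60))
```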
2.
Since computer networks play an important role in distributed computing environments, an application's performance depends heavily on the quality of service provided by the communication networks. To ensure high performance, the characteristics of wide area networks (WANs) must be well understood. This paper presents methodologies to characterize WAN traffic based on real measurements from Bellcore's backbone network, which connects remote sites using dedicated T1 links. The paper also suggests workload models that can be used for wide area network sizing and performance evaluation studies. It is found that the inter-site traffic pattern depends on the time of day and the day of the week. Furthermore, the traffic between two sites is found to be reasonably symmetric, except for those sites designated as back-up sites. The coefficient of variation is used as a measure of traffic burstiness and is found to be about 1.5 during working hours. The methods presented here are easy to use and cost-effective.
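The burstiness measure mentioned here, the coefficient of variation, is simply the ratio of the standard deviation to the mean of the measured traffic; larger values mean burstier traffic. A minimal sketch, with made-up per-second packet counts standing in for the Bellcore measurements:

```python
import statistics

def coefficient_of_variation(samples):
    """CV = standard deviation / mean of the measured traffic;
    larger values indicate burstier traffic."""
    return statistics.pstdev(samples) / statistics.fmean(samples)

# Hypothetical packets-per-second counts on an inter-site T1 link.
counts = [120, 30, 400, 80, 650, 45, 210, 90]
print(f"burstiness (CV) = {coefficient_of_variation(counts):.2f}")
```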
3.
4.
Carl Staelin. Software: Practice and Experience, 2005, 35(11): 1079-1105
lmbench is a powerful and extensible suite of micro-benchmarks that measures a variety of important aspects of system performance. It has a powerful timing harness that manages most of the 'housekeeping' chores associated with benchmarking, making it easy to create new benchmarks that analyze systems or components of specific interest to the user. In many ways lmbench is a Swiss army knife for performance analysis. It includes an extensive suite of micro-benchmarks that give powerful insights into system performance. For those aspects of system or application performance not covered by the suite, it is generally a simple task to create new benchmarks using the timing harness. lmbench is written in ANSI C and uses POSIX interfaces, so it is portable across a wide variety of systems and architectures. It also includes powerful new tools that measure performance under scalable loads to analyze SMP and clustered system performance. Copyright © 2005 John Wiley & Sons, Ltd.
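lmbench itself is written in C; as a language-neutral illustration of what its timing harness automates (warm-up, repetition, and summary statistics), here is a minimal Python sketch. It is not lmbench's API, and the example workload is arbitrary.

```python
import statistics
import time

def time_it(func, *, warmup=3, repeats=11):
    """Minimal timing harness: warm caches first, then repeat the
    measurement and report the minimum and median to damp scheduling noise."""
    for _ in range(warmup):
        func()
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        func()
        samples.append(time.perf_counter() - start)
    return min(samples), statistics.median(samples)

best, typical = time_it(lambda: sum(range(100_000)))
print(f"best {best * 1e6:.1f} us, median {typical * 1e6:.1f} us")
```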
5.
This paper analyzes attack methods on digital image watermarks and the corresponding countermeasures, and proposes a new classification of digital image watermark attacks that complements existing classifications. Watermark benchmarking software provides an evaluation standard for digital watermarking algorithms. Building on references [7,8], the paper introduces Certimark, a new watermark attack benchmark, and analyzes and compares the typical projective and rotation attacks found in Checkmark and Stirmark.
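One of the geometric attacks compared above, small-angle rotation, is easy to reproduce outside any benchmark package. The sketch below is only an illustration of that attack class (not Stirmark's or Checkmark's implementation) and assumes the Pillow imaging library; the file names are hypothetical.

```python
from PIL import Image

def rotation_attack(img: Image.Image, angle_deg: float = 2.0) -> Image.Image:
    """Rotate by a small angle, then centre-crop back to the original size.
    This desynchronises many watermark detectors while leaving the image
    visually almost unchanged."""
    w, h = img.size
    rotated = img.rotate(angle_deg, expand=True)
    rw, rh = rotated.size
    left, top = (rw - w) // 2, (rh - h) // 2
    return rotated.crop((left, top, left + w, top + h))

# attacked = rotation_attack(Image.open("watermarked.png"))
# attacked.save("attacked.png")   # feed this to the detector under test
```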
6.
With the advance of the Belt and Road Initiative, Chinese power enterprises have seized the opportunity to expand into overseas markets, taking a market-oriented, project-based approach to transnational operation and participating in the development, cooperation, and competition of the international power market on a larger scale, across broader fields, and at higher levels. In the global market, however, the internationalization of Chinese power enterprises remains at a relatively low level. As they go international, these enterprises need to innovate in management and technology in light of their own circumstances, advance step by step, absorb advanced foreign technology and management experience, and cultivate a corporate culture. International benchmarking is an important way to improve international competitiveness and a "shortcut" for enterprises to reach the international state of the art.
7.
Attributing authorship of documents with unknown creators has been studied extensively for natural language text such as essays and literature, but less so for non-natural languages such as computer source code. Previous attempts at attributing authorship of source code can be categorised by two attributes: the software features used for the classification, either strings of n tokens/bytes (n-grams) or software metrics; and the classification technique that exploits those features, either information retrieval ranking or machine learning. The results of existing studies, however, are not directly comparable as all use different test beds and evaluation methodologies, making it difficult to assess which approach is superior. This paper summarises all previous techniques to source code authorship attribution, implements feature sets that are motivated by the literature, and applies information retrieval ranking methods or machine classifiers for each approach. Importantly, all approaches are tested on identical collections from varying programming languages and author types. Our conclusions are as follows: (i) ranking and machine classifier approaches are around 90% and 85% accurate, respectively, for a one-in-10 classification problem; (ii) the byte-level n-gram approach is best used with different parameters to those previously published; (iii) neural networks and support vector machines were found to be the most accurate machine classifiers of the eight evaluated; (iv) use of n-gram features in combination with machine classifiers shows promise, but there are scalability problems that still must be overcome; and (v) approaches based on information retrieval techniques are currently more accurate than approaches based on machine learning. Copyright © 2012 John Wiley & Sons, Ltd.
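As a concrete (and deliberately tiny) illustration of the byte-level n-gram plus machine-classifier combination discussed above, the sketch below uses character 3-grams and a linear SVM via scikit-learn. The code snippets and author labels are invented stand-ins; a real evaluation would use full source files and collections like those described in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-in corpus: in practice each entry is a complete source file.
train_code = [
    "for (int i = 0; i < n; ++i) total += a[i];",
    "while (n--) { sum += *p++; }",
    "result = [x * x for x in values if x > 0]",
    "out = list(map(lambda v: v * v, values))",
]
train_authors = ["alice", "bob", "carol", "carol"]

# Character-level 3-grams approximate the byte-level n-gram features;
# a linear SVM is one of the stronger classifiers reported in such studies.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 3), lowercase=False),
    LinearSVC(),
)
model.fit(train_code, train_authors)

print(model.predict(["squares = [v * v for v in nums if v]"]))
```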
8.
Key computational kernels must run near their peak efficiency for most high-performance computing (HPC) applications. Getting this level of efficiency has always required extensive tuning of the kernel on a particular platform of interest. The success or failure of an optimization is usually measured by invoking a timer. Understanding how to build reliable and context-sensitive timers is one of the most neglected areas in HPC, and this results in a host of HPC software that looks good when reported in the papers, but delivers only a fraction of the reported performance when used by actual HPC applications. In this paper, we motivate the importance of timer design and then discuss the techniques and methodologies we have developed in order to accurately time HPC kernel routines for our well-known empirical tuning framework, ATLAS. Copyright © 2008 John Wiley & Sons, Ltd.
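A recurring timer-design pitfall of the kind this paper targets is measuring intervals close to the clock's resolution. The sketch below is a hedged Python analogue rather than ATLAS's actual C machinery: it estimates the clock resolution and grows the repetition count until one timed batch is comfortably above it.

```python
import time

def clock_resolution(clock=time.perf_counter, probes=200):
    """Smallest observed non-zero difference between successive clock reads."""
    best = float("inf")
    for _ in range(probes):
        t0 = clock()
        t1 = clock()
        while t1 == t0:
            t1 = clock()
        best = min(best, t1 - t0)
    return best

def time_per_call(kernel, clock=time.perf_counter, min_ratio=1000):
    """Double the repetition count until one timed batch lasts at least
    min_ratio times the clock resolution, bounding quantisation error."""
    res = clock_resolution(clock)
    reps = 1
    while True:
        t0 = clock()
        for _ in range(reps):
            kernel()
        elapsed = clock() - t0
        if elapsed >= min_ratio * res:
            return reps, elapsed / reps
        reps *= 2

reps, per_call = time_per_call(lambda: sum(range(1000)))
print(f"{reps} reps, {per_call * 1e6:.2f} us per call")
```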
9.
Tablets, smartphones, and wearables have limited resources. Applications on these devices employ a graphical user interface (GUI) for interaction with users. Language runtimes for GUIs employ dynamic memory management using garbage collection (GC). However, GC policies and algorithms are designed for data centers and cloud computing, but they are not necessarily ideal for resource-constrained embedded devices. In this article, we present GUI GC, a JavaFX GUI benchmark, which we use to compare the performance of the four GC policies of the Eclipse OpenJ9 Java runtime on a resource-constrained environment. Overall, our experiments suggest that the default policy Gencon registered significantly lower execution times than its counterparts. The region-based policy, Balanced, did not fully utilize blocking times; thus, using GUI GC, we conducted experiments with explicit GC invocations that measured significant improvements of up to 13.22% when multiple CPUs were available. Furthermore, we created a second version of GUI GC that expands on the number of controllable load-stressing dimensions; we conducted a large number of randomly configured experiments to quantify the performance effect that each knob has. Finally, we analyzed our dataset to derive suitable knob configurations for desired runtime, GC, and hardware stress levels.  相似文献   
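The "explicit GC invocation" idea above — paying collection cost at a known idle point instead of during latency-sensitive GUI work — is specific to OpenJ9 and JavaFX in this paper, but the general pattern can be sketched with Python's gc module. This is only an analogue under that assumption, not the benchmark's actual setup.

```python
import gc
import time

gc.disable()                      # suspend automatic (cyclic) collection

def latency_sensitive_phase():
    """Simulated frame loop; report the worst per-frame time."""
    worst = 0.0
    for _ in range(200):
        t0 = time.perf_counter()
        _ = [{"widget": i} for i in range(2000)]   # stand-in for GUI work
        worst = max(worst, time.perf_counter() - t0)
    return worst

worst_frame = latency_sensitive_phase()

t0 = time.perf_counter()
gc.collect()                      # explicit collection while the UI is idle
print(f"worst frame {worst_frame * 1e3:.2f} ms, "
      f"idle-time collect {(time.perf_counter() - t0) * 1e3:.2f} ms")
```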
10.
Many scientific workflows are data intensive: large volumes of intermediate datasets are generated during their execution. Some valuable intermediate datasets need to be stored for sharing or reuse. Traditionally, they are selectively stored according to the system storage capacity, determined manually. As doing science on clouds has become popular nowadays, more intermediate datasets in scientific cloud workflows can be stored by different storage strategies based on a pay-as-you-go model. In this paper, we build an intermediate data dependency graph (IDG) from the data provenances in scientific workflows. With the IDG, deleted intermediate datasets can be regenerated, and as such we develop a novel algorithm that can find a minimum cost storage strategy for the intermediate datasets in scientific cloud workflow systems. The strategy achieves the best trade-off of computation cost and storage cost by automatically storing the most appropriate intermediate datasets in the cloud storage. This strategy can be utilised on demand as a minimum cost benchmark for all other intermediate dataset storage strategies in the cloud. We utilise Amazon cloud's cost model and apply the algorithm to general random as well as specific astrophysics pulsar searching scientific workflows for evaluation. The results show that benchmarking effectively demonstrates the cost effectiveness over other representative storage strategies.
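Stripped of the dependency-graph machinery, the core trade-off is a comparison between the recurring cost of keeping a dataset in cloud storage and the cost of regenerating it whenever it is needed. The sketch below is a simplified per-dataset version of that comparison, not the paper's IDG-based algorithm, and the prices and usage figures are hypothetical.

```python
def should_store(size_gb: float,
                 storage_price_per_gb_month: float,
                 regeneration_compute_cost: float,
                 uses_per_month: float) -> bool:
    """Keep the dataset only if a month of storage costs no more than
    regenerating it for every use in that month. (The paper's algorithm
    also charges for regenerating deleted ancestor datasets via the IDG;
    this per-dataset check ignores that.)"""
    storage_cost = size_gb * storage_price_per_gb_month
    regeneration_cost = regeneration_compute_cost * uses_per_month
    return storage_cost <= regeneration_cost

# Hypothetical 50 GB intermediate dataset at $0.023 per GB-month that costs
# $4 of compute to regenerate and is reused twice a month.
print(should_store(50, 0.023, 4.0, 2))   # True: storing is cheaper
```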