11.
This paper presents results from an industrial study that applied input space partitioning and semi-automated requirements modeling to large-scale industrial software, specifically financial calculation engines. Calculation engines are used in financial service applications such as banking, mortgage, insurance, and trading to compute complex, multi-conditional formulas that drive high-risk financial decisions. They form the heart of financial applications and can cause severe economic harm if incorrect. Controllability and observability of these calculation engines are low, so robust and sophisticated test methods are needed to ensure the results are valid. However, the industry norm is purely human-based, requirements-driven test design, usually with very little automation. The Federal Home Loan Mortgage Corporation (FHLMC), commonly known as Freddie Mac, concerned that these test design techniques may lead to ineffective and inefficient testing, partnered with a university to apply high quality, sophisticated test design on several ongoing projects. The goal was to determine whether such test design can be cost-effective on this type of critical software. In this study, input space partitioning, along with automation, was applied with the help of several special-purpose tools to validate its effectiveness. Results showed that these techniques were far more effective (finding more software faults) and more efficient (requiring fewer tests and less labor) than the existing practice, and the managers reported that the testing cycle was reduced from five person-days to half a day. This study convinced upper management to begin infusing this approach into other software development projects.
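To make the technique concrete, here is a minimal sketch of input space partitioning in Python: each input characteristic's domain is divided into blocks, and test frames are generated by combining blocks under a coverage criterion. The characteristics, block labels, and criteria shown are illustrative assumptions, not Freddie Mac's actual inputs or tooling.

```python
from itertools import product

# Hypothetical input characteristics for a loan-pricing calculation;
# each characteristic's domain is partitioned into representative blocks.
partitions = {
    "credit_score":  ["<620", "620-740", ">740"],
    "loan_to_value": ["<=80%", ">80%"],
    "occupancy":     ["primary", "secondary", "investment"],
}

def all_combinations(parts):
    """Exhaustive combination coverage: one test frame per cross-product tuple."""
    names = list(parts)
    for values in product(*(parts[n] for n in names)):
        yield dict(zip(names, values))

def each_choice(parts):
    """Each-choice coverage: every block of every characteristic appears
    in at least one test frame."""
    names = list(parts)
    width = max(len(parts[n]) for n in names)
    for i in range(width):
        yield {n: parts[n][i % len(parts[n])] for n in names}

print(len(list(all_combinations(partitions))))  # 18 test frames
print(len(list(each_choice(partitions))))       # 3 test frames
```

Even on three small characteristics, each-choice coverage needs 3 tests versus 18 for full combination coverage, which hints at why partition-based criteria can need far fewer tests than exhaustive enumeration.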
12.
Dimensional scaling approaches are widely used to develop multi-body human models in injury biomechanics research. Given the limited experimental data for any particular anthropometry, a validated model can be scaled to different sizes to reflect the biological variance of the population and used to characterize the human response. This paper compares two scaling approaches at the whole-body level: the conventional mass-based scaling approach, which assumes geometric similarity, and the structure-based approach, which additionally assumes structural similarity by using idealized mechanical models to account for the specific anatomy and expected loading conditions. Given the use of exterior body dimensions and a uniform Young's modulus, the two approaches produced close values of the scaling factors for most body regions, with a 1.5% difference in force scaling factors and a 13.5% difference in moment scaling factors, on average. One exception was the thoracic model, with a 19.3% difference in the deflection scaling factor. Two 6-year-old child models were generated from a baseline adult model as an application example and were evaluated against recent biomechanical data from cadaveric pediatric experiments. The scaled models predicted similar impact responses of the thorax and lower extremity, which were within the experimental corridors, and suggested further consideration of age-specific structural change of the pelvis. Towards improved scaling methods for developing biofidelic human models, this comparative analysis suggests further investigation of interior anatomical geometry and detailed biological material properties associated with the demographic range of the population.
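The conventional mass-based approach reduces to powers of a single length ratio once geometric similarity, uniform density, and a uniform Young's modulus are assumed. The sketch below derives those standard factors; the stature values in the example are rough illustrative figures, not the paper's data.

```python
def geometric_scaling_factors(length_ratio: float) -> dict:
    """Scaling factors under geometric similarity, assuming uniform density
    and a uniform Young's modulus, so stresses are preserved and every
    derived factor is a power of the length ratio."""
    lam = length_ratio
    return {
        "length":       lam,
        "mass":         lam ** 3,  # volume times equal density
        "time":         lam,       # t ~ L * sqrt(rho / E)
        "force":        lam ** 2,  # equal stress over area ~ lam^2
        "moment":       lam ** 3,  # force times lever arm
        "deflection":   lam,       # strains preserved, displacements ~ length
        "acceleration": 1 / lam,   # L / t^2
    }

# e.g. scaling a mid-size adult stature (~1.75 m) to a 6-year-old (~1.18 m);
# these statures are rough illustrative figures, not the paper's data.
for name, value in geometric_scaling_factors(1.18 / 1.75).items():
    print(f"{name:>12}: {value:.3f}")
```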
13.
14.
Cheating is rampant in current gameplay on the Internet, yet it is not as well understood as we might expect. The authors summarize the known methods of cheating and define a taxonomy of online game cheating with respect to the underlying vulnerability, the consequence, and the cheating principal. This taxonomy provides a systematic introduction to the characteristics of cheats in online games and how they can arise. Although cheating in online games is largely due to various security failures, the four traditional aspects of security (confidentiality, integrity, availability, and authenticity) are insufficient to explain it. Instead, fairness becomes a vital additional aspect, and its enforcement provides a convincing perspective for understanding the role of security techniques in developing and operating online games.
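As a rough sketch of how the taxonomy's three dimensions might be encoded, the types below classify a cheat by vulnerability, consequence, and principal; all member names are illustrative placeholders, not the authors' exact categories.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Vulnerability(Enum):       # where the weakness lies (illustrative)
    GAME_DESIGN_FLAW = auto()
    CLIENT_TAMPERING = auto()
    NETWORK_PROTOCOL = auto()
    OPERATOR_INSIDER = auto()

class Consequence(Enum):         # what the cheat achieves (illustrative)
    UNFAIR_ADVANTAGE = auto()
    VIRTUAL_ASSET_THEFT = auto()
    SERVICE_DISRUPTION = auto()

class Principal(Enum):           # who carries out the cheat (illustrative)
    PLAYER = auto()
    OPERATOR = auto()
    THIRD_PARTY = auto()

@dataclass(frozen=True)
class Cheat:
    name: str
    vulnerability: Vulnerability
    consequence: Consequence
    principal: Principal

wallhack = Cheat("wallhack", Vulnerability.CLIENT_TAMPERING,
                 Consequence.UNFAIR_ADVANTAGE, Principal.PLAYER)
```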
15.
Jeff Jones, Natural Computing, 2011, 10(4): 1345-1369
The single-celled organism Physarum polycephalum efficiently constructs and minimises dynamical nutrient transport networks resembling proximity graphs in the Toussaint hierarchy. We present a particle model which collectively approximates the behaviour of Physarum. We demonstrate spontaneous transport network formation and complex network evolution using the model, and show that the model collectively exhibits quasi-physical emergent properties, allowing it to be considered a virtual computing material. This material is used as an unconventional method to approximate spatially represented geometry problems by representing network nodes as nutrient sources. We demonstrate three different methods for the construction, evolution and minimisation of Physarum-like transport networks which approximate Steiner trees, relative neighbourhood graphs, convex hulls and concave hulls. We extend the model to adapt population size in response to nutrient availability and show how network evolution depends on relative node position (specifically inter-node angle), sensor scaling and nutrient concentration. We track network evolution using a real-time method to record transport network topology in response to global differences in nutrient concentration, and show that Steiner nodes are utilised at low nutrient concentrations whereas direct connections to nutrients are favoured when nutrient concentration is high. The results suggest that the foraging and minimising behaviour of Physarum-like transport networks reflects a complex interplay between nutrient concentration, nutrient location, maximising foraging area coverage and minimising transport distance. The properties and behaviour of the synthetic virtual plasmodium may be useful in future physical instances of distributed unconventional computing devices, and may also provide clues to the generation of emergent computational behaviour by Physarum.
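The core of such a particle model is each agent's sense-rotate-move cycle over a shared chemoattractant trail map. The sketch below shows one such step; the sensor offset, sensor angle, rotation angle, and deposit amount are illustrative assumptions, not the paper's exact configuration.

```python
import math
import random

# Illustrative parameters: sensor offset, sensor angle, rotation angle,
# and trail deposit per step (not the paper's exact configuration).
SO, SA, RA = 9.0, math.radians(45), math.radians(45)
DEPOSIT = 5.0

def step(agent, trail):
    """One sense-rotate-move cycle for a single agent.
    agent: dict with keys x, y, heading; trail: 2D grid (list of lists)."""
    h, w = len(trail), len(trail[0])

    def sense(offset_angle):
        sx = int(agent["x"] + SO * math.cos(agent["heading"] + offset_angle)) % w
        sy = int(agent["y"] + SO * math.sin(agent["heading"] + offset_angle)) % h
        return trail[sy][sx]

    front, left, right = sense(0.0), sense(-SA), sense(SA)
    # Rotate toward the strongest chemoattractant trail concentration.
    if front >= left and front >= right:
        pass                                    # keep current heading
    elif left > right:
        agent["heading"] -= RA
    elif right > left:
        agent["heading"] += RA
    else:
        agent["heading"] += random.choice((-RA, RA))

    # Move one unit forward and deposit trail at the new position.
    agent["x"] = (agent["x"] + math.cos(agent["heading"])) % w
    agent["y"] = (agent["y"] + math.sin(agent["heading"])) % h
    trail[int(agent["y"])][int(agent["x"])] += DEPOSIT
```

Iterating this step over a large population, with periodic trail diffusion and decay, is what produces the emergent network formation and minimisation behaviour described above.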
16.
This article presents an experience report comparing eight years of product-related usability testing and evaluation with principles of software process improvement (SPI). In theory, the product and process views are complementary, but studies of industry practice have demonstrated the opposite, so more empirical studies are needed to understand and improve the present situation. We find areas of close agreement as well as areas where our work illuminates new characteristics. Successful SPI has been shown to depend on being combined with a business orientation. Usability and business orientation also have strong connections, although this has not been extensively addressed in SPI publications. One reason may be that usability focuses on product metrics whilst today's SPI focuses mainly on process metrics. Another is that today's SPI is dominated by striving towards a standardized, controllable, and predictable software engineering process, whilst successful usability efforts in organisations are more about creating a creative organisational culture that advocates a useful product throughout the development and product life cycle. We provide a study and discussion that support future work combining usability and product focus with SPI, in particular when these efforts are related to usability process improvement.
17.
Web applications are fast becoming more widespread, larger, more interactive, and more essential to the international use of computers. It is well understood that web applications must be highly dependable, yet as a field we are just beginning to understand how to model and test them. One straightforward technique is to model web applications as finite state machines (FSMs). However, large numbers of input fields, input choices, and the ability to enter values in any order combine to create a state space explosion problem. This paper evaluates a solution that uses constraints on the inputs to reduce the number of transitions, thus compressing the FSM. The paper presents an analysis of the potential savings of the compression technique and reports actual savings from two case studies.
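The arithmetic behind the explosion, and the compression that constraints buy, can be made concrete. The counting sketch below is a simplification of the paper's technique: k independent input fields that may be completed in any order induce an exponential sub-machine, which a single "all fields, any order" constraint annotation collapses to one aggregate transition.

```python
# k independent input fields, each either filled or empty, give an explicit
# FSM with 2^k states (one per subset of filled fields) and k * 2^(k-1)
# transitions; one constraint annotation replaces the whole sub-machine.
def explicit_states(k: int) -> int:
    return 2 ** k

def explicit_transitions(k: int) -> int:
    return k * 2 ** (k - 1)

for k in (3, 5, 10):
    print(f"{k} fields: {explicit_states(k)} states, "
          f"{explicit_transitions(k)} transitions -> 1 constrained transition")
```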
18.
Web software applications have become complex, sophisticated programs based on novel computing technologies. Their most essential characteristic is that they represent a different kind of software deployment: most of the software is never delivered to customers' computers but remains on servers, allowing customers to run the software across the web. Although powerful, this deployment model brings new challenges to developers and testers. Checking static HTML links is no longer sufficient; web applications must be evaluated as complex software products. This paper focuses on three aspects of web applications that are unique to this type of deployment: (1) an extremely loose form of coupling that features distributed integration, (2) the ability of users to directly change the potential flow of execution, and (3) the dynamic creation of HTML forms. Taken together, these aspects allow the potential control flow to vary with each execution; thus the possible control flows cannot be determined statically, which prohibits several standard analysis techniques that are fundamental to many software engineering activities. This paper presents a new way to model web applications, based on software couplings that are new to web applications, dynamic flow of control, distributed integration, and partially dynamic web application development. The model is built on the notion of atomic sections, which allow analysis tools to construct the analog of a control flow graph for web applications. The atomic section model has numerous uses; this paper applies it to the problem of testing web applications.
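A sketch of how atomic sections compose: each atomic section is an HTML fragment with an all-or-nothing delivery property, and a dynamically generated page is modeled as a composition of sections under sequence, selection, and iteration operators. The class names and example page below are illustrative, not the paper's notation.

```python
from dataclasses import dataclass
from typing import List, Union

# An atomic section is an HTML fragment with an all-or-nothing delivery
# property: if any part of it is sent to the client, all of it is.
@dataclass
class AtomicSection:
    name: str
    html: str                 # fixed structure; may carry data placeholders

@dataclass
class Sequence:               # parts delivered one after another
    parts: List["Node"]

@dataclass
class Selection:              # the server emits exactly one branch
    branches: List["Node"]

@dataclass
class Iteration:              # the body is emitted zero or more times
    body: "Node"

Node = Union[AtomicSection, Sequence, Selection, Iteration]

# A search-results page: header, then either an error notice or repeated
# result rows, then a footer.
page = Sequence([
    AtomicSection("header", "<html><body><h1>Results</h1><table>"),
    Selection([
        AtomicSection("empty", "<tr><td>No matches.</td></tr>"),
        Iteration(AtomicSection("row", "<tr><td>{item}</td></tr>")),
    ]),
    AtomicSection("footer", "</table></body></html>"),
])
```

Because each dynamically generated page is a composition of statically known fragments, an analysis tool can walk such expressions to build the analog of a control flow graph even though the concrete pages vary per execution.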
19.
We demonstrate that certain large-clique graph triangulations can be useful for reducing computational requirements when making queries on mixed stochastic/deterministic graphical models. This runs counter to the conventional wisdom that triangulations which minimize clique size are always most desirable for computing queries on graphical models. Many of these large-clique triangulations are non-minimal and are thus unattainable via the popular elimination algorithm. We introduce ancestral pairs as the basis for novel triangulation heuristics and prove that, when searching for state-space-optimal triangulations in such graphs, no more than the addition of edges between ancestral pairs needs to be considered. Empirical results on random and real-world graphs are given. We also present an algorithm, with a correctness proof, for determining whether a triangulation can be obtained via elimination, and we show that the decision problem associated with finding optimal state space triangulations in this mixed setting is NP-complete.
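For context, triangulation by vertex elimination works as sketched below: eliminating a vertex connects its remaining neighbors into a clique, and the edges added this way are the fill-in. The paper's observation is that some useful large-clique triangulations cannot be produced by any elimination ordering; this sketch is an illustrative implementation of elimination itself, not the paper's algorithm for deciding elimination-obtainability.

```python
def eliminate(adj: dict, order: list) -> set:
    """Triangulate by eliminating vertices in the given order.
    adj: {vertex: set of neighbors}; returns the set of fill-in edges."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    fill = set()
    for v in order:
        nbrs = list(adj[v])
        # Connect all remaining neighbors of v into a clique.
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    fill.add(frozenset((a, b)))
        for n in nbrs:                            # remove v from the graph
            adj[n].discard(v)
        del adj[v]
    return fill

# 4-cycle a-b-c-d: eliminating 'a' first adds the chord b-d.
cycle = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
print(eliminate(cycle, ["a", "b", "c", "d"]))  # {frozenset({'b', 'd'})}
```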
20.
“Global Interoperability Using Semantics, Standards, Science and Technology” is a concept predicated on the assumption that semantic integration, the frameworks and standards that support information exchange, and advances in science and technology can together enable information-systems interoperability for many diverse users. This paper recommends technologies and approaches for enabling interoperability across a wide spectrum of political, geographical, and organizational levels, e.g. coalition, federal, state, tribal, regional, non-government, and private. These recommendations represent steps toward the goal of the Semantic Web, where computers understand information on web sites through knowledge representations, agents, and ontologies.