991.
Operational risk is commonly analyzed in terms of the distribution of aggregate yearly losses. Risk measures can then be computed as statistics of this distribution that focus on the region of extreme losses. Assuming independence among the operational risk events and between the likelihood that they occur and their magnitude, separate models are made for the frequency and for the severity of the losses. These are then combined to estimate the distribution of aggregate losses. While the detailed form of the frequency distribution does not significantly affect the risk analysis, the choice of model for the severity often has a significant impact on operational risk measures. For heavy-tailed distributions these measures are dominated by extreme losses, whose probability cannot be reliably extrapolated from the available data. With limited empirical evidence, it is difficult to distinguish among alternative models that produce very different values of the risk measures. Furthermore, the estimates obtained can be unstable and overly sensitive to the presence or absence of single extreme events. Setting a bound on the maximum amount that can be lost in a single event reduces the dependence on the distributional assumptions and improves the robustness and stability of the risk measures, while preserving their sensitivity to changes in the risk profile. This bound should be determined by expert assessment on the basis of economic arguments and validated by the regulator, so that it can be used as a control parameter in the risk analysis.
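The frequency/severity setup described above can be sketched with a small Monte Carlo simulation. The Python sketch below is illustrative only: the Poisson frequency, lognormal severity, and all parameter values are assumptions, not taken from the paper. It shows how capping the single-event loss stabilizes a tail risk measure such as the 99.9% VaR of aggregate yearly losses.

```python
import numpy as np

def aggregate_loss_var(lam, severity, cap=None, n_years=20_000, q=0.999, seed=0):
    """Estimate the q-quantile (VaR) of yearly aggregate losses by simulation.

    Frequency and severity are modeled separately and assumed independent:
    Poisson(lam) events per year, each with a loss drawn from `severity`.
    If `cap` is set, every single-event loss is bounded by it.
    """
    rng = np.random.default_rng(seed)
    counts = rng.poisson(lam, size=n_years)
    totals = np.empty(n_years)
    for i, n in enumerate(counts):
        losses = severity(rng, n)
        if cap is not None:
            losses = np.minimum(losses, cap)  # expert-set single-event bound
        totals[i] = losses.sum()
    return np.quantile(totals, q)

# Heavy-tailed severity model (assumed for the example): lognormal.
lognormal = lambda rng, n: rng.lognormal(mean=10.0, sigma=2.5, size=n)

var_uncapped = aggregate_loss_var(5.0, lognormal)
var_capped = aggregate_loss_var(5.0, lognormal, cap=5e6)
```

Because the cap truncates exactly the extreme losses that dominate the tail, `var_capped` never exceeds `var_uncapped` on the same simulated scenarios, and it reacts far less to the presence or absence of a single extreme simulated event.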
992.
In this paper, we present a heterogeneous parallel solver based on a high-frequency single-level Fast Multipole Method (FMM) for the Helmholtz equation applied to acoustic scattering. The developed solution uses multiple GPUs to tackle the compute-bound steps of the FMM (aggregation, disaggregation, and near interactions), while the CPU handles the memory-bound step (translation) using OpenMP. The proposed solver's performance is measured on a workstation with two GPUs (NVIDIA GTX 480) and compared with that of a distributed-memory solver run on a cluster of 32 nodes (HP BL465c) with an InfiniBand network. Some energy-efficiency results are also presented in this work.
993.
Two key aspects of the Knowledge Society are the interconnection between the actors involved in decision-making processes and the importance of the human factor, particularly the citizen's continuous learning and education. This paper presents a new module devoted to knowledge extraction and diffusion that has been incorporated into a previously developed Internet-based decision-making tool for the multicriteria selection of a discrete number of alternatives (PRIOR-Web). Quantitative and qualitative procedures using data- and text-mining methods have been employed in the extraction of knowledge, and graphical visualisation tools have been incorporated in the diffusion stage of the suggested methodological approach for decision making in the Knowledge Society. The resulting collaborative platform is being used as the methodological support for the cognitive democracy known as e-cognocracy.
994.
Currently, a large number of hotel Web sites develop their own seals of quality based on customer feedback. As a result, a hotel can be classified differently by various Web sites at the same time, creating confusion in consumer perceptions about the quality of a given hotel. Moreover, there are attempts to standardize such service quality evaluation, such as the SERVQUAL instrument, a multiple-item scale for measuring service quality along several dimensions. In this context, we present a two-stage linguistic multicriteria decision-making model to integrate the hotel guests' opinions published on several Web sites, with two objectives: on the one hand, obtaining a SERVQUAL scale evaluation value of service quality from the integrated input opinions; on the other hand, obtaining an overall SERVQUAL evaluation value of service quality. This model is incorporated into an opinion aggregation architecture to integrate heterogeneous data (natural language included) from various tourism Web sites. As a particular case study, we show an application example using the high-end hotels located in Granada (Spain). © 2012 Wiley Periodicals, Inc.
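As an illustration of the linguistic-aggregation step, the sketch below uses the 2-tuple linguistic representation: average the indices of the linguistic terms, then split the result into the closest term plus a symbolic translation. The five-term scale, the equal weights, and the sample ratings are invented for the example and are not taken from the paper.

```python
SCALE = ["very poor", "poor", "fair", "good", "very good"]  # s0..s4 (assumed scale)

def aggregate_linguistic(ratings, weights=None):
    """Weighted mean of linguistic ratings, returned as a 2-tuple (term, alpha).

    Average the term indices, then split the result into the closest scale
    term and a symbolic translation alpha in [-0.5, 0.5).
    """
    idx = [SCALE.index(r) for r in ratings]
    if weights is None:
        weights = [1.0 / len(idx)] * len(idx)
    beta = sum(i * w for i, w in zip(idx, weights))
    k = round(beta)                 # closest scale term
    return SCALE[k], beta - k

# Ratings for one SERVQUAL item collected from four (hypothetical) Web sites.
term, alpha = aggregate_linguistic(["good", "very good", "fair", "good"])
```

Here the four ratings average to index 3.0, so the aggregated opinion is ("good", 0.0): the term "good" with no symbolic translation.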
995.
BPMN: An introduction to the standard
The Business Process Model and Notation (BPMN) is the de facto standard for representing, in a highly expressive graphical way, the processes occurring in virtually every kind of organization, from cuisine recipes to the Nobel Prize assignment process, incident management, e-mail voting systems, and travel booking procedures, to name a few. In this work, we give an overview of BPMN and present its links with other well-known languages such as BPEL and XPDL. We also assess how the OMG's BPMN standard is perceived and used by practitioners in everyday business process modeling chores.
996.
This paper proposes an approach that solves the Robot Localization problem by using a conditional state-transition Hidden Markov Model (HMM). Through the use of Self-Organizing Maps (SOMs), a Tolerant Observation Model (TOM) is built, while odometer-dependent transition probabilities are used for building an Odometer-Dependent Motion Model (ODMM). By using the Viterbi Algorithm and establishing a trigger value when evaluating the state-transition updates, the presented approach handles Position Tracking (PT), Global Localization (GL), and Robot Kidnapping (RK) with an ease of implementation that is difficult to achieve in most state-of-the-art localization algorithms. An optimization is also presented that allows the algorithm to run on standard microprocessors in real time, without the need for huge probability grid maps.
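A minimal version of the Viterbi-based update with a kidnapping trigger can be sketched as follows. The four-cell corridor, the transition matrix, the observation likelihoods, and the trigger threshold are all invented for illustration; the paper's TOM and ODMM, built from SOMs and odometry, are replaced here by fixed toy probabilities.

```python
import numpy as np

def viterbi_step(log_delta, log_trans, log_obs):
    """One online Viterbi update: new state scores and best predecessor per state."""
    cand = log_delta[:, None] + log_trans      # score of (from, to) transitions
    best_prev = cand.argmax(axis=0)
    new_delta = cand.max(axis=0) + log_obs
    return new_delta, best_prev

# Toy 1-D corridor with 4 cells; the motion model mostly says "move right".
log_trans = np.log(np.array([
    [.2, .8, 0., 0.],
    [0., .2, .8, 0.],
    [0., 0., .2, .8],
    [0., 0., 0., 1.],
]) + 1e-12)

delta = np.log(np.full(4, 0.25))               # uniform prior: global localization
observations = [np.array([.7, .1, .1, .1]),
                np.array([.1, .7, .1, .1]),
                np.array([.1, .1, .7, .1])]
for obs in observations:
    delta, _ = viterbi_step(delta, log_trans, np.log(obs))
    if delta.max() < np.log(1e-6):             # trigger: all paths implausible,
        delta = np.log(np.full(4, 0.25))       # re-seed (kidnapping recovery)

best_cell = int(delta.argmax())
```

With these three observations the robot is tracked from cell 0 to cell 2; had every path score fallen below the trigger value, the belief would have been reset to uniform, turning tracking back into global localization.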
997.
We propose an approach to detecting highly deformable shapes in images via manifold learning with regression. Our method does not require that shape key points be defined at high-contrast image regions, nor do we need an initial estimate of the shape. We only require sufficient representative training data and a rough initial estimate of the object's position and scale. We demonstrate the method for face shape learning and provide a comparison to a nonlinear Active Appearance Model. Our method is extremely accurate, to nearly pixel precision, and is capable of accurately detecting the shape of faces undergoing extreme expression changes. The technique is robust to occlusions such as glasses and gives reasonable results at extremely degraded image resolutions.
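The general recipe (learn a low-dimensional shape space from training shapes, then regress shape-space coordinates from appearance features) can be sketched on synthetic data. Everything below (the linear shape space, the feature definition, and the dimensions) is an invented toy stand-in for the paper's nonlinear manifold learning:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: shapes lie near a 1-D subspace of a 6-D shape vector,
# and appearance features are a noisy function of the same latent coordinate.
t = rng.uniform(-1, 1, size=(200, 1))                  # latent coordinate
basis = rng.normal(size=(1, 6))
shapes = t @ basis + 0.01 * rng.normal(size=(200, 6))
feats = np.hstack([t, t ** 2]) + 0.01 * rng.normal(size=(200, 2))

# Learn the low-dimensional shape space (PCA stands in for manifold learning).
mean = shapes.mean(axis=0)
_, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
coords = (shapes - mean) @ Vt[:1].T                    # 1-D shape coordinates

# Regress shape-space coordinates from appearance features by least squares.
A = np.hstack([feats, np.ones((len(feats), 1))])
w, *_ = np.linalg.lstsq(A, coords, rcond=None)

def detect_shape(feat):
    """Predict a full shape vector from an appearance feature vector."""
    return mean + (np.append(feat, 1.0) @ w) @ Vt[:1]

err = np.linalg.norm(detect_shape(feats[0]) - shapes[0])
```

Because detection is a single projection through the learned regressor, no iterative fitting or shape initialization is needed, only the features extracted around the rough position/scale estimate.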
998.
Intelligent service robots provide various services to users by understanding the context and goals of a user task. In order to provide more reliable services, intelligent service robots need to consider various factors, such as their surrounding environments, users' changing needs, and constrained resources. To handle these factors, most intelligent service robots are controlled by a task-based control system, which generates a task plan representing a sequence of actions and executes those actions by invoking the corresponding functions. However, traditional task-based control systems do not take resource factors into account, even though intelligent service robots have limited resources (computational power, memory space, and network bandwidth). Moreover, system-specific concerns such as the relationships among functional modules are not considered during the task generation phase. Without considering both the resource conditions and the interdependencies among software modules as a whole, it is difficult to efficiently manage the functionalities that are essential to provide core services to users. In this paper, we propose a mechanism for intelligent service robots to efficiently use their resources on demand by separating system-specific information from task generation. We define a sub-architecture that corresponds to each action of a task plan and provides a way of using the limited resources by minimizing redundant software components and maintaining the components essential for the current action. To support the optimization of resource consumption, we have developed a two-phase optimization process composed of topological and temporal optimization steps. We conducted an experiment with these mechanisms on an infotainment robot and simulated the optimization process. Results show that our approach increased the utilization rate of the robot's resources by 20%.
Copyright © 2011 John Wiley & Sons, Ltd.
999.
I‐Ching Hsu, Software, 2012, 42(10): 1211–1227
Web 2.0 mashups offer entirely new opportunities for context-aware application (CAA) developers by integrating Web 2.0 technologies to facilitate interoperability among heterogeneous context-aware systems. From a software engineering perspective, a visual approach to modeling Web 2.0-based CAAs is crucial. Current CAA development, however, does not provide a conceptual model for Web 2.0-based CAAs, which decreases development efficiency and the potential for reuse. UML is a general-purpose modeling language applicable to many domains, but it often lacks the elements needed to model concepts in specific domains, such as Web 2.0-based CAA modeling. To address these issues, this study presents the Web 2.0-based CAA UML profile, a UML profile for modeling Web 2.0-based CAAs. Copyright © 2011 John Wiley & Sons, Ltd.
1000.
The incident indirect light over a range of image pixels is often coherent. Two common approaches that exploit this inter-pixel coherence to improve rendering performance are Irradiance Caching and Radiance Caching. Both compute incident indirect light only for a small subset of pixels (the cache) and later interpolate between pixels. Irradiance Caching uses scalar values that can be interpolated efficiently, but cannot account for shading variations caused by normal and reflectance variation between cache items. Radiance Caching maintains directional information, e.g., to allow highlights between cache items, but at the cost of storing and evaluating a Spherical Harmonics (SH) function per pixel. The arithmetic and bandwidth cost of this evaluation is linear in the number of coefficients and can be substantial. In this paper, we propose a method to replace it with an efficient per-cache-item pre-filtering based on MIP maps, as previously done for environment maps, leading to a single constant-time lookup per pixel. Additionally, per-cache-item geometry statistics stored in distance-MIP maps are used to improve the quality of each pixel's lookup. Our approximate interactive global illumination approach is an order of magnitude faster than Radiance Caching with Phong BRDFs and can be combined with Monte Carlo ray tracing, Point-based Global Illumination, or Instant Radiosity.
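The core replacement (one pre-filtered MIP lookup instead of a per-pixel SH evaluation) can be illustrated in one dimension. The box filter, the toy radiance values, and the level-selection interface below are assumptions for the sketch, not the paper's actual filtering scheme:

```python
import numpy as np

def build_mips(radiance):
    """Build a MIP chain by repeated 2:1 box filtering (power-of-two length)."""
    mips = [np.asarray(radiance, dtype=float)]
    while mips[-1].size > 1:
        prev = mips[-1]
        mips.append(0.5 * (prev[0::2] + prev[1::2]))
    return mips

def prefiltered_lookup(mips, u, level):
    """Constant-time lookup: pick the MIP level matching the filter width.

    u in [0, 1) addresses the texture; coarser levels return pre-averaged
    radiance, replacing an explicit per-pixel filtering loop.
    """
    level = min(level, len(mips) - 1)
    tex = mips[level]
    return tex[min(int(u * tex.size), tex.size - 1)]

env = [1., 3., 2., 6., 4., 0., 5., 3.]       # toy 1-D radiance map
mips = build_mips(env)
sharp = prefiltered_lookup(mips, 0.0, 0)               # 1.0: raw texel
broad = prefiltered_lookup(mips, 0.0, len(mips) - 1)   # 3.0: overall mean
```

The filtering work is paid once per cache item when the chain is built; each pixel then performs a single O(1) lookup whose level can be chosen from surface roughness or the distance statistics mentioned above, independent of any coefficient count.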