Articles by access type:
Subscription full text: 1,547
Free: 54
Free (domestic): 2
Articles by subject area:
Electrical engineering: 15
General: 1
Chemical industry: 403
Metalworking technology: 52
Machinery and instrumentation: 31
Building science: 64
Mining engineering: 2
Energy and power: 55
Light industry: 180
Hydraulic engineering: 6
Radio electronics: 154
General industrial technology: 302
Metallurgy: 119
Atomic energy technology: 10
Automation technology: 209
Articles by publication year:
2024: 3
2023: 10
2022: 30
2021: 54
2020: 25
2019: 26
2018: 27
2017: 22
2016: 34
2015: 27
2014: 54
2013: 118
2012: 84
2011: 136
2010: 101
2009: 87
2008: 84
2007: 65
2006: 71
2005: 51
2004: 41
2003: 46
2002: 38
2001: 22
2000: 20
1999: 23
1998: 29
1997: 48
1996: 23
1995: 28
1994: 20
1993: 17
1992: 12
1991: 14
1990: 5
1989: 12
1988: 5
1987: 14
1986: 4
1985: 11
1984: 6
1983: 5
1982: 5
1981: 5
1980: 6
1979: 9
1978: 9
1977: 3
1976: 4
1973: 3
1,603 results found in total (search time: 0 ms)
91.
High-level synthesis (HLS) is a potential solution for increasing the productivity of FPGA-based real-time image processing development. It allows designers to reap the benefits of a hardware implementation directly from algorithm behaviors specified in C-like languages at a high level of abstraction. To close the performance gap between manual and HLS-based FPGA designs, today's HLS tools offer various forms of code optimization. This paper proposes an HLS source-code and directive manipulation strategy for real-time image processing that takes into account the order in which the different optimization forms are applied. Experimental results demonstrate that our approach improves the test implementations more effectively than the other optimization strategies.
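For illustration only, the sketch below shows the kind of source-level directive insertion such a strategy manipulates, assuming pragmas in the style of Xilinx Vivado HLS; the kernel, its dimensions, and the particular pragma placement are hypothetical and not taken from the paper.

```c
/* Hypothetical 3x3 convolution kernel annotated with HLS directives.
 * The pragmas (PIPELINE, UNROLL, ARRAY_PARTITION) follow the Xilinx
 * Vivado HLS syntax; which directives to apply, where, and in what
 * order is exactly the kind of decision an optimization strategy
 * like the one described must reason about. */
#define W 640
#define H 480

void convolve3x3(const unsigned char in[H][W], unsigned char out[H][W],
                 const char kernel[3][3]) {
#pragma HLS ARRAY_PARTITION variable=kernel complete dim=0
    for (int y = 1; y < H - 1; y++) {
        for (int x = 1; x < W - 1; x++) {
#pragma HLS PIPELINE II=1
            int acc = 0;
            for (int ky = -1; ky <= 1; ky++) {
                for (int kx = -1; kx <= 1; kx++) {
#pragma HLS UNROLL
                    acc += kernel[ky + 1][kx + 1] * in[y + ky][x + kx];
                }
            }
            out[y][x] = (unsigned char)(acc < 0 ? 0 : (acc > 255 ? 255 : acc));
        }
    }
}
```

In a strategy of this kind, the point is not any single directive but the order in which pipelining, unrolling, and array partitioning are applied and evaluated against the real-time constraints.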
92.
A profit and a demand are associated with each edge in a set of profitable edges of a given graph, and a travel time is associated with every edge of the graph. A fleet of capacitated vehicles is given to serve the profitable edges, and a maximum duration is imposed on the route of each vehicle. The profit of an edge can be collected by only one vehicle, which must also serve the demand of that edge. The objective of this problem, called the undirected capacitated arc routing problem with profits (UCARPP), is to find a set of routes that satisfy the constraints on route duration and vehicle capacity while maximizing the total collected profit. We propose a branch-and-price algorithm and several heuristics. We can solve exactly instances with up to 97 profitable edges, and the best heuristics find the optimal solution on most of the instances for which it is available.
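In compact form (our notation, introduced here for illustration and not taken from the paper), the problem can be stated as

\[
\max \sum_{k \in K}\sum_{e \in R} p_e\, x_{ek}
\quad\text{s.t.}\quad
\sum_{e \in R} d_e\, x_{ek} \le Q \;\;\forall k \in K,\qquad
T_k \le T_{\max} \;\;\forall k \in K,\qquad
\sum_{k \in K} x_{ek} \le 1 \;\;\forall e \in R,
\]

where R is the set of profitable edges, x_{ek} = 1 if vehicle k serves edge e, p_e and d_e are the profit and demand of edge e, Q is the vehicle capacity, and T_k is the duration of route k, including travel and service times.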
93.
Owing to the dynamic nature of collaborative environments, software intended to support collaborative work should adapt itself to the different situations that may occur. This requirement is related to the concept of "context of use", which has been considered an important aspect of the design of interactive systems. Nevertheless, current research in context-aware computing has identified two main problems with this concept: (1) most studies have focused on the context of a single user, so the context of multiple users involved in a common endeavor remains little explored; and (2) adaptability in context-aware systems generally takes into account only a small number of contextual variables (mainly the user's location and platform). In this paper, we first re-conceptualize the notion of "context of use" to take into account the main characteristics of collaborative environments. Based on this new notion, we then design and implement a framework that allows application developers to specify the adaptability of groupware systems in terms of the state of activities, roles, collaborators' locations, available resources, and other variables typical of working groups. This framework has been generalized from scenarios that highlight dynamic situations present in real collaborative settings. Finally, we validate our proposal with a set of applications that adapt their user interface and functionality when significant changes occur in the environment, the working group, and/or the devices in use.
94.
A regression model approach using a normalized difference vegetation index (NDVI) has the potential for estimating crop production in East Africa. However, before production estimation can become a reality, the underlying model assumptions and the statistical nature of the sample data (NDVI and crop production) must be examined rigorously. Annual maize production statistics from 1982 to 1990 for 36 agricultural districts within Kenya were used as the dependent variable; median area NDVI values (the independent variable) for each agricultural district and year were extracted from the annual maximum NDVI data set. The input data and the statistical association of NDVI with maize production for Kenya were tested systematically for the following items: (1) homogeneity of the data when pooling the sample, (2) gross data errors and influence points, (3) serial (time) correlation, (4) spatial autocorrelation, and (5) stability of the regression coefficients. The results of using a simple regression model with NDVI as the only independent variable are encouraging (r 0.75, p 0.05) and illustrate that NDVI can be a responsive indicator of maize production, especially in areas of high NDVI spatial variability, which coincide with areas of production variability in Kenya.
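Written out explicitly (our notation, for illustration), the simple regression model being tested is

\[
Y_{it} \;=\; \beta_0 + \beta_1\,\mathrm{NDVI}_{it} + \varepsilon_{it},
\]

where \(Y_{it}\) is the maize production of district \(i\) in year \(t\), \(\mathrm{NDVI}_{it}\) is the median annual-maximum NDVI over that district and year, and \(\varepsilon_{it}\) is the error term; checks (1)–(5) above probe the assumptions under which the district-year observations may be pooled and \(\beta_0, \beta_1\) estimated by ordinary least squares.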
95.
This paper is a general overview of the S project, run at Blaise Pascal University between 1996 and 2002. The main goal of the S project was to demonstrate the applicability of skeleton-based parallel programming techniques to the fast prototyping of reactive vision applications. The project produced several versions of a full-fledged integrated parallel programming environment (PPE). These PPEs have been used to implement realistic vision applications, such as road following or vehicle tracking for assisted driving, on embedded parallel platforms carried on board semi-autonomous vehicles. All versions of S share a common front-end and repertoire of skeletons (presented in previous papers) but differ in the techniques used to implement the skeletons. This paper focuses on these implementation issues, making a comparative survey of the implementation techniques according to four criteria: efficiency, expressivity, portability, and predictability. It also gives an account of the lessons we have learned, both in dealing with these implementation issues and in using the resulting tools to prototype vision applications.
96.
This work presents a straightforward approach to modeling the dynamic I–V characteristics of microwave active solid-state devices. The drain-source current generator is the most significant source of nonlinearity in a transistor, and its correct modeling is therefore fundamental to accurately predicting the current and voltage waveforms under large-signal operation. The proposed approach relies on a small set of low-frequency time-domain waveform measurements combined with numerical optimization-based estimation of the nonlinear model parameters. The procedure is applied to a gallium nitride HEMT and a silicon FinFET. The effectiveness of the modeling procedure, in terms of prediction accuracy and generalization capability, is demonstrated by validating the extracted models under operating conditions different from those used for parameter estimation. Good agreement between measurements and model simulations is achieved for both technologies, in both the low- and high-frequency ranges. © 2013 Wiley Periodicals, Inc. Int J RF and Microwave CAE 24:109–116, 2014.
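In generic terms (our formulation; the paper's exact cost function may differ), the optimization-based extraction step amounts to a nonlinear least-squares fit of the drain-source current model to the measured waveforms:

\[
\hat{\theta} \;=\; \arg\min_{\theta} \sum_{n} \Bigl[\, i_{DS}^{\mathrm{meas}}\bigl(v_{GS,n}, v_{DS,n}\bigr) - i_{DS}^{\mathrm{model}}\bigl(v_{GS,n}, v_{DS,n};\,\theta\bigr) \Bigr]^{2},
\]

where \(\theta\) collects the nonlinear parameters of the drain-source current generator and the sum runs over the sampled low-frequency time-domain waveform points.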
97.
In the last decade, there has been growing awareness of social exclusion. Considering the ageing population and the likelihood of older people being socially excluded, the aims of this article are to: (1) review existing studies concerning social exclusion in later life; and (2) identify how environmental and life-course perspectives are presented in studies focusing on social exclusion in later life. A systematic review across seven scientific databases was conducted to explore the peer-reviewed evidence, and 26 articles were included and analysed. The findings describe the variety of methods, conceptualisations, dimensions and measures used in this recent area of research. Determinants of social exclusion in later life are discussed, and life-course and environmental perspectives are examined. The discussion highlights the complex character of the concept and measurement of social exclusion, the presence of both general and age-specific dimensions of social exclusion in later life, the relativity of social exclusion to time and context, and the need for life-course and environmental perspectives. Finally, future directions for research are outlined.
98.
Cardiovascular disease (CVD) is the leading cause of death and loss of productive life years in the world. The syndrome underlying CVD, atherosclerosis, is a complex disease process involving lipid metabolism, inflammation, innate and adaptive immunity, and many other pathophysiological aspects. Furthermore, CVD is influenced by genetic as well as environmental factors. Early detection of CVD and identification of patients at risk are crucial to reduce the burden of disease and to allow personalized treatment. As established risk factors fail to accurately predict which part of the population is likely to suffer from the disease, novel biomarkers are urgently needed, and proteomics can play a significant role in identifying them. In this review, we describe the progress made in proteome profiling of the atherosclerotic plaque and several novel sources of potential biomarkers, including circulating cells and plasma extracellular vesicles. The importance of longitudinal biobanking in biomarker discovery is highlighted and exemplified by several plaque proteins identified in the biobank study Athero-Express. Finally, we discuss the post-translational modifications (PTMs) of proteins involved in atherosclerosis, which may become one of the foci in the ongoing quest for biomarkers through proteomics of plaque and other matrices relevant to the progression of atherosclerosis.
99.
When steady states largely predominate over transitional phases, steady-state simulation can be sufficient to predict the behavior of a complex system, and over the past 20 years different modeling languages and dedicated tools have been developed to improve it. In this paper, the focus is on steady-state simulation for system control and design. A model combining an emission sub-model with a ship propulsion sub-model was implemented using a constraint programming (CP) approach, in order to assess the efficiency of this approach (i.e., its ability to model and solve the problem) and the complexity of its implementation (i.e., the difficulties encountered during implementation). First, requirements for the steady-state simulation of complex systems are defined. Then, the CP approach is shown, through experiments, to be able to meet these requirements, and it is compared with one of the main simulation languages, Modelica. Although both approaches (Modelica and CP) are able to reverse models, the study shows that using Modelica principles for steady-state simulation entails some crippling limitations, such as the lack of handling of under- or over-constrained systems and of inequalities. The study also shows that the constraint programming approach makes it possible to meet some steady-state simulation needs not yet covered by current approaches.
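As a minimal illustration of what a declarative, "reversible" steady-state model looks like (a toy relation of ours, not an example from the paper), consider a single propulsion power balance treated as a constraint:

\[
g(P, F, v) \;=\; P - \frac{F\,v}{\eta} \;=\; 0 .
\]

Read as a constraint rather than an assignment, the same relation can be solved for \(P\) given \((F, v)\), for \(F\) given \((P, v)\), or for \(v\) given \((P, F)\); a constraint-programming solver additionally accepts inequalities (e.g. \(P \le P_{\max}\)) and can report when the assembled system is under- or over-constrained, which is precisely where the paper finds Modelica-style tools limited.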
100.
Experience with a Hybrid Processor: K-Means Clustering (total citations: 2; self-citations: 0; citations by others: 2)
We discuss hardware/software co-processing on a hybrid processor for a compute- and data-intensive multispectral imaging algorithm, k-means clustering. The experiments are performed on two models of the Altera Excalibur board, the first using the soft IP core 32-bit NIOS 1.1 RISC processor and the second using the hard IP core ARM processor. In our experiments, we compare the performance of the sequential k-means algorithm with three different accelerated versions, and we consider granularity and synchronization issues when mapping an algorithm onto a hybrid processor. Our results show that a speedup of 11.8X is achieved by migrating computation to the Excalibur ARM hardware/software combination, compared with software alone on a gigahertz Pentium III. Speedup on the Excalibur NIOS is limited by the communication cost of transferring data from external memory through the processor to the customized circuits. This limitation is overcome on the Excalibur ARM, in which dual-port memories, accessible to both the processor and the configurable logic, have the biggest performance impact of all the techniques studied.
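For orientation, the following is a plain-C sketch of the k-means assignment step, the data-parallel kernel that hardware/software partitionings of this kind typically migrate to the configurable logic; it is our illustration (using Manhattan distance, a common hardware choice that avoids multipliers) and not the authors' code.

```c
#include <limits.h>

/* Assign each of N B-band pixels to the nearest of K cluster centers.
 * Manhattan (L1) distance is used here as an assumption; the paper's
 * exact distance metric and data widths may differ. */
void kmeans_assign(const unsigned short *pixels,  /* N x B, row-major */
                   const unsigned short *centers, /* K x B, row-major */
                   unsigned char *labels, int N, int K, int B)
{
    for (int n = 0; n < N; n++) {
        unsigned long best_dist = ULONG_MAX;
        unsigned char best_k = 0;
        for (int k = 0; k < K; k++) {
            unsigned long dist = 0;
            for (int b = 0; b < B; b++) {
                int d = (int)pixels[n * B + b] - (int)centers[k * B + b];
                dist += (unsigned long)(d < 0 ? -d : d);
            }
            if (dist < best_dist) {
                best_dist = dist;
                best_k = (unsigned char)k;
            }
        }
        labels[n] = best_k;
    }
}
```

A kernel like this is dominated by streaming pixel data rather than by arithmetic, which is consistent with the abstract's observation that dual-port memories shared between the processor and the configurable logic had the largest performance impact.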