71.

Context

In software development, testing is an important mechanism both for identifying defects and for ensuring that completed products work as specified. This is common practice in single-system development, and it continues to hold in Software Product Lines (SPL). Even though extensive research has been done in the SPL testing field, it is necessary to assess the current state of research and practice in order to provide practitioners with evidence that fosters its further development.

Objective

This paper focuses on testing in SPL and has the following goals: investigate state-of-the-art testing practices, synthesize available evidence, and identify gaps between required techniques and the approaches available in the literature.

Method

A systematic mapping study was conducted with a set of nine research questions, in which 120 studies, dated from 1993 to 2009, were evaluated.

Results

Although several aspects of testing are covered by single-system development approaches, many cannot be directly applied in the SPL context due to issues specific to product lines. In addition, particular SPL aspects are not covered by the existing SPL approaches, and when they are covered, the literature gives only brief overviews. This scenario indicates that additional investigation, both empirical and practical, should be performed.

Conclusion

The results can help in understanding the needs of SPL testing by identifying points that still require investigation, since important aspects particular to software product lines have not yet been addressed.
72.
There is a chronic lack of shared application domains for testing advanced research models and agent negotiation architectures in Multiagent Systems. In this paper we introduce a friendly testbed for that purpose. The testbed is based on the Diplomacy game, in which negotiation and the relationships between players play an essential role. The testbed benefits from the existence of a large community of human players who know the game and can easily provide data for experiments. We describe the infrastructure in the paper and make it freely available to the AI community.
73.
In this paper, we present a 3D face photography system based on a facial expression training dataset composed of both facial range images (3D geometry) and facial texture (2D photography). The proposed system obtains a 3D geometric representation of a given face provided as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase, facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photograph. Principal component analysis (PCA) is then used to represent the face dataset, thus defining an orthonormal basis of texture and another of geometry. In the reconstruction phase, the input is a face image to which the ASM is matched. The extracted facial landmarks and the face image are fed to the PCA basis transform, and a 3D version of the 2D input image is built. Experimental tests using a new dataset of 70 facial expressions from ten subjects as the training set show rapidly reconstructed 3D faces that maintain spatial coherence consistent with human perception, corroborating the efficiency and applicability of the proposed system.
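The PCA step described above can be illustrated with a coupled-basis sketch: learn one basis from the texture training data and one from the geometry training data, project the input texture onto the texture basis, and reuse the resulting coefficients in the geometry basis. This is a minimal reading of the texture-to-geometry transform, not the paper's exact pipeline (which also involves ASM landmark matching); `fit_pca` and `reconstruct_geometry` are hypothetical names.

```python
import numpy as np

def fit_pca(X, k):
    """Return (mean, top-k principal directions) of the data rows X."""
    mu = X.mean(axis=0)
    # SVD of the centered data: rows of Vt are principal directions.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruct_geometry(tex_train, geo_train, tex_input, k=2):
    """Project an input texture onto the texture basis, then reuse the
    coefficients in the geometry basis to synthesize 3D geometry."""
    mu_t, Bt = fit_pca(tex_train, k)
    mu_g, Bg = fit_pca(geo_train, k)
    coeffs = (tex_input - mu_t) @ Bt.T   # texture-space coordinates
    return mu_g + coeffs @ Bg            # mapped into geometry space
```

With aligned training pairs, the coefficients act as a shared low-dimensional code; the quality of the mapping depends on how well texture and geometry variation are correlated in the training set.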
74.
The bilateral filter is a nonlinear filter that smooths a signal while preserving strong edges. It has demonstrated great effectiveness for a variety of problems in computer vision and computer graphics, and fast versions have been proposed. Unfortunately, little is known about the accuracy of such accelerations. In this paper, we propose a new signal-processing analysis of the bilateral filter which complements recent studies that analyzed it as a PDE or as a robust statistical estimator. The key to our analysis is to express the filter in a higher-dimensional space where the signal intensity is added to the original domain dimensions. Importantly, this signal-processing perspective allows us to develop a novel bilateral filtering acceleration using downsampling in space and intensity, affording a principled expression of accuracy in terms of bandwidth and sampling. The bilateral filter can be expressed as linear convolutions in this augmented space followed by two simple nonlinearities, which allows us to derive criteria for downsampling the key operations and achieving significant acceleration. We show that, for the same running time, our method is more accurate than previous acceleration techniques. Typically, we are able to process a 2-megapixel image in less than a second, with a result visually similar to the exact computation, which takes several tens of minutes. The acceleration is most effective with large spatial kernels. Furthermore, the approach extends naturally to color images and cross bilateral filtering.
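For readers unfamiliar with the filter being accelerated, the brute-force baseline can be sketched in 1D: each sample is replaced by a weighted average of its neighbours, with weights that decay with both spatial distance and intensity difference (the latter is what preserves edges). This is the exact filter the paper speeds up, not the paper's grid-based acceleration; the function name is illustrative.

```python
import numpy as np

def bilateral_filter_1d(signal, sigma_s=2.0, sigma_r=0.2):
    """Brute-force 1D bilateral filter.

    sigma_s controls the spatial Gaussian, sigma_r the range (intensity)
    Gaussian; samples across a strong edge get near-zero range weight,
    so the edge is not blurred.
    """
    n = len(signal)
    out = np.empty(n)
    radius = int(3 * sigma_s)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2)) *
             np.exp(-((signal[idx] - signal[i]) ** 2) / (2 * sigma_r ** 2)))
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out
```

The quadratic-in-kernel-size cost of this loop is exactly why the downsampled higher-dimensional formulation matters for large spatial kernels.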
75.
Deduplication is the task of identifying the entities in a data set that refer to the same real-world object. Over the last decades, this problem has been extensively investigated, and many techniques have been proposed to improve the efficiency and effectiveness of deduplication algorithms. As data sets become larger, such algorithms may hit critical bottlenecks in memory usage and execution time. In this context, cloud computing environments have been used to scale out data quality algorithms. In this paper, we investigate the efficacy of different machine learning techniques for scaling out virtual clusters for the execution of deduplication algorithms under predefined time restrictions. We also propose specific heuristics (Best Performing Allocation, Probabilistic Best Performing Allocation, Tunable Allocation, Adaptive Allocation and Sliced Training Data) which, together with the machine learning techniques, are able to tune the virtual cluster estimations as demands fluctuate over time. The experiments we have carried out using data sets of multiple scales provide many insights into the adequacy of the considered machine learning algorithms and proposed heuristics for tackling cloud computing provisioning.
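The core deduplication step being scaled out can be sketched as blocking plus pairwise similarity: only records sharing a blocking key are compared, which cuts the quadratic pair space. The first-letter blocking key, the Jaccard measure and the function names below are illustrative assumptions, not the paper's algorithms.

```python
from collections import defaultdict
from itertools import combinations

def jaccard(a, b):
    """Token-set Jaccard similarity between two strings, in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta or tb) else 0.0

def deduplicate(records, threshold=0.6):
    """Return index pairs of likely-duplicate records.

    Blocking: group records by the first letter of their value, then
    compare only pairs within each block.
    """
    blocks = defaultdict(list)
    for i, name in enumerate(records):
        blocks[name[:1].lower()].append(i)
    matches = []
    for ids in blocks.values():
        for i, j in combinations(ids, 2):
            if jaccard(records[i], records[j]) >= threshold:
                matches.append((i, j))
    return matches
```

Even with blocking, comparison cost grows quickly with block size, which is why provisioning enough virtual machines to finish within a deadline becomes a prediction problem.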
76.
Satellite remote sensing has the potential to contribute to plant phenology monitoring at spatial and temporal scales relevant for regional and global scale studies. Historically, temporal composites of satellite data, ranging from 8 days to 16 days, have been used as a starting point for satellite-derived phenology data sets. In this study we assess how the temporal resolution of such composites affects the estimation of the start of season (SOS) by: 1) calibrating a relationship between satellite derived SOS with in situ leaf unfolding (LU) of trembling aspen (Populus tremuloides) across Canada and 2) quantifying the sensitivity of calibrated satellite SOS estimates and trends, over Canadian broadleaf forests, to the temporal resolution of NDVI data. SOS estimates and trends derived from daily NDVI data were compared to SOS estimates and trends derived from multiday NDVI composites that retain the exact date of the maximum NDVI value or that assume the midpoint of the multiday interval as the observation date. In situ observations of LU dates were acquired from the PlantWatch Canada network. A new Canadian database of cloud and snow screened daily 1-km resolution National Oceanic and Atmospheric Administration advanced very high resolution radiometer surface reflectance images was used as input satellite data. The mean absolute errors of SOS dates with respect to in situ LU dates ranged between 13 and 40 days. SOS estimates from NDVI composites that retain the exact date of the maximum NDVI value had smaller errors (~ 13 to 20 days). The sensitivity analysis reinforced these findings: SOS estimates from NDVI composites that use the exact date had smaller absolute deviations from the LU date (0 to − 5 days) than the SOS estimates from NDVI composites that use the midpoint (− 2 to − 27 days). The SOS trends between 1985 and 2007 were not sensitive to the temporal resolution or compositing methods. 
However, SOS trends for individual ecozones showed significant differences from the SOS trends derived from daily NDVI data (the Taiga Plains and Pacific Maritime zones). Overall, our results suggest that satellite-based estimates of vegetation green-up dates should preferably use sub-sampled NDVI composites that include the exact observation date of the maximum NDVI, to minimize errors in both SOS estimates and SOS trend analyses. For trend analyses alone, any of the compositing methods could be used, preferably with composite intervals of less than 28 days. This is an important finding, as it suggests that existing long-term 10-day or 15-day NDVI composites could be used for SOS trend analyses over broadleaf forests in Canada or similar areas. Future studies will take advantage of the growing in situ phenology networks to improve the validation of satellite-derived green-up dates.
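As one concrete example of how an SOS date is extracted from an NDVI time series, a common half-amplitude threshold definition can be sketched as follows: SOS is the first day-of-year at which NDVI rises above a fixed fraction of its seasonal amplitude, interpolated between observations. The paper's calibrated method may differ; `start_of_season` is a hypothetical helper.

```python
import numpy as np

def start_of_season(doy, ndvi, frac=0.5):
    """Estimate SOS as the day-of-year where NDVI first crosses
    min + frac * (max - min), linearly interpolating between the two
    flanking observations."""
    thresh = ndvi.min() + frac * (ndvi.max() - ndvi.min())
    above = np.where(ndvi >= thresh)[0]
    i = above[0]
    if i == 0:
        return float(doy[0])
    x0, x1 = doy[i - 1], doy[i]
    y0, y1 = ndvi[i - 1], ndvi[i]
    return float(x0 + (thresh - y0) / (y1 - y0) * (x1 - x0))
```

This sketch makes the compositing issue above tangible: if each observation's date is the midpoint of a multiday window rather than the true acquisition date, the interpolated crossing shifts by up to half the composite interval.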
77.
Correcting design decay in source code is not a trivial task. Diagnosing and subsequently correcting inconsistencies between a software system’s code and its design rules (e.g., database queries are only allowed in the persistence layer) and coding conventions can be complex, time-consuming and error-prone. Providing support for this process is therefore highly desirable, but of a far greater complexity than suggesting basic corrective actions for simplistic implementation problems (like the “declare a local variable for non-declared variable” suggested by Eclipse). We present an abductive reasoning approach to inconsistency correction that consists of (1) a means for developers to document and verify a system’s design and coding rules, (2) an abductive logic reasoner that hypothesizes possible causes of inconsistencies between the system’s code and the documented rules and (3) a library of corrective actions for each hypothesized cause. This work builds on our previous work, where we expressed design rules as equality relationships between sets of source code artifacts (e.g., the set of methods in the persistence layer is the same as the set of methods that query the database). In this paper, we generalize our approach to design rules expressed as user-defined binary relationships between two sets of source code artifacts (e.g., every state-changing method should invoke a persistence method). We illustrate our approach on the design of IntensiVE, a tool suite that enables defining sets of source code artifacts intensionally (by means of logic queries) and verifying relationships between such sets.
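The verification half of this idea, checking a binary relationship between two sets of source code artifacts and reporting the violating elements, can be sketched with plain sets. This is only an illustration of the rule-checking step (IntensiVE defines the sets with logic queries, and the abductive reasoner goes further by hypothesizing causes); `check_rule` and the relation names are assumptions.

```python
def check_rule(lhs, relation, rhs):
    """Check a design rule between two sets of code artifacts and
    return the set of elements that violate it (empty = consistent)."""
    if relation == "equals":
        # Symmetric difference: artifacts in one set but not the other.
        return (lhs - rhs) | (rhs - lhs)
    if relation == "subset":
        # Every lhs artifact must also appear in rhs.
        return lhs - rhs
    raise ValueError(f"unknown relation: {relation}")
```

A rule like “database queries are only allowed in the persistence layer” then becomes `check_rule(db_query_methods, "subset", persistence_methods)`, and a non-empty result is the input an abductive reasoner would try to explain.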
78.
In this work, a simple neural network with asymmetric basis functions is proposed as a feature extractor for P waves in electrocardiographic (ECG) signals. The network is trained using the classical backward error propagation algorithm. The performance of the proposed network was tested on actual ECG signals and compared with other types of neural feature extractors.
79.
Recent developments in cellular imaging now permit the minimally invasive study of protein interactions in living cells. These advances are of enormous interest to cell biologists, as proteins rarely act in isolation but rather in concert with others in forming cellular machinery. Until recently, all protein interactions had to be determined in vitro using biochemical approaches. This biochemical legacy has given cell biologists the basis to test defined protein-protein interactions not only inside cells, but now also with spatial resolution. More recent developments in TCSPC imaging are driving towards determining protein interaction rates with similar spatial resolution, and together these experimental advances allow investigators to perform biochemical experiments inside living cells. Here, we discuss some findings we have made along the way that may be useful for physiologists to consider.
80.
Modeling and Managing Interactions among Business Processes
Most workflow management systems (WfMSs) only support the separate and independent execution of business processes. However, processes often need to interact with each other, in order to synchronize the execution of their activities, to exchange process data, to request execution of services, or to notify progress in process execution. Recent market trends also raise the need for cooperation and interaction between processes executed in different organizations, posing additional challenges. In fact, in order to reduce costs and provide better services, companies are pushed to increase cooperation and to form virtual enterprises, where business processes span organizational boundaries and are composed of cooperating workflows executed in different organizations. Workflow interaction in a cross-organizational environment is complicated by the heterogeneity of the workflow management platforms on top of which workflows are defined and executed, and by the different and possibly competing business policies and business goals that drive process execution in each organization. In this paper we propose a model and system that enable interaction between workflows executed in the same or in different organizations. We extend traditional workflow models by allowing workflows to publish and subscribe to events, and by enabling the definition of points in the process execution where events should be sent or received. Event notifications are managed by a suitable event service that is capable of filtering and correlating events, and of dispatching them to the appropriate target workflow instances. The extended model can be easily mapped onto any workflow model, since event-specific constructs can be specified by means of ordinary workflow activities, for which we provide the implementation. In addition, the event service is easily portable to different platforms, and does not require integration with the WfMS that supports the cooperating workflows. Therefore, the proposed approach is applicable in virtually any environment and is independent of the specific platform adopted.
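The publish/subscribe event service at the heart of this design can be sketched as a minimal broker: workflows subscribe to an event type with an optional filter predicate, and published events are dispatched to every matching subscriber. The class and method names are a hypothetical API, not the paper's system, and real deployments would add persistence and cross-organization transport.

```python
from collections import defaultdict

class EventService:
    """Minimal pub/sub broker: filters events per subscription and
    dispatches matching payloads to subscriber handlers."""

    def __init__(self):
        self._subs = defaultdict(list)  # event type -> [(handler, filter)]

    def subscribe(self, event_type, handler, filter=None):
        """Register a handler; `filter` is an optional predicate on the
        event payload (this is the event-correlation hook)."""
        self._subs[event_type].append((handler, filter))

    def publish(self, event_type, payload):
        """Dispatch to every matching subscriber; return delivery count."""
        delivered = 0
        for handler, flt in self._subs[event_type]:
            if flt is None or flt(payload):
                handler(payload)
                delivered += 1
        return delivered
```

Because the broker knows nothing about workflow engines, any WfMS can participate simply by publishing from, and handling deliveries in, ordinary workflow activities, which mirrors the portability argument above.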

Copyright©北京勤云科技发展有限公司  京ICP备09084417号