Full-text access type
Paid full text: 788 articles
Free: 30 articles
Free (domestic): 2 articles
Subject category
Electrical engineering: 9 articles
General: 2 articles
Chemical industry: 161 articles
Metalworking: 31 articles
Machinery and instruments: 18 articles
Building science: 24 articles
Energy and power engineering: 45 articles
Light industry: 43 articles
Hydraulic engineering: 3 articles
Petroleum and natural gas: 2 articles
Radio and electronics: 86 articles
General industrial technology: 194 articles
Metallurgy: 72 articles
Nuclear technology: 9 articles
Automation technology: 121 articles
Publication year (articles per year)
2024: 4 | 2023: 24 | 2022: 27 | 2021: 32 | 2020: 34 | 2019: 24 | 2018: 27 | 2017: 36 | 2016: 34 | 2015: 21
2014: 29 | 2013: 69 | 2012: 43 | 2011: 53 | 2010: 23 | 2009: 37 | 2008: 36 | 2007: 32 | 2006: 22 | 2005: 30
2004: 19 | 2003: 10 | 2002: 9 | 2001: 6 | 2000: 7 | 1999: 10 | 1998: 11 | 1997: 12 | 1996: 10 | 1995: 15
1994: 6 | 1993: 8 | 1992: 9 | 1991: 5 | 1990: 2 | 1989: 3 | 1988: 3 | 1987: 5 | 1986: 2 | 1985: 2
1983: 7 | 1982: 4 | 1980: 2 | 1979: 4 | 1978: 2 | 1976: 2 | 1973: 2 | 1971: 1 | 1970: 1 | 1969: 1
Sort order: 820 results in total (search time: 15 ms)
51.
Shrihari Vasudevan. Robotics and Autonomous Systems, 2012, 60(12): 1528-1544
This paper addresses the problem of fusing multiple sets of heterogeneous sensor data using Gaussian processes (GPs). Experiments on large-scale terrain modeling in mining automation are presented. Three techniques are discussed in increasing order of model complexity. The first is based on adding data to an existing GP model. The second treats data from different sources as different noisy samples of a common underlying terrain and performs fusion using heteroscedastic GPs. The final approach, based on dependent GPs, models each data set with a separate GP and learns spatial correlations between data sets through auto- and cross-covariances. The paper presents a unifying view of approaches to data fusion using GPs, a statistical evaluation comparing these approaches and several previously untested variants of them, and insight into the effect of model complexity on data fusion. Experiments suggest that when the data being fused are not rich enough to require a complex GP fusion model, or when computational resources are limited, simpler GP fusion techniques, which are constrained versions of the more generic models, reduce optimization complexity and can consequently enable superior learning of hyperparameters, resulting in a performance gain.
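The second of these approaches lends itself to a compact illustration. Below is a minimal sketch in plain NumPy (fixed kernel hyperparameters and illustrative noise levels; not the paper's implementation) of fusing two sensor datasets with a single GP by assigning each data source its own noise variance:

```python
# Minimal sketch: fuse two noisy views of a common underlying terrain with
# one GP by giving each source its own noise variance on the diagonal.
# Hyperparameters are fixed for brevity; in practice they would be learned.
import numpy as np

def sq_exp_kernel(A, B, length_scale=1.0, signal_var=1.0):
    """Squared-exponential covariance between row-vector inputs A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return signal_var * np.exp(-0.5 * d2 / length_scale**2)

def gp_fuse_predict(X1, y1, X2, y2, Xq, noise1=0.1, noise2=0.5):
    """Predict terrain height at query points Xq from two noisy sources."""
    X = np.vstack([X1, X2])
    y = np.concatenate([y1, y2])
    # Per-source noise: source 2 is assumed less reliable (larger variance).
    noise = np.concatenate([np.full(len(X1), noise1), np.full(len(X2), noise2)])
    K = sq_exp_kernel(X, X) + np.diag(noise)
    Kq = sq_exp_kernel(Xq, X)
    mean = Kq @ np.linalg.solve(K, y)
    cov = sq_exp_kernel(Xq, Xq) - Kq @ np.linalg.solve(K, Kq.T)
    return mean, np.diag(cov)

# Toy example: an accurate scan and a noisier scan of the same 1-D terrain.
rng = np.random.default_rng(0)
X1 = rng.uniform(0, 10, (30, 1)); y1 = np.sin(X1[:, 0]) + 0.1 * rng.standard_normal(30)
X2 = rng.uniform(0, 10, (50, 1)); y2 = np.sin(X2[:, 0]) + 0.5 * rng.standard_normal(50)
Xq = np.linspace(0, 10, 100)[:, None]
mean, var = gp_fuse_predict(X1, y1, X2, y2, Xq)
```

This corresponds to the "constrained" end of the spectrum the paper evaluates: one shared latent terrain, with heterogeneity pushed into the per-source noise terms.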
52.
Manufacturing facilities are expected to maintain a high level of production and, at the same time, employ strict safety standards to ensure the safe evacuation of people in the event of an emergency (fire is considered in this paper). These two goals often conflict. This paper presents a methodology to evaluate evacuation safety and productivity concurrently for various widely known manufacturing layouts. While safety performance indicators such as evacuation times are inferred from crowd (agent-based) simulation, productivity performance indicators (e.g., throughput) are analyzed using discrete event simulation. To this end, this research focuses on techniques for developing accurate crowd simulations, in which the Belief-Desire-Intention (BDI) agent framework is employed to model each person's individual actions and the interactions between them. The data model and rule-based action algorithms for each agent are reverse-engineered from human-in-the-loop experiments in immersive virtual reality environments. Finally, experiments are conducted using the constructed simulations to compare safety and productivity for different layouts. To demonstrate the proposed methodology, an automotive power-train (engine and transmission) manufacturing plant was used. Initial results look promising.
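As a rough illustration of the BDI pattern described above, the sketch below shows a perceive-deliberate-act loop for an evacuee agent. All class, field, and method names (EvacueeAgent, World, step_towards, ...) are hypothetical stand-ins, not taken from the paper's data model:

```python
# Illustrative BDI-style agent for an evacuation simulation (hypothetical
# names; not the paper's reverse-engineered data model or rules).
from dataclasses import dataclass, field

@dataclass
class World:
    alarm_active: bool = True
    exit_pos: tuple = (0.0, 0.0)
    def nearest_exit(self, pos):
        return self.exit_pos
    def step_towards(self, pos, target, speed=1.0):
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1.0   # avoid division by zero
        s = min(speed, dist) / dist
        return (pos[0] + dx * s, pos[1] + dy * s)

@dataclass
class EvacueeAgent:
    position: tuple
    beliefs: dict = field(default_factory=dict)
    desires: list = field(default_factory=lambda: ["evacuate"])
    intention: str = "continue_work"

    def perceive(self, world):
        # Update beliefs from the simulated environment (alarms, exits).
        self.beliefs["alarm_heard"] = world.alarm_active
        self.beliefs["nearest_exit"] = world.nearest_exit(self.position)

    def deliberate(self):
        # Commit to an intention consistent with beliefs and desires.
        if self.beliefs.get("alarm_heard") and "evacuate" in self.desires:
            self.intention = "move_to_exit"

    def act(self, world):
        # Execute one rule-based action step for the current intention.
        if self.intention == "move_to_exit":
            self.position = world.step_towards(self.position, self.beliefs["nearest_exit"])

world, agent = World(), EvacueeAgent(position=(5.0, 3.0))
agent.perceive(world); agent.deliberate(); agent.act(world)
```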
53.
In today's world, Cloud Computing (CC) enables users to access computing resources and services over the cloud without needing to own the infrastructure. Cloud Computing is a concept in which a network of devices, located in remote locations, is integrated to perform operations such as data collection, processing, data profiling and data storage. In this context, resource allocation and task scheduling are important processes that must be managed based on the requirements of a user. To allocate resources effectively, a hybrid cloud is employed, since it is a capable solution for processing large-scale consumer applications in a pay-by-use manner. Hence, the model is designed as a profit-driven framework that reduces cost and makespan. With this motivation, the current research work develops a Cost-Effective Optimal Task Scheduling Model (CEOTS). A novel Target-based Cost Derivation (TCD) algorithm is used in the proposed work for hybrid clouds. The algorithm works on the basis of a multi-intentional task completion process with optimal resource allocation. The model was simulated to validate its effectiveness in terms of processing time, makespan and efficient utilization of virtual machines. The results show that the proposed model outperformed existing works and can be relied upon for future real-time applications.
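The abstract does not spell out the TCD algorithm itself, so the sketch below is only a generic cost-aware greedy scheduler illustrating the cost-versus-makespan trade-off that such models optimize; the scoring rule, class names, and prices are all assumptions, not the paper's method:

```python
# Generic cost-aware greedy scheduler (illustration only; NOT the paper's
# TCD algorithm). Each task is placed on the VM that minimizes a weighted
# sum of added monetary cost and finish time.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    speed: float            # work units per hour
    price: float            # cost per hour
    busy_until: float = 0.0

def schedule(tasks, vms, cost_weight=0.5):
    """tasks: list of work sizes; returns (assignment, total_cost, makespan)."""
    assignment, total_cost = [], 0.0
    for work in sorted(tasks, reverse=True):       # longest task first
        def score(vm):
            runtime = work / vm.speed
            return cost_weight * runtime * vm.price + \
                   (1 - cost_weight) * (vm.busy_until + runtime)
        vm = min(vms, key=score)                   # greedy choice
        runtime = work / vm.speed
        vm.busy_until += runtime
        total_cost += runtime * vm.price
        assignment.append((work, vm.name))
    return assignment, total_cost, max(vm.busy_until for vm in vms)

vms = [VM("cheap", speed=1.0, price=0.05), VM("fast", speed=4.0, price=0.40)]
plan, cost, makespan = schedule([8, 3, 5, 2, 9], vms)
```

Sweeping cost_weight between 0 and 1 traces the trade-off between a cheap-but-slow plan and a fast-but-expensive one, which is the tension a profit-driven scheduler must resolve.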
54.
Pillai Karthik Ganesh Ramaswamy Radhakrishnan Kanthavel Ramakrishnan Dhaya Yesudhas Harold Robinson Eanoch Golden Julie Kumar Raghvendra Long Hoang Viet Son Le Hoang. Multimedia Tools and Applications, 2021, 80(5): 7077-7101
Multimedia Tools and Applications - Detection and clustering of commercial advertisements plays an important role in multimedia indexing and in the creation of personalized user content. In...
55.
Logan Molyneux, Krishnan Vasudevan, Homero Gil de Zúñiga. Journal of Computer-Mediated Communication, 2015, 20(4): 381-399
Recent research suggests that social interactions in video games may lead to the development of community bonding and prosocial attitudes. Building on this line of research, a national survey of U.S. adults finds that gamers who develop ties with a community of fellow gamers possess gaming social capital, a new gaming-related community construct that is shown to be a positive antecedent in predicting both face-to-face social capital and civic participation.
56.
Steering the Craft: UI Elements and Visualizations for Supporting Progressive Visual Analytics
Progressive visual analytics (PVA) has emerged in recent years to manage the latency of data analysis systems. When analysis is performed progressively, rough estimates of the results are generated quickly and then improved over time. Analysts can therefore monitor the progression of the results, steer the analysis algorithms, and make early decisions if the estimates provide a convincing picture. In this article, we describe interface design guidelines that help users understand progressively updating results and make early decisions based on progressive estimates. To illustrate our ideas, we present a prototype PVA tool called Insights Feed for exploring Twitter data at scale. As validation, we investigate the tradeoffs of our tool in a user study on a Twitter dataset, reporting usage patterns in making early decisions through the user interface, steering computational methods, and exploring different subsets of the dataset, compared with sequential analysis without progression.
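The progressive-computation pattern underlying such tools can be shown with a short generic sketch (not the Insights Feed implementation): data are processed in chunks, and after each chunk an improved estimate is emitted together with a simple confidence band, so an analyst can stop early once the picture is convincing:

```python
# Generic progressive-estimate pattern: stream data in chunks and yield a
# running mean with a rough 95% confidence half-width after each chunk.
# (A final partial chunk is ignored for brevity.)
import math
import random

def progressive_mean(stream, chunk_size=1000):
    """Yield (n_seen, estimate, 95% half-width) after each full chunk."""
    n, total, total_sq, chunk = 0, 0.0, 0.0, []
    for x in stream:
        chunk.append(x)
        if len(chunk) == chunk_size:
            for v in chunk:
                n += 1; total += v; total_sq += v * v
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            yield n, mean, 1.96 * math.sqrt(var / n)
            chunk = []

# An analyst can make an early decision once the band is tight enough.
data = (random.gauss(5.0, 2.0) for _ in range(100_000))
for n_seen, estimate, half_width in progressive_mean(data):
    if half_width < 0.05:
        break        # early decision: estimate is precise enough
```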
57.
Score normalization in multimodal biometric systems
Multimodal biometric systems consolidate the evidence presented by multiple biometric sources and typically provide better recognition performance compared to systems based on a single biometric modality. Although information fusion in a multimodal system can be performed at various levels, integration at the matching score level is the most common approach due to the ease in accessing and combining the scores generated by different matchers. Since the matching scores output by the various modalities are heterogeneous, score normalization is needed to transform these scores into a common domain, prior to combining them. In this paper, we have studied the performance of different normalization techniques and fusion rules in the context of a multimodal biometric system based on the face, fingerprint and hand-geometry traits of a user. Experiments conducted on a database of 100 users indicate that the application of min–max, z-score, and tanh normalization schemes followed by a simple sum of scores fusion method results in better recognition performance compared to other methods. However, experiments also reveal that the min–max and z-score normalization techniques are sensitive to outliers in the data, highlighting the need for a robust and efficient normalization procedure like the tanh normalization. It was also observed that multimodal systems utilizing user-specific weights perform better compared to systems that assign the same set of weights to the multiple biometric traits of all users.
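The three normalization schemes and the sum-rule fusion named in the abstract are easy to state in code. A minimal sketch follows; the tanh scheme is written in its commonly cited form 0.5*(tanh(0.01*z)+1) with z the standardized score, and the toy scores and parameter choices are illustrative, not from the paper's experiments:

```python
# Score normalization and sum-rule fusion for heterogeneous matcher scores.
import numpy as np

def min_max_norm(s):
    return (s - s.min()) / (s.max() - s.min())   # sensitive to outliers

def z_score_norm(s):
    return (s - s.mean()) / s.std()              # sensitive to outliers

def tanh_norm(s, mu=None, sigma=None):
    # Robust scheme; ideally mu/sigma come from genuine-score statistics.
    mu = s.mean() if mu is None else mu
    sigma = s.std() if sigma is None else sigma
    return 0.5 * (np.tanh(0.01 * (s - mu) / sigma) + 1.0)

def sum_rule(norm_scores, weights=None):
    """Fuse a list of per-modality normalized score arrays."""
    norm_scores = np.stack(norm_scores)
    if weights is None:                          # equal weights by default;
        weights = np.full(len(norm_scores), 1.0 / len(norm_scores))
    return np.tensordot(weights, norm_scores, axes=1)   # user-specific weights also fit here

face = np.array([0.62, 0.80, 0.15])              # scores already in [0, 1]
fingerprint = np.array([120.0, 340.0, 45.0])     # very different score range
fused = sum_rule([tanh_norm(face), tanh_norm(fingerprint)])
```

Passing per-user weight vectors to sum_rule instead of equal weights corresponds to the user-specific weighting the paper found to perform best.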
58.
Manish Goyal, Sundar Murugappan, Cecil Piya, William Benjamin, Yi Fang, Min Liu, Karthik Ramani. Computer-Aided Design, 2012, 44(6): 537-553
The process of re-creating CAD models from actual physical parts, formally known as digital shape reconstruction (DSR), is an integral part of product development, especially in re-design. While the majority of current methods used in DSR are surface-based, our overarching goal is to obtain a direct parameterization of 3D meshes, avoiding actual segmentation of the mesh into different surfaces. As a first step towards reverse modeling physical parts, we (1) extract locally prominent cross-sections (PCS) from triangular meshes, and (2) organize and cluster them into sweep components, which form the basic building blocks of the re-created CAD model. In this paper, we introduce two new algorithms derived from Locally Linear Embedding (LLE) (Roweis and Saul, 2000 [3]) and Affinity Propagation (AP) (Frey and Dueck, 2007 [4]) for organizing and clustering PCS. The LLE algorithm analyzes the cross-sections (PCS) using their geometric properties to build a global manifold in an embedded space. The AP algorithm then clusters the local cross-sections by propagating affinities among them in the embedded space to form different sweep components. We demonstrate the robustness and efficiency of the algorithms through many examples, including actual laser-scanned (point cloud) mechanical parts.
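Since LLE and AP are standard algorithms with off-the-shelf implementations, the organize-and-cluster stage can be sketched as below. PCS extraction from a mesh is omitted, and the per-cross-section feature vector is a hypothetical stand-in (e.g., perimeter, area, orientation), not the paper's descriptor:

```python
# Sketch of the organize-and-cluster stage using scikit-learn's LLE and
# Affinity Propagation. Each row of pcs_features stands for one extracted
# cross-section, described by hypothetical geometric properties.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(1)
# Fake descriptors for two sweep components: 40 small round sections and
# 30 elongated ones.
pcs_features = np.vstack([
    rng.normal([1.0, 1.0, 0.0], 0.05, (40, 3)),
    rng.normal([4.0, 1.5, 0.8], 0.05, (30, 3)),
])

# Step 1: embed the cross-sections in a low-dimensional manifold with LLE.
coords = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(pcs_features)

# Step 2: cluster in the embedded space with Affinity Propagation, which
# chooses the number of clusters (candidate sweep components) automatically.
labels = AffinityPropagation(random_state=0).fit_predict(coords)
print(len(set(labels)), "candidate sweep components")
```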
59.
Krishankumar R., Sivagami R., Saha Abhijit, Rani Pratibha, Arun Karthik, Ravichandran K. S. Applied Intelligence, 2022, 52(12): 13497-13519
Applied Intelligence - The role of cloud services in the data-intensive industry is indispensable. Cision recently reported that the cloud market would grow to 55 billion USD, with an active...
60.
The assumption of proportional hazards (PH), fundamental to the Cox PH model, sometimes may not hold in practice. In this paper, we propose a generalization of the Cox PH model in terms of the cumulative hazard function, taking a form similar to the Cox PH model with the extension that the baseline cumulative hazard function is raised to a power function. Our model allows for interaction between covariates and the baseline hazard, and, for the two-sample problem, it includes the case of two Weibull distributions, and of two extreme value distributions, differing in both scale and shape parameters. The partial likelihood approach cannot be applied here to estimate the model parameters. We instead use the full likelihood approach, via a cubic B-spline approximation of the baseline hazard, to estimate the model parameters. A semi-automatic procedure for knot selection based on Akaike's information criterion is developed. We illustrate the applicability of our approach using real-life data.
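The abstract does not give the model's exact parameterization; one plausible form consistent with its description (an assumption, shown only for concreteness) is:

```latex
% Plausible form of the generalized model: the conditional cumulative
% hazard is Cox-like, but the baseline cumulative hazard is raised to a
% covariate-dependent power, allowing covariate-baseline interaction.
\[
  H(t \mid x) \;=\; \bigl[H_0(t)\bigr]^{\exp(\gamma^{\top} x)}\,
                    \exp(\beta^{\top} x)
\]
```

Under this form, setting gamma = 0 recovers the ordinary Cox PH model H(t | x) = H_0(t) exp(beta'x), while a nonzero gamma makes the baseline hazard interact with the covariates; with H_0(t) = t, the two-sample case yields two Weibull distributions differing in both shape and scale, matching the behavior the abstract describes.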