211.
This paper addresses the anomaly detection problem in large-scale data mining applications using residual subspace analysis. We are specifically concerned with situations where the full data cannot practically be obtained due to physical limitations such as low bandwidth or limited memory, storage, or computing power. Motivated by the recent compressed sensing (CS) theory, we suggest a framework in which random projection is used to obtain compressed data, addressing the scalability challenge. Our theoretical contribution shows that the spectral property of the data is approximately preserved under such a projection, and thus the performance of spectral-based methods for anomaly detection on compressed data is almost equivalent to the case in which the raw data is completely available. Our second contribution is a framework that uses this result to detect anomalies in the compressed data directly, thus circumventing the problems of data acquisition in large sensor networks. We have conducted extensive experiments to detect anomalies in network and surveillance applications on large datasets, including the benchmark PETS 2007 and 83 GB of real footage from three public train stations. Our results show that the proposed method is scalable and, importantly, its performance is comparable to that of conventional anomaly detection methods when the complete data is available.
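The residual-subspace idea in this abstract can be sketched in a few lines: score each point by its energy outside the top-k principal subspace, and compare scores computed on the raw data with scores computed after a random Gaussian projection. All data, dimensions, and the projection size below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 500 samples near a 5-D subspace of R^200; the first 10
# rows get large isotropic perturbations and play the role of anomalies.
n, d, k = 500, 200, 5
basis = rng.standard_normal((k, d))
X = rng.standard_normal((n, k)) @ basis + 0.01 * rng.standard_normal((n, d))
X[:10] += 3 * rng.standard_normal((10, d))

def residual_score(X, k):
    """Energy outside the top-k principal subspace (residual subspace analysis)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[:k].T                      # coordinates in the top-k subspace
    return np.sum(Xc**2, axis=1) - np.sum(proj**2, axis=1)

# Compressed view: a random Gaussian projection down to m << d dimensions.
m = 40
P = rng.standard_normal((d, m)) / np.sqrt(m)
scores_raw = residual_score(X, k)
scores_cs = residual_score(X @ P, k)

# The anomalies should rank highest under both scores.
top_raw = set(np.argsort(scores_raw)[-10:])
top_cs = set(np.argsort(scores_cs)[-10:])
print(len(top_raw & top_cs))
```

The large overlap between the two top-10 sets illustrates the claim that the spectral structure, and hence the residual-based ranking, survives the random projection.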
212.
This paper focuses on space-efficient representations of rooted trees that permit basic navigation in constant time. While most previous work has focused on binary trees, we turn our attention to trees of higher degree. We consider both cardinal trees (or k-ary tries), where each node has k slots, labelled {1, ..., k}, each of which may have a reference to a child, and ordinal trees, where the children of each node are simply ordered. Our representations use a number of bits close to the information-theoretic lower bound and support operations in constant time. For ordinal trees we support the operations of finding the degree, parent, ith child, and subtree size. For cardinal trees the structure also supports, in addition to the ordinal tree operations, finding the child labelled i of a given node. These representations also provide a mapping from the n nodes of the tree onto the integers {1, ..., n}, giving unique labels to the nodes of the tree; this labelling can be used to store satellite information with the nodes efficiently.
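A minimal sketch of one classic succinct ordinal-tree encoding in the same spirit, LOUDS (Level-Order Unary Degree Sequence), may make the setting concrete. This is a related standard encoding, not the paper's exact structure, and the linear scans below stand in for the o(n)-bit rank/select indexes that make each operation constant-time.

```python
def louds(children, root=0):
    """Encode a tree breadth-first: '1' per child then a '0' for each node,
    prefixed by '10' for an imaginary super-root."""
    bits, queue = "10", [root]
    while queue:
        v = queue.pop(0)
        bits += "1" * len(children[v]) + "0"
        queue.extend(children[v])
    return bits

def select(bits, b, j):            # position of the j-th occurrence of bit b
    seen = 0
    for i, c in enumerate(bits):
        if c == b:
            seen += 1
            if seen == j:
                return i
    raise ValueError("not found")

def rank(bits, b, i):              # number of b's in bits[0..i]
    return bits[: i + 1].count(b)

# A node is identified by the position of its '1' in the bit string.
def first_child(bits, x):
    y = select(bits, "0", rank(bits, "1", x)) + 1
    return y if y < len(bits) and bits[y] == "1" else None   # None => leaf

def parent(bits, x):
    return select(bits, "1", rank(bits, "0", x))

# Example: root 0 with children 1, 2; node 1 with child 3.
tree = {0: [1, 2], 1: [3], 2: [], 3: []}
bits = louds(tree)
print(bits)  # '10' + '110' (root) + '10' (node 1) + '0' (node 2) + '0' (node 3)
```

With rank/select answered in O(1) by auxiliary structures, both navigation functions above become constant-time, which is the regime the paper works in.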
213.
Robust identification of uncertain systems arises whenever a chosen family of models does not completely describe reality. In these situations the issue of unmodeled dynamics gains significance in addition to random measurement noise. To deal with such mixed stochastic/deterministic settings, we introduce a novel notion of robust consistency, which requires that the expectation (with respect to noise) of the worst-case (with respect to unmodeled dynamics) identification error asymptotically approach zero. It turns out that this notion leads to transparent necessary and sufficient conditions. We show that robust consistency holds if and only if there is an instrument-input pair capable of annihilating the residual error as well as the stochastic noise. An extension of this result to the well-known "bounded but unknown" noise model shows that, apart from a set of Lebesgue measure zero, the error bound asymptotically approaches zero.
214.
Radhakrishnan, Sen, Venkatesh 《Algorithmica》2008, 34(4): 462-479
   Abstract. We study the quantum complexity of the static set membership problem: given a subset S (|S| ≤ n) of a universe of size m (≫ n), store it as a table T: {0,1}^r → {0,1} of bits so that queries of the form "Is x in S?" can be answered. The goal is to use a small table and yet answer queries using few bit probes. This problem was considered recently by Buhrman et al. [BMRV], who showed lower and upper bounds for it in the classical deterministic and randomized models. In this paper we formulate the problem in the "quantum bit probe model". We assume that access to the table T is provided by means of a black-box (oracle) unitary transform O_T that takes the basis state |y, b⟩ to the basis state |y, b ⊕ T(y)⟩. The query algorithm is allowed to apply O_T on any superposition of basis states. We show trade-off results between space (defined as 2^r) and the number of probes (oracle calls) in this model. Our results show that the lower bounds shown in [BMRV] for the classical model also hold (with minor differences) in the quantum bit probe model; these bounds almost match the classical upper bounds. Our lower bounds are proved using linear algebraic arguments.
215.
In this paper, we present a machine learning approach to measuring the visual quality of JPEG-coded images. The features for predicting perceived image quality are extracted by considering key human visual sensitivity (HVS) factors such as edge amplitude, edge length, background activity, and background luminance. Image quality assessment involves estimating the functional relationship between HVS features and subjective test scores. The quality of a compressed image is obtained without reference to its original image (a "no-reference" metric). Here, the quality estimation problem is transformed into a classification problem and solved using the extreme learning machine (ELM) algorithm. In ELM, the input weights and bias values are chosen randomly, and the output weights are calculated analytically. The generalization performance of the ELM algorithm for classification problems with an imbalance in the number of samples per quality class depends critically on the input weights and bias values. Hence, we propose two schemes, namely the k-fold selection scheme (KS-ELM) and the real-coded genetic algorithm (RCGA-ELM), to select input weights and bias values that maximize the generalization performance of the classifier. Results indicate that the proposed schemes significantly improve the performance of the ELM classifier under imbalanced conditions for image quality assessment. The experimental results show that the visual quality estimated by the proposed RCGA-ELM emulates the mean opinion score very well, and are compared with an existing JPEG no-reference image quality metric and the full-reference structural similarity image quality metric.
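The ELM step the abstract relies on is compact enough to sketch: input weights and biases are drawn at random, the hidden-layer output matrix is formed, and the output weights come from a single pseudoinverse solve. The paper's KS-ELM and RCGA-ELM schemes then search over these random draws; that search and the HVS features are omitted here, and the toy two-class data is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, T, n_hidden):
    """Classic ELM: random hidden layer, analytic output weights."""
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random bias values
    H = np.tanh(X @ W + b)                            # hidden-layer outputs
    beta = np.linalg.pinv(H) @ T                      # least-squares solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy stand-in for quality classes: two classes separable in feature space.
X = rng.standard_normal((200, 4))
labels = (X[:, 0] + X[:, 1] > 0).astype(int)
T = np.eye(2)[labels]                                 # one-hot targets
W, b, beta = elm_train(X, T, n_hidden=50)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
acc = (pred == labels).mean()
print(acc)  # training accuracy
```

Because only `beta` is learned, training is a single linear solve; the sensitivity of `acc` to the random draw of `W` and `b` is exactly what motivates the paper's two selection schemes.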
216.
In this paper, we present a machine learning approach for subject-independent human action recognition using a depth camera, emphasizing the importance of depth in action recognition. The proposed approach uses the flow information in all three dimensions to classify an action. We obtain the 2-D optical flow and use it along with the depth image to obtain the depth flow (Z motion vectors). The obtained flow captures the dynamics of the actions in space-time. Feature vectors are obtained by averaging the 3-D motion over a grid laid over the silhouette in a hierarchical fashion; these hierarchical fine-to-coarse windows capture the motion dynamics of the object at various scales. The extracted features are used to train a Meta-cognitive Radial Basis Function Network (McRBFN) that uses a Projection Based Learning (PBL) algorithm, referred to henceforth as PBL-McRBFN. PBL-McRBFN begins with zero hidden neurons and builds the network based on the best human learning strategy, namely self-regulated learning in a meta-cognitive environment. When a sample is used for learning, PBL-McRBFN uses the sample overlapping conditions and a projection-based learning algorithm to estimate the parameters of the network. The performance of PBL-McRBFN is compared with that of Support Vector Machine (SVM) and Extreme Learning Machine (ELM) classifiers, with every person and action represented in the training and testing datasets. The performance study shows that PBL-McRBFN outperforms these classifiers in recognizing actions in 3-D. Further, a subject-independent study is conducted using a leave-one-subject-out strategy, and the generalization performance is tested; it is observed from this study that McRBFN is capable of generalizing actions accurately. The performance of the proposed approach is benchmarked on the Video Analytics Lab (VAL) dataset and the Berkeley Multimodal Human Action Database (MHAD).
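The hierarchical feature step can be sketched directly: average a per-pixel 3-D flow field (u, v, depth flow) over coarse-to-fine grids laid on the silhouette window and concatenate the cell means. The grid sizes and the toy flow field below are illustrative choices, not taken from the paper.

```python
import numpy as np

def hierarchical_flow_features(flow, levels=(1, 2, 4)):
    """flow: (H, W, 3) array of per-pixel (u, v, depth-flow) vectors.
    Returns concatenated per-cell mean 3-D motion, coarse to fine."""
    H, W, _ = flow.shape
    feats = []
    for g in levels:
        hs, ws = H // g, W // g
        for i in range(g):
            for j in range(g):
                cell = flow[i*hs:(i+1)*hs, j*ws:(j+1)*ws]
                feats.append(cell.mean(axis=(0, 1)))   # mean motion per cell
    return np.concatenate(feats)   # 3 * (1 + 4 + 16) = 63 values here

# Toy field: the upper half of the window moves down and toward the camera.
flow = np.zeros((64, 64, 3))
flow[:32] = [0.0, 1.0, 0.5]
f = hierarchical_flow_features(flow)
print(f.shape)  # (63,)
```

The coarse 1x1 cell summarizes gross body motion while the 4x4 cells localize it, which is the fine-to-coarse multi-scale behaviour the abstract describes.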
217.
The ¹⁸⁸W/¹⁸⁸Re generator using an acidic alumina column for chromatographic separation of ¹⁸⁸Re has remained the most popular procedure worldwide. The capacity of bulk alumina for taking up tungstate ions is limited (~50 mg W/g), necessitating the use of very high specific activity ¹⁸⁸W (185-370 GBq/g), which can be produced in only a very few high-flux reactors available in the world. In this context, the use of high-capacity sorbents would not only mitigate the requirement for high specific activity ¹⁸⁸W but also facilitate easy access to ¹⁸⁸Re. A solid-state mechanochemical approach to synthesize nanocrystalline γ-Al₂O₃ possessing very high W-sorption capacity (500 mg W/g) was developed. Structural and other investigations of the material were carried out using X-ray diffraction (XRD), transmission electron microscopy (TEM), Brunauer-Emmett-Teller (BET) surface area analysis, thermogravimetric-differential thermal analysis (TG-DTA), and dynamic light scattering (DLS) techniques. The synthesized material had an average crystallite size of ~5 nm and a surface area of 252 ± 10 m²/g. Sorption characteristics such as distribution ratios (Kd), capacity, breakthrough profile, and elution behaviour were investigated to ensure quantitative uptake of ¹⁸⁸W and selective elution of ¹⁸⁸Re. An 11.1 GBq (300 mCi) ¹⁸⁸W/¹⁸⁸Re generator was developed using the nanocrystalline γ-Al₂O₃, and its performance was evaluated over a period of 6 months. The overall yield of ¹⁸⁸Re was >80%, with >99.999% radionuclidic purity and >99% radiochemical purity. The eluted ¹⁸⁸Re possessed appreciably high radioactive concentration and was compatible with the preparation of ¹⁸⁸Re-labelled radiopharmaceuticals.
218.
The potential effect of cryogenic cooling on the frictional behaviour of the newly developed titanium alloy Ti–5Al–4V–0.6Mo–0.4Fe (Ti54) sliding against tungsten carbide was investigated and compared with that of the conventional titanium alloy Ti6Al4V (Ti64). In this study, four models were developed to describe the interrelationship between the friction coefficient (response) and independent variables such as speed, load, and sliding distance (time). These variables were investigated using design of experiments and the response surface methodology (RSM), which made it possible to study the effect of main and mixed (interaction) independent variables on the friction coefficient (COF) of both titanium alloys.

Under cryogenic conditions, the friction coefficients of Ti64 and Ti54 behaved differently: an increase in the case of Ti64 and a decrease in the case of Ti54. For Ti64, at higher levels of load and speed, sliding under cryogenic conditions produced relatively higher friction coefficients than those obtained in dry air. On the contrary, introduction of cryogenic fluid reduced the friction coefficient of Ti54 at all tested conditions of load, speed, and time.

The established models demonstrated that the mixed effects of load/speed, time/speed, and load/time consistently decrease the COF of Ti54. This was not the case for Ti64, whose COF increased by up to 20% when tested at higher levels of load and sliding time. Furthermore, the models indicated that the interaction of load and speed was the most influential for both Ti alloys and had the most substantial influence on friction. In addition, the COF of both alloys behaved linearly with speed but nonlinearly with load.
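The RSM step described above amounts to a least-squares fit of a second-order polynomial in load, speed, and time, including the interaction terms the abstract discusses. The friction data below is synthetic, generated only to show the fitting mechanics; the paper fits such models to measured COF values for Ti64 and Ti54.

```python
import numpy as np

rng = np.random.default_rng(2)

def design_matrix(load, speed, time):
    """Second-order response-surface model terms."""
    return np.column_stack([
        np.ones_like(load), load, speed, time,      # main effects
        load*speed, load*time, speed*time,          # interaction effects
        load**2, speed**2, time**2,                 # curvature terms
    ])

# Synthetic "experiment": COF dominated by a load*speed interaction plus noise.
load = rng.uniform(10, 50, 30)
speed = rng.uniform(0.1, 1.0, 30)
time = rng.uniform(60, 600, 30)
cof = 0.4 - 0.002*load*speed + 0.0001*time + 0.01*rng.standard_normal(30)

X = design_matrix(load, speed, time)
coef, *_ = np.linalg.lstsq(X, cof, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((cof - pred)**2) / np.sum((cof - cof.mean())**2)
print(round(r2, 3))
```

The fitted `coef` entries for the cross terms are what let the paper isolate mixed load/speed, time/speed, and load/time effects on the COF.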
219.
In this paper, we show that the transfer matrix theory of multilayer optics can be used to solve the modes of any two-dimensional (2D) waveguide for their effective indices and field distributions. A 2D waveguide, even one composed of numerous layers, is essentially a multilayer stack, and transmission through the stack can be analysed using transfer matrix theory. The result is a transfer matrix with four complex-valued elements, namely A, B, C and D. The effective index of a guided mode satisfies two conditions: (1) evanescent waves exist simultaneously in the first (cladding) layer and the last (substrate) layer, and (2) the complex element D vanishes. For a given mode, the field distribution in the waveguide is the result of a 'folded' plane wave. In each layer there is only propagation and absorption; at each boundary only reflection and refraction occur, which can be calculated according to the Fresnel equations. As examples, we show that this method can be used to solve the modes supported by multilayer step-index dielectric waveguides, slot waveguides, gradient-index waveguides and various plasmonic waveguides. The results indicate that the transfer matrix method is effective for solving 2D waveguide modes in general.
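The mode condition can be illustrated in the simplest case the paper's method covers: a symmetric three-layer slab (clad/core/clad) in TE polarization. Propagating (E, dE/dx) across the core with the layer's characteristic matrix and demanding evanescent decay on both sides gives a scalar function whose root is the guided-mode effective index, playing the role of the vanishing element D. The wavelength, indices, and thickness below are illustrative values, not from the paper.

```python
import numpy as np

lam = 1.55                    # wavelength, um (illustrative)
n_core, n_clad = 3.5, 1.444   # silicon-like core, oxide-like cladding
d = 0.22                      # core thickness, um
k0 = 2 * np.pi / lam

def f(neff):
    """Mode condition: zero when the field decays on both sides of the core."""
    kappa = k0 * np.sqrt(n_core**2 - neff**2)   # transverse wavenumber in core
    gamma = k0 * np.sqrt(neff**2 - n_clad**2)   # decay constant in cladding
    # Characteristic matrix of the core acting on (E, dE/dx):
    E1, dE1 = 1.0, gamma                        # decaying field below the core
    E2 = np.cos(kappa*d) * E1 + np.sin(kappa*d) / kappa * dE1
    dE2 = -kappa * np.sin(kappa*d) * E1 + np.cos(kappa*d) * dE1
    return dE2 + gamma * E2                     # require decay above the core

# Scan for a sign change between the cladding and core indices, then bisect.
lo, hi = n_clad + 1e-6, n_core - 1e-6
grid = np.linspace(lo, hi, 2000)
vals = np.array([f(x) for x in grid])
i = np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[-1]
a, b = grid[i], grid[i + 1]
for _ in range(60):
    m = (a + b) / 2
    a, b = (a, m) if np.sign(f(a)) != np.sign(f(m)) else (m, b)
neff = (a + b) / 2
print(round(neff, 3))  # fundamental TE effective index
```

For a stack of many layers the same logic applies with a product of per-layer matrices, which is the generalization the paper develops.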
220.
Improving material processing techniques for SiC/Al composites is significant because of their extensive application in various key industries. In this study, a SiC-reinforced Al 6061 composite was developed through the stir casting route and then subjected to hot extrusion to convert the round geometry into a hexagonal section. In total, nine experiments were conducted based on the L9 orthogonal array of Taguchi's technique, and the optimum levels were predicted by the average response graph method. During the experiments, ram speed, billet temperature, and the friction between the die and the billet were taken as the process variables, with extrusion force as the response variable. Additionally, analysis of variance was applied to determine the factor with the most significant influence on the response. Finally, a confirmation test was carried out to validate the results of the optimized model. To strengthen the validation, the well-known upper-bound analytical technique was also employed for comparison. The results of the upper-bound technique and the confirmation test agreed closely with the predicted values.
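The average-response analysis on an L9 array can be sketched directly: group the nine responses by factor level and pick, per factor, the level with the best mean (here the minimum, since extrusion force is to be reduced). The L9 layout is standard; the extrusion-force values below are made up purely to illustrate the mechanics.

```python
import numpy as np

# Standard Taguchi L9 orthogonal array: 3 factors at 3 levels each.
L9 = np.array([  # columns: ram speed, billet temperature, friction
    [1, 1, 1], [1, 2, 2], [1, 3, 3],
    [2, 1, 2], [2, 2, 3], [2, 3, 1],
    [3, 1, 3], [3, 2, 1], [3, 3, 2],
])
force = np.array([520, 480, 455, 500, 470, 490, 510, 495, 460.])  # hypothetical kN

factors = ["ram speed", "billet temp", "friction"]
best_levels = []
for j, name in enumerate(factors):
    means = [force[L9[:, j] == lvl].mean() for lvl in (1, 2, 3)]
    best = int(np.argmin(means)) + 1          # minimize extrusion force
    best_levels.append(best)
    print(name, [round(m, 1) for m in means], "-> level", best)
```

Each factor appears at every level exactly three times in the array, so the per-level means isolate main effects; an ANOVA on the same groupings then ranks the factors by significance, as in the study.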