  Full text (fee-based)   770 articles
  Free   20 articles
  Free (domestic)   1 article
Electrical engineering   3 articles
Chemical industry   151 articles
Metal processing   5 articles
Machinery and instruments   12 articles
Building science   22 articles
Energy and power   24 articles
Light industry   64 articles
Water conservancy engineering   8 articles
Radio and electronics   100 articles
General industrial technology   94 articles
Metallurgical industry   20 articles
Nuclear technology   5 articles
Automation technology   283 articles
  2024   1 article
  2023   6 articles
  2022   21 articles
  2021   19 articles
  2020   12 articles
  2019   12 articles
  2018   15 articles
  2017   18 articles
  2016   23 articles
  2015   27 articles
  2014   33 articles
  2013   61 articles
  2012   54 articles
  2011   56 articles
  2010   42 articles
  2009   59 articles
  2008   50 articles
  2007   51 articles
  2006   42 articles
  2005   30 articles
  2004   22 articles
  2003   18 articles
  2002   15 articles
  2001   10 articles
  2000   6 articles
  1999   10 articles
  1998   12 articles
  1997   12 articles
  1996   7 articles
  1995   10 articles
  1994   7 articles
  1993   3 articles
  1992   4 articles
  1991   3 articles
  1990   3 articles
  1989   2 articles
  1988   2 articles
  1987   3 articles
  1986   1 article
  1985   1 article
  1984   1 article
  1980   1 article
  1977   1 article
  1970   1 article
  1965   1 article
  1958   1 article
  1957   2 articles
Sort order: a total of 791 results found; search time 15 ms
71.
An improved version of the function estimation program GDF is presented. The main enhancements of the new version are multi-output function estimation, the ability to define custom functions in the grammar, and selection of the error function. The new version has been evaluated on a series of classification and regression datasets that are widely used for evaluating such methods. It is compared with two well-known neural network methods and outperforms them on 5 of the 10 datasets.

Program summary

Title of program: GDF v2.0
Catalogue identifier: ADXC_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXC_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 98 147
No. of bytes in distributed program, including test data, etc.: 2 040 684
Distribution format: tar.gz
Programming language: GNU C++
Computer: The program is designed to be portable to all systems running the GNU C++ compiler
Operating system: Linux, Solaris, FreeBSD
RAM: 200 000 bytes
Classification: 4.9
Does the new version supersede the previous version?: Yes
Nature of problem: Function estimation attempts to discover, from a series of input data, a functional form that best describes them. This can be performed with parametric models whose parameters adapt to the input data.
Solution method: Functional forms are created by genetic programming as approximations to the symbolic regression problem.
Reasons for new version: The GDF package was extended to be more flexible and user-customizable than the previous release. Users can extend the package by defining their own error functions and can extend the grammar by adding new functions to the function repertoire. The new version can also estimate multi-output functions, which allows it to be used for classification problems.
Summary of revisions: The following features have been added to the GDF package:
Multi-output function approximation. The package can now approximate any function f: R^n → R^m. This also gives the package the capability of performing classification, not only regression.
User-defined functions can be added to the grammar's repertoire, extending the regression capabilities of the package. This feature is currently limited to three functions, but the limit can easily be increased.
Capability of selecting the error function. Apart from the mean square error, the package now offers other error functions such as the mean absolute error and the maximum square error. User-defined error functions can also be added to the set of error functions.
More verbose output. The main program displays more information to the user, including the default values of the parameters. The package also lets the user specify an output file in which the output of the gdf program on the testing set is stored after the process terminates.
Additional comments: A technical report describing the revisions, experiments and test runs is packaged with the source code.
Running time: Depends on the training data.
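As an illustration of the error-function selection feature described above, the following minimal sketch (in Python rather than the package's GNU C++, with all names invented for the example) evaluates a candidate multi-output model under a user-selectable error function in the spirit of GDF v2.0; it is not the GDF implementation itself.

```python
import numpy as np

# Illustrative error functions mirroring the options named above; the actual
# GDF error functions may differ in detail.
ERROR_FUNCTIONS = {
    "mse": lambda y, t: np.mean((y - t) ** 2),   # mean square error
    "mae": lambda y, t: np.mean(np.abs(y - t)),  # mean absolute error
    "max": lambda y, t: np.max((y - t) ** 2),    # maximum square error
}

def fitness(candidate, X, T, error="mse"):
    """Evaluate a candidate model (e.g., an expression produced by
    grammar-guided genetic programming) on inputs X against the
    multi-output targets T."""
    Y = np.array([candidate(x) for x in X])      # shape (n_samples, n_outputs)
    return ERROR_FUNCTIONS[error](Y, T)

# Example: a hand-written candidate approximating a two-output function.
candidate = lambda x: np.array([x[0] + x[1], x[0] * x[1]])
X = np.random.rand(100, 2)
T = np.column_stack([X[:, 0] + X[:, 1], X[:, 0] * X[:, 1]])
print(fitness(candidate, X, T, error="mae"))
```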
72.
With the advent of information and related emerging technologies, such as RFID, small-size sensors and sensor networks or, more generally, product embedded information devices (PEID), a new generation of products called smart or intelligent products is available in the market.

Although various definitions of intelligent products have been proposed, we introduce a new definition of the notion of Intelligent Product, inspired by what happens in nature with us as human beings and the way we develop intelligence and knowledge. We see an intelligent product as a product system which contains sensing, memory, data processing, reasoning and communication capabilities at four intelligence levels. These future generations of Intelligent Products will need new Product Data Technologies (PDT) allowing the seamless interoperability of systems and the exchange not only of static but also of dynamic product data. Current PDT standards cover only the lowest intelligence level of today's products. In this context, we try to shape the current state and a possible future of Product Data Technologies from a Closed-Loop Product Lifecycle Management (C-L PLM) perspective.

Our approach is founded on recent findings of the FP6 IP 507100 project PROMISE and follow-up research work. Standards of the STEP family, covering the product lifecycle to a certain extent (PLCS), as well as MIMOSA and ISO 15926, are discussed together with more recent technologies for the management of ID and sensor data, such as EPCglobal, OGC-SWE, and relevant PROMISE propositions for standards.

Finally, the first efforts towards ontology-based semantic standards for product lifecycle management and associated knowledge management and sharing are presented and discussed.
73.
74.
This paper presents an efficient dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A non-trivial extension of Featherstone's O(n) recursive forward dynamics algorithm is derived which allows one or more constraints to be enforced on the animated figures. We demonstrate how the constraint force evaluation algorithm we have developed makes it possible to simulate collisions between articulated figures, to compute the results of impulsive forces, to enforce joint limits, to model closed kinematic loops, and to robustly control motion at interactive rates. Particular care has been taken to make the algorithm not only fast but also easy to implement and use. To better illustrate how the constraint force evaluation algorithm works, we provide pseudocode for its major components. Additionally, we analyze its computational complexity, and finally we present examples demonstrating how our system has been used to generate interactive, physically correct, complex motion with little user effort.
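To make the structure of such a system concrete, the sketch below shows a generic outer simulation loop in which an O(n) forward-dynamics routine and a constraint-force evaluator are supplied as callbacks; the function names and the simple semi-implicit Euler integration are assumptions for illustration and do not reproduce the paper's algorithm or its pseudocode.

```python
import numpy as np

def simulate(figure, dt, n_steps, forward_dynamics, constraint_forces):
    """Generic outer loop of a constrained articulated-figure simulation.

    forward_dynamics(q, qd, tau) -> qdd : joint accelerations, e.g. from an
        O(n) Featherstone-style recursion supplied by the dynamics engine.
    constraint_forces(q, qd) -> tau_c   : generalized forces enforcing joint
        limits, contacts or loop-closure constraints.
    """
    q, qd = figure["q"].copy(), figure["qd"].copy()
    for _ in range(n_steps):
        tau_c = constraint_forces(q, qd)            # evaluate constraint forces
        qdd = forward_dynamics(q, qd, figure["tau"] + tau_c)
        qd += dt * qdd                              # semi-implicit Euler step
        q += dt * qd
    return q, qd

# Toy usage: a single unconstrained 1-DOF joint with unit inertia.
figure = {"q": np.zeros(1), "qd": np.zeros(1), "tau": np.array([-9.81])}
q, qd = simulate(figure, 1e-3, 1000,
                 forward_dynamics=lambda q, qd, tau: tau,
                 constraint_forces=lambda q, qd: np.zeros(1))
```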
75.
Mutual information (MI) is used in feature selection to evaluate two key properties of optimal features: the relevance of a feature to the class variable and the redundancy of similar features. Conditional mutual information (CMI), i.e., the MI of a candidate feature with the class variable conditioned on the features already selected, is a natural extension of MI but has not been applied so far because of the complications of estimating high-dimensional distributions. We propose a nearest-neighbor estimate of CMI, appropriate for high-dimensional variables, and build an iterative scheme for sequential feature selection with a termination criterion, called CMINN. We show that CMINN is equivalent to MI-filter feature selection methods, such as mRMR and MaxiMin, in the presence of solely single-feature effects, and is more appropriate for combined feature effects. We compare CMINN to mRMR and MaxiMin on simulated datasets involving combined effects and confirm the superiority of CMINN in selecting the correct features (indicated also by the termination criterion) and in giving the best classification accuracy. Application to ten benchmark databases shows that CMINN obtains the same or higher classification accuracy than mRMR and MaxiMin with a smaller cardinality of the selected feature subset.
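A rough sketch of such a selection scheme is given below. It uses a generic k-nearest-neighbor (Frenzel-Pompe style) CMI estimator and a greedy forward loop with a simplified termination rule; the estimator, threshold and function names are illustrative stand-ins rather than the authors' CMINN implementation.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def cmi_knn(x, y, z, k=5):
    """k-NN estimate of I(x; y | z) for numeric data (Frenzel-Pompe style).
    x, y are 1-D arrays; z is a 2-D array of conditioning features."""
    x, y = x.reshape(-1, 1), y.reshape(-1, 1)
    xyz = np.hstack([x, y, z])
    # Distance to the k-th neighbour in the joint space (Chebyshev metric).
    eps = cKDTree(xyz).query(xyz, k=k + 1, p=np.inf)[0][:, -1]

    def counts(pts):
        tree = cKDTree(pts)
        return np.array([len(tree.query_ball_point(pt, r=e - 1e-12, p=np.inf)) - 1
                         for pt, e in zip(pts, eps)])

    n_xz, n_yz, n_z = counts(np.hstack([x, z])), counts(np.hstack([y, z])), counts(z)
    return digamma(k) + np.mean(digamma(n_z + 1) - digamma(n_xz + 1) - digamma(n_yz + 1))

def greedy_cmi_selection(X, y, max_features=10, threshold=0.0, k=5):
    """Greedy forward selection: repeatedly add the feature with the largest
    estimated CMI to the class given the features selected so far, stopping
    when the best score drops below the threshold (simplified criterion)."""
    selected, remaining = [], list(range(X.shape[1]))
    y = y.astype(float)                      # class labels treated numerically
    while remaining and len(selected) < max_features:
        z = X[:, selected] if selected else np.zeros((X.shape[0], 1))
        scores = [cmi_knn(X[:, j], y, z, k=k) for j in remaining]
        best = int(np.argmax(scores))
        if scores[best] <= threshold:
            break
        selected.append(remaining.pop(best))
    return selected
```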
76.
The Mahalanobis-Taguchi (MT) strategy combines mathematical and statistical concepts such as the Mahalanobis distance, Gram-Schmidt orthogonalization and experimental designs to support diagnosis and decision-making based on multivariate data. The primary purpose is to develop a scale that measures the degree of abnormality of cases compared to "normal" or "healthy" cases, i.e., a continuous scale derived from a set of binary-classified cases. An optimal subset of variables for measuring abnormality is then selected, and rules for future diagnosis are defined based on these variables and the measurement scale. This maps well to problems in software defect prediction based on a multivariate set of software metrics and attributes. In this paper, the MT strategy, combined with a cluster analysis technique for determining the most appropriate training set, is described and applied to well-known datasets in order to evaluate the fault-proneness of software modules. The measurement scale resulting from the MT strategy is evaluated using ROC curves and shows that it is a promising technique for software defect diagnosis. It compares favorably to previously evaluated methods on a number of publicly available data sets. The special characteristic of the MT strategy, that it quantifies the level of abnormality, can also stimulate and inform discussions with engineers and managers in different defect prediction situations.
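The core of the measurement scale can be sketched as follows: fit the mean and covariance of the "normal" group and score test cases by their scaled Mahalanobis distance. This is a minimal illustration of the scale-construction step only, assuming nothing beyond the standard MD formula; the variable-selection step with Gram-Schmidt orthogonalization and designed experiments, and the paper's clustering of the training set, are omitted, and the data, threshold and names are invented for the example.

```python
import numpy as np

def mt_abnormality_scores(normal, test):
    """Scaled squared Mahalanobis distance of 'test' cases from the 'normal'
    group (MD^2 divided by the number of variables, as in the MT scale)."""
    mu = normal.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(normal, rowvar=False))  # pseudo-inverse for stability
    diff = test - mu
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)      # quadratic form per case
    return d2 / normal.shape[1]

# Example: software modules whose score exceeds a chosen threshold are flagged
# as potentially fault-prone (metrics and threshold are illustrative).
rng = np.random.default_rng(0)
normal_modules = rng.normal(size=(200, 5))             # metrics of "healthy" modules
candidate_modules = rng.normal(loc=0.8, size=(20, 5))  # metrics of modules to diagnose
scores = mt_abnormality_scores(normal_modules, candidate_modules)
print((scores > 2.0).sum(), "modules flagged as potentially defective")
```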
77.
Multiuser location-aware applications present a new form of mediated communication that takes place within a digital as well as a physical spatial context. The inherently hybrid character of locative media use requires that the designers of such applications take into account the way communication and social interaction are influenced by contextual elements. In this paper, an investigation into the communicational and social practices of users who participate in a location-based game is presented, with an emphasis on group formation and dynamics, interpersonal communication, and the experienced sense of immersion. The investigation employs a methodological approach that relies on both qualitative and quantitative data analysis. A series of results from this user experience study is presented and discussed.
78.
In the database outsourcing model, a data owner delegates database functionality to a third-party service provider, which answers queries received from clients. Authenticated query processing enables the clients to verify the correctness of query results. Despite the abundance of methods for authenticated processing in conventional databases, there is limited work on outsourced data streams. Stream environments pose new challenges, such as the need for fast structure updating, support for continuous query processing and authentication, and provision for temporal completeness. Specifically, in addition to the correctness of individual results, the client must be able to verify that there are no missing results between data updates. This paper presents a comprehensive set of methods covering relational streams. We first describe REF, a technique that achieves correctness and temporal completeness but incurs false transmissions, i.e., the provider has to inform the clients whenever there is a data update, even if their results are not affected. Then, we propose CADS, which minimizes the processing and transmission overhead through an elaborate indexing scheme and a virtual caching mechanism. In addition, we present an analytical study to determine the optimal indexing granularity, and we extend CADS for the case where the data distribution changes over time. Finally, we evaluate the effectiveness of our techniques through extensive experiments.
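For readers unfamiliar with authenticated query processing, the toy sketch below shows the basic idea of verifying outsourced results with a Merkle hash tree over a window of stream tuples whose root the owner signs; it verifies a whole returned window rather than supporting the range proofs, continuous authentication or temporal-completeness guarantees of REF and CADS, and every name and value in it is illustrative.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Merkle root over serialized records (computed by the data owner)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Owner: builds the tree over the current window of stream tuples and signs the
# root (the signature itself is omitted here).
window = [f"{ts},{val}".encode() for ts, val in [(1, 10), (2, 13), (3, 9), (4, 17)]]
signed_root = merkle_root(window)

# Client: recomputes the root from the results returned by the provider and
# compares it with the owner-signed value; any omission or tampering changes it.
returned = list(window)
assert merkle_root(returned) == signed_root, "results tampered with or incomplete"
```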
79.
We present a data-driven dynamic coupling between discrete and continuous methods for tracking objects with many degrees of freedom, which overcomes the limitations of previous techniques. In our approach, two trackers work in parallel, and the coupling between them is based on the tracking error. We use a model-based continuous method to achieve accurate results and, in cases of failure, we re-initialize the model using our discrete tracker. This method maintains the accuracy of a more tightly coupled system while increasing its efficiency. At any given frame, our discrete tracker uses the current and several previous frames to search a database for the best-matching solution. For improved robustness, object configuration sequences, rather than single configurations, are stored in the database. We apply our framework to the problem of 3D hand tracking from image sequences and to the discrimination between fingerspelling and continuous signs in American Sign Language.
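The discrete tracker's database lookup can be pictured with the small sketch below, which matches the current and several previous observation frames against stored configuration sequences using a plain Euclidean distance; the feature representation, distance measure and array shapes are assumptions for illustration, not the paper's actual matching scheme.

```python
import numpy as np

def best_matching_sequence(query_frames, database):
    """Return the index of (and distance to) the stored configuration sequence
    closest to the query.  database: (n_sequences, seq_len, feat_dim);
    query_frames: (seq_len, feat_dim) holding the current and previous frames."""
    diffs = database - query_frames[None, :, :]
    dists = np.linalg.norm(diffs, axis=2).sum(axis=1)   # summed per-frame distance
    best = int(np.argmin(dists))
    return best, float(dists[best])

# Usage idea: when the continuous, model-based tracker's error grows too large,
# re-initialize the model from the best-matching stored sequence.
database = np.random.rand(500, 4, 60)   # 500 stored 4-frame pose sequences
query = np.random.rand(4, 60)           # current frame plus three previous ones
idx, dist = best_matching_sequence(query, database)
print("re-initialize from sequence", idx)
```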
80.
In the Internet era, users' fundamental privacy and anonymity rights have received significant research and regulatory attention. This is not only a result of the exponential growth of data that users generate when accomplishing their daily tasks by means of computing devices with advanced capabilities, but also because of inherent data properties that allow the data to be linked with a real or soft identity. Service providers exploit these facts for user monitoring and identification, thereby impacting users' anonymity, relying mainly on personally identifiable information or on sensors that generate unique data in order to provide personalized services. In this paper, we report on the feasibility of user identification using general system features such as memory, CPU and network data, as provided by the underlying operating system. We provide a general framework based on supervised machine learning algorithms, both for distinguishing users and for informing them about their anonymity exposure. We conduct a series of experiments to collect trial datasets for users' engagement on a shared computing platform. We evaluate various well-known classifiers in terms of their effectiveness in distinguishing users, and we perform a sensitivity analysis of their configuration setup to discover optimal settings under diverse conditions. Furthermore, we examine the bounds of sampling data to eliminate the chances of user identification and thus promote anonymity. Overall, the results show that under certain configurations users' anonymity can be preserved, while in other cases users' identities can be inferred with high accuracy, without relying on personally identifiable information.
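The classification setup described above can be illustrated with the following sketch, which trains a standard scikit-learn classifier on per-user samples of system-level features and reports cross-validated identification accuracy; the synthetic data, the choice of random forest and all parameter values are assumptions, not the authors' framework or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for observation windows of general system features
# (e.g., memory use, CPU load, network counters); labels are the active user.
rng = np.random.default_rng(1)
n_users, samples_per_user, n_features = 5, 200, 12
X = np.vstack([rng.normal(loc=u, scale=1.0, size=(samples_per_user, n_features))
               for u in range(n_users)])
y = np.repeat(np.arange(n_users), samples_per_user)

# One well-known classifier; a sensitivity analysis would sweep its settings.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"identification accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```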