111.
The digital revolution has changed the daily work of archivists. As a result, archivists have refined their analog practices and initiated new ones to accommodate the distinctive ways users now create and access information. While limitless storage and technological obsolescence may be surmountable challenges, appraisal remains a necessary component of the preservation enterprise. Macro-appraisal, a top-down functional assessment of an organization's high-level decision making that defines the enduring value of specific bodies of documentary evidence of these creative or administrative processes, is used to facilitate the management of diverse bodies of born-digital records in archives and special collections.
112.
In this paper, we consider the problem of flocking and shape-orientation control of multi-agent systems with inter-agent and obstacle collision avoidance. We first consider the problem of forcing a set of autonomous agents to form a desired formation shape and orientation while avoiding inter-agent collisions and collisions with convex obstacles, and following a trajectory known only to one of the agents, namely the leader of the formation. Building on the solution to this problem, we then guarantee obstacle collision avoidance by changing the size and the orientation of the formation. Changing the size and the orientation of the formation is helpful when the agents must pass through a narrow passage that the current size or orientation of the formation does not allow. We also propose collision avoidance algorithms that temporarily change the shape of the formation to avoid collision with stationary or moving nonconvex obstacles. Simulation results are presented to show the performance of the proposed control laws.
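To make the leader-referenced formation structure concrete, here is a minimal sketch of one common pattern for this kind of problem, not the paper's control law: each agent is attracted to its slot in a formation anchored to the leader's reference trajectory and repelled from nearby agents for inter-agent collision avoidance. The gains k_form and k_rep, the safety radius d_safe, and the square formation are illustrative assumptions, and for brevity every agent tracks its slot relative to the broadcast reference rather than only the leader knowing the trajectory.

```python
import numpy as np

def formation_step(pos, offsets, leader_traj, t, dt=0.05,
                   k_form=1.0, k_rep=0.5, d_safe=0.4):
    """One integration step of a simple leader-referenced formation law.

    pos         : (n, 2) current agent positions; agent 0 is the leader.
    offsets     : (n, 2) desired positions relative to the leader's slot.
    leader_traj : callable t -> (2,) reference point for the formation.
    """
    n = pos.shape[0]
    vel = np.zeros_like(pos)
    ref = leader_traj(t)
    for i in range(n):
        # Attraction toward the agent's slot in the formation.
        target = ref + offsets[i]
        vel[i] = k_form * (target - pos[i])
        # Repulsion from nearby agents (inter-agent collision avoidance).
        for j in range(n):
            if j == i:
                continue
            diff = pos[i] - pos[j]
            dist = np.linalg.norm(diff)
            if dist < d_safe:
                vel[i] += k_rep * diff / (dist**2 + 1e-9)
    return pos + dt * vel

# Example: four agents in a unit-square formation following a straight-line path.
pos = np.random.rand(4, 2)
offsets = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
for step in range(200):
    pos = formation_step(pos, offsets, lambda t: np.array([t, 0.0]), t=step * 0.05)
```

Scaling or rotating the formation, as the paper does to pass narrow passages, would amount to applying a time-varying scale factor or rotation matrix to `offsets`.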
113.
An extension to the divide-and-conquer algorithm (DCA) is presented in this paper to model constrained multibody systems. The constraints of interest are those applied to the system by inverse dynamics or control laws, rather than the kinematically closed loops that have been studied in the literature. These imposed constraints are often expressed in terms of the generalized coordinates and speeds, and a set of unknown generalized constraint forces must be considered in the equations of motion to enforce them. In this paper, the dynamics of this class of constrained multibody systems is formulated using a generalized DCA (GDCA). In this scheme, by introducing dynamically equivalent forcing systems, each generalized constraint force is replaced by a dynamically equivalent spatial constraint force applied from the appropriate parent body to the associated child body at the connecting joint, without violating the dynamics of the original system. The handle equations of motion are then formulated in terms of these dynamically equivalent spatial constraint forces. In the GDCA scheme, these equations are used in the assembly and disassembly processes to solve for the states of the system as well as the generalized constraint forces and/or Lagrange multipliers.
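For readers unfamiliar with the DCA itself, the following is a minimal sketch of the basic assembly-disassembly skeleton it builds on; the paper's generalized scheme, with its dynamically equivalent constraint forces, is more involved. Each body or subassembly carries a two-handle equation a_i = Φ_i0 F_0 + Φ_i1 F_1 + b_i; assembly eliminates the joint force between two subassemblies, and disassembly back-substitutes boundary forces to recover it. The zero-relative-acceleration joint, the point-mass example, and all numbers are illustrative assumptions.

```python
import numpy as np

class Body:
    """Two-handle equation: a_i = phi[i][0] @ F_0 + phi[i][1] @ F_1 + b[i]."""
    def __init__(self, phi, b, children=None):
        self.phi, self.b, self.children = phi, b, children

def assemble(A, B):
    # Join handle 1 of A to handle 0 of B with a zero-relative-acceleration
    # joint; the joint applies force Fj to A and -Fj to B.
    W = np.linalg.inv(A.phi[1][1] + B.phi[0][0])
    db = B.b[0] - A.b[1]
    phi = [[A.phi[0][0] - A.phi[0][1] @ W @ A.phi[1][0],
            A.phi[0][1] @ W @ B.phi[0][1]],
           [B.phi[1][0] @ W @ A.phi[1][0],
            B.phi[1][1] - B.phi[1][0] @ W @ B.phi[0][1]]]
    b = [A.b[0] + A.phi[0][1] @ W @ db,
         B.b[1] - B.phi[1][0] @ W @ db]
    return Body(phi, b, children=(A, B, W, db))

def disassemble(body, F0, F1, out):
    # Given the boundary forces of a subassembly, recover its joint force,
    # then recurse into the two halves with their own boundary forces.
    if body.children is None:
        return
    A, B, W, db = body.children
    Fj = W @ (-A.phi[1][0] @ F0 + B.phi[0][1] @ F1 + db)
    out.append(Fj)
    disassemble(A, F0, Fj, out)
    disassemble(B, -Fj, F1, out)

# Two unit point masses under gravity; a horizontal force f acts on the first.
# Both handles of a point mass coincide, so every phi block is (1/m) * I.
m, g, f = 1.0, np.array([0.0, 0.0, -9.81]), np.array([1.0, 0.0, 0.0])
I3 = np.eye(3)
A = Body([[I3 / m, I3 / m], [I3 / m, I3 / m]], [g + f / m, g + f / m])
B = Body([[I3 / m, I3 / m], [I3 / m, I3 / m]], [g, g])
root = assemble(A, B)
joint_forces = []
disassemble(root, np.zeros(3), np.zeros(3), joint_forces)  # free-free ends
print(joint_forces[0])   # [-0.5, 0, 0]: the joint transmits half of f
```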
114.
The multiple determination of chemical properties is a classical problem in analytical chemistry. The main challenge is to find the subset of variables that best represents the compounds. These variables are obtained with a spectrophotometer, which measures hundreds of correlated variables related to physicochemical properties that can be used to estimate the component of interest. The problem is thus the selection of a subset of informative, uncorrelated variables that minimizes the prediction error. Classical algorithms select a separate subset of variables for each compound considered. In this work we propose the use of SPEA-II (strength Pareto evolutionary algorithm II) and show that the variable selection algorithm can select a single subset that serves multiple determinations using multiple linear regression. The case study uses wheat data obtained by NIR (near-infrared) spectroscopy, where the objective is to determine a variable subgroup carrying information about protein content (%), test weight (kg/hl), wheat kernel texture (WKT, %), and farinograph water absorption (%). Results from traditional multivariate calibration techniques, namely the SPA (successive projections algorithm), PLS (partial least squares), and a mono-objective genetic algorithm, are presented for comparison. For NIR spectral analysis of protein concentration in wheat, the number of selected variables was reduced from 775 spectral variables to just 10 by the SPEA-II algorithm. The prediction error decreased from 0.2 with the classical methods to 0.09 with the proposed approach, a reduction of 55%. The model using variables selected by SPEA-II had better prediction performance than the classical algorithms and full-spectrum partial least squares.
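As a sketch of the idea, the toy implementation below evolves variable subsets under the two objectives the paper balances, prediction error of a multiple linear regression and subset size, keeping the Pareto-non-dominated subsets each generation. It is a simplified Pareto-dominance selection, not the full SPEA-II (no strength fitness, external archive, or density estimation), and the population size, mutation rate, and synthetic stand-in data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse_mlr(X, y, mask):
    """Prediction error of a least-squares fit restricted to the masked variables."""
    if mask.sum() == 0:
        return np.inf
    Xs = np.c_[np.ones(len(y)), X[:, mask]]          # add intercept column
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return float(np.sqrt(np.mean((Xs @ beta - y) ** 2)))

def dominates(a, b):
    """Pareto dominance on (rmse, n_vars): smaller is better in both."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_var_select(X, y, pop=40, gens=50, p_init=0.05):
    n_vars = X.shape[1]
    popu = rng.random((pop, n_vars)) < p_init         # random initial subsets
    for _ in range(gens):
        objs = [(rmse_mlr(X, y, m), m.sum()) for m in popu]
        # Keep the non-dominated individuals, then refill by bit-flip mutation.
        front = [i for i, a in enumerate(objs)
                 if not any(dominates(objs[j], a) for j in range(pop) if j != i)]
        elite = popu[front]
        children = elite[rng.integers(len(elite), size=pop - len(elite))].copy()
        children ^= rng.random(children.shape) < (1.0 / n_vars)
        popu = np.vstack([elite, children])
    objs = [(rmse_mlr(X, y, m), m.sum()) for m in popu]
    best = min(range(pop), key=lambda i: objs[i])
    return popu[best], objs[best]

# Toy stand-in for NIR spectra: 775 variables, response driven by 3 of them.
X = rng.standard_normal((80, 775))
y = X[:, [10, 200, 400]] @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(80)
mask, (err, k) = pareto_var_select(X, y)
print(f"selected {k} variables, RMSE {err:.3f}")
```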
115.
Asimov’s “three laws of robotics” and machine metaethics
Using Asimov’s “Bicentennial Man” as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we make a judgment about the ethical status of the machine itself, a judgment that will be particularly difficult to make. Finally, it is argued that Asimov’s “three laws of robotics” are an unsatisfactory basis for machine ethics, regardless of the status of the machine.
Susan Leigh Anderson
116.
In this paper, we address the problem of agent loss in vehicle formations and sensor networks via two separate approaches: (1) perform a ‘self-repair’ operation in the event of agent loss to recover desirable information architecture properties, or (2) introduce robustness into the information architecture a priori such that agent loss does not destroy desirable properties. We model the information architecture as a graph G(V, E), where V is a set of vertices representing the agents and E is a set of edges representing information flow amongst the agents. We focus on two properties of the graph, rigidity and global rigidity, which are required for formation shape maintenance and sensor network self-localization, respectively. For the self-repair approach, we show that while previous results permit local repair involving only neighbours of the lost agent, the repair cannot always be implemented using only local information. We present new results that make the local repair possible using only local information, describe their implementation, and illustrate with algorithms and examples. For the robustness approach, we investigate the structure of graphs with the property that rigidity or global rigidity is preserved after removing any single vertex (we call these properties 2-vertex-rigidity and 2-vertex-global-rigidity, respectively). Information architectures with such properties allow formation shape maintenance or self-localization to be performed even in the event of agent failure. We review a characterization of one class of 2-vertex-rigid graphs and develop a separate class, making significant strides towards a complete characterization. We also present a characterization of a class of 2-vertex-globally-rigid graphs. Copyright © 2008 John Wiley & Sons, Ltd.
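As an illustration of the properties at play, here is a small sketch that tests generic rigidity in the plane via the standard rank condition on the rigidity matrix (rank 2n − 3 for a generic realization of a graph on n vertices) and then checks 2-vertex-rigidity by deleting each vertex in turn. The randomized rank test and the K4 example are illustrative only; this is not the paper's characterization.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

def generically_rigid(n, edges):
    """Randomized test for generic rigidity in the plane.

    A graph on n >= 2 vertices is generically rigid in 2D iff the rigidity
    matrix of a random (generic) realization has rank 2n - 3.
    """
    if n <= 1:
        return True
    p = rng.standard_normal((n, 2))           # random generic positions
    R = np.zeros((len(edges), 2 * n))
    for row, (i, j) in enumerate(edges):
        d = p[i] - p[j]
        R[row, 2 * i:2 * i + 2] = d
        R[row, 2 * j:2 * j + 2] = -d
    return np.linalg.matrix_rank(R) == 2 * n - 3

def two_vertex_rigid(n, edges):
    """Check that the graph stays generically rigid after deleting any vertex."""
    for v in range(n):
        keep = [u for u in range(n) if u != v]
        relab = {u: k for k, u in enumerate(keep)}
        sub = [(relab[i], relab[j]) for i, j in edges if v not in (i, j)]
        if not generically_rigid(n - 1, sub):
            return False
    return True

# K4 is rigid and stays rigid after removing any vertex (a triangle remains).
edges_k4 = list(itertools.combinations(range(4), 2))
print(generically_rigid(4, edges_k4), two_vertex_rigid(4, edges_k4))
```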
117.
A stability robustness test is developed for internally stable, nominal, linear time-invariant (LTI) feedback systems subject to structured, linear time-varying uncertainty. The literature provides a necessary and sufficient structured small gain condition that determines robust stability in such cases. In this paper, the structured small gain theorem is used to formulate a (sufficient) stability robustness condition in a scaled LTI ν-gap metric framework. The scaled LTI ν-gap metric stability condition is shown to be computable via linear matrix inequality (LMI) techniques, similar to the structured small gain condition. Apart from a final comparison with a generalized robust stability margin, however, the algorithm that tests the scaled LTI ν-gap metric condition is independent of knowledge of the controller transfer function (as opposed to the LMI feasibility problem associated with the scaled small gain condition, which depends on knowledge of the controller). Thus, given a nominal plant and a structured uncertainty set, the stability robustness condition presented in this paper provides a single constraint on a controller (in terms of a large enough generalized robust stability margin) that is sufficient to guarantee stabilization of all plants in the uncertainty set. Copyright © 2008 John Wiley & Sons, Ltd.
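For intuition about the metric involved, the sketch below estimates the SISO ν-gap on a frequency grid from the pointwise chordal distance κ(P1(jω), P2(jω)) = |P1 − P2| / (√(1+|P1|²)·√(1+|P2|²)). The supremum of κ equals the ν-gap only when the associated winding-number condition holds, which the sketch assumes rather than verifies; the example plants and grid are arbitrary.

```python
import numpy as np

def chordal_distance(p1, p2):
    """Pointwise chordal distance between two SISO frequency responses."""
    return np.abs(p1 - p2) / (np.sqrt(1 + np.abs(p1) ** 2) *
                              np.sqrt(1 + np.abs(p2) ** 2))

def nu_gap_estimate(num1, den1, num2, den2, w=np.logspace(-3, 3, 2000)):
    """Frequency-gridded estimate of the SISO nu-gap.

    The winding-number condition is assumed to hold here; it must be
    checked separately, otherwise the nu-gap is 1 by definition.
    """
    s = 1j * w
    p1 = np.polyval(num1, s) / np.polyval(den1, s)
    p2 = np.polyval(num2, s) / np.polyval(den2, s)
    return float(np.max(chordal_distance(p1, p2)))

# Example: P1(s) = 1/(s+1) versus P2(s) = 1/(s+2).
print(nu_gap_estimate([1], [1, 1], [1], [1, 2]))
```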
118.
Compensators generated by a computerized three-dimensional treatment planning system that can use either digitized slice profiles or CT scans have been evaluated. Two methods of calculating compensator thickness are used: the modified Batho power law (dSAR) method for digitized profiles and the equivalent TAR (eqTAR) method for CT scans. The system compensates not only for patient surface contours but also for internal inhomogeneities, and any required wedging is incorporated in the compensator generation. The system has been tested on a number of extreme cases with inhomogeneities and sloping contours. Good agreement was obtained between the measured and computer-calculated dose profiles, especially along the central axis of the beam. A "Profile Uniformity Index" was defined to quantify the goodness of dose compensation in three dimensions. Compensation using this system can achieve good dose uniformity within the target volume in all clinical cases and is a definite improvement over systems based solely on tissue deficit.
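The abstract names a "Profile Uniformity Index" but does not give its formula, so the following is only one plausible stand-in: the relative spread of dose over the target volume, evaluated on a 3D dose grid. The (max − min)/mean form, the grid, and the target region are all assumptions for illustration.

```python
import numpy as np

def profile_uniformity_index(dose, target_mask):
    """One plausible uniformity measure: relative dose spread in the target.

    NOTE: the paper defines its own "Profile Uniformity Index"; this
    (max - min) / mean form is an assumed stand-in, not that definition.
    """
    d = dose[target_mask]
    return float((d.max() - d.min()) / d.mean())

# Toy 3D dose grid: a nominally uniform 100-unit dose with small noise.
dose = np.full((20, 20, 20), 100.0)
dose += np.random.default_rng(2).normal(0, 1.5, dose.shape)
target = np.zeros(dose.shape, bool)
target[8:12, 8:12, 8:12] = True          # cubic target volume
print(f"uniformity index: {profile_uniformity_index(dose, target):.4f}")
```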
119.
A new approach using input-output techniques is proposed for the analysis of urban stormwater pollution caused by urban land development. The input-output model provides projections of sectoral outputs within an urban region. By defining land as an input to production, these output projections may be translated into projections of commercial and industrial land development. Furthermore, the closed version of the input-output model is used to project residential land development as a function of projected wage income. Pollutant generation in urban stormwater is related to the quantity of each category of land development by a pollutant coefficient matrix. The model can thus be used to predict the impact of various economic growth scenarios on pollution loadings in runoff water, helping planners assess the environmental costs of those scenarios and prepare remedial actions. A numerical example is provided to illustrate the application of the model.
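The computational chain is short enough to show end to end. In the sketch below, sectoral outputs come from the standard Leontief relation x = (I − A)⁻¹ d, land development is a linear function of output, and runoff loads follow from a pollutant coefficient matrix, mirroring the structure described above. Every matrix and number is a made-up illustration, not data from the study, and the residential/wage-income step of the closed model is omitted.

```python
import numpy as np

# Two-sector illustration of the output -> land -> runoff-load chain.
A = np.array([[0.20, 0.10],              # interindustry coefficients
              [0.05, 0.30]])
d = np.array([100.0, 50.0])              # final demand ($M)

x = np.linalg.solve(np.eye(2) - A, d)    # Leontief: x = (I - A)^(-1) d

land_per_output = np.array([[0.8, 0.0],  # hectares per $M of output,
                            [0.0, 1.2]]) # by land category (comm., ind.)
land = land_per_output @ x

pollutant_coeff = np.array([[2.0, 3.5],  # kg/ha per storm: suspended solids
                            [0.1, 0.4]]) #                  total phosphorus
loads = pollutant_coeff @ land

print("sectoral outputs ($M):", x)
print("developed land (ha):  ", land)
print("pollutant loads (kg): ", loads)
```

Running a different growth scenario just means changing `d` and comparing the resulting `loads`, which is exactly the planning use the abstract describes.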
120.
In this paper we study the problem of asynchronous processors traversing a list with path compression. We show that if an atomic splice operation is available, the worst-case work for p processors traversing a list of length n is Θ(np^{1/2}). The splice operation can be generalized to remove k elements from the list; for this k-splice operation the worst-case work is Θ(np^{1/(k+1)}). This research was supported by an NSF Presidential Young Investigator Award CCR-8657562, Digital Equipment Corporation, NSF CER Grant CCR-861966, and NSF/Darpa Grant CCR-8907960. A preliminary version of this paper was presented at the Fourth Annual ACM Symposium on Parallel Algorithms and Architectures.
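To make the operation concrete, here is a small sequential sketch of one processor's traversal with path compression: after reaching the frontier, every visited cell is retargeted to point directly at it, which is what keeps later traversals cheap. The paper's setting has p asynchronous processors doing this concurrently with atomic splices; the dictionary layout and single-processor framing here are simplifications.

```python
def traverse_with_splice(nxt, start):
    """Walk to the end of a list, then splice every visited cell so it
    points directly at the tail (path compression).

    nxt[i] is the successor of cell i, or None at the end of the list.
    This is a sequential stand-in for one processor's behaviour; in the
    paper's model the splice is an atomic operation shared by p processors.
    """
    path = []
    cur = start
    while nxt[cur] is not None:
        path.append(cur)
        cur = nxt[cur]
    for v in path:                 # splice: retarget visited cells to the tail
        nxt[v] = cur
    return cur

# A list 0 -> 1 -> 2 -> 3 -> 4; after traversal every cell points at 4.
nxt = {i: i + 1 for i in range(4)}
nxt[4] = None
tail = traverse_with_splice(nxt, 0)
assert tail == 4 and all(nxt[i] == 4 for i in range(4))
```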