71.
Statistical detection of mass malware has been shown to be highly successful. However, this type of malware is less interesting to cyber security officers of larger organizations, who are more concerned with detecting malware indicative of a targeted attack. Here we investigate the potential of statistically based approaches to detect such malware, using a malware family associated with a large number of targeted network intrusions. Our approach is complementary to the bulk of statistically based malware classifiers, which are typically based on measures of overall similarity between executable files. One problem with that approach is that a malicious executable sharing some, but limited, functionality with known malware is likely to be misclassified as benign. Here a new approach to malware classification is introduced that classifies programs based on their similarity to known malware subroutines. It is illustrated that malware and benign programs can share a substantial amount of code, implying that classification should be based on malicious subroutines that occur infrequently, or not at all, in benign programs. Various approaches to accomplishing this task are investigated, and a particularly simple one appears the most effective: it computes the fraction of a program's subroutines that are similar to malware subroutines whose likes have not been found in a larger benign set. If this fraction exceeds around 1.5%, the corresponding program can be classified as malicious at a 1-in-1000 false alarm rate. It is further shown that combining a local and an overall similarity-based approach can lead to considerably better prediction, due to the relatively low correlation of their predictions.
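The fraction-of-suspicious-subroutines rule described above can be sketched as follows. This is an illustrative assumption, not the authors' implementation: subroutine similarity is reduced to exact set membership, and all names and the example data are hypothetical.

```python
def classify_by_subroutines(program_subs, malware_subs, benign_subs,
                            threshold=0.015):
    """Flag a program as malicious when the fraction of its subroutines that
    resemble known-malware subroutines -- and have no counterpart in a large
    benign set -- exceeds the threshold (around 1.5% per the abstract).

    Similarity is reduced to set membership here for illustration only; the
    paper uses a similarity measure between subroutines."""
    if not program_subs:
        return False
    suspicious = [s for s in program_subs
                  if s in malware_subs and s not in benign_subs]
    return len(suspicious) / len(program_subs) > threshold
```

At the stated threshold, the abstract reports roughly a 1-in-1000 false-alarm rate; note how a subroutine also found in the benign set (here "m3") does not count as suspicious.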
72.
A new algorithm is presented for the modeling and simulation of multi-flexible-body systems. This algorithm is built upon a divide-and-conquer-based multibody dynamics framework, and it is capable of handling arbitrarily large rotations and deformations in articulated flexible bodies. As such, this work extends the current capabilities of the flexible divide-and-conquer algorithm (Mukherjee and Anderson in Comput. Nonlinear Dyn. 2(1):10–21, 2007), which is limited to the use of assumed modes in a floating frame of reference configuration. The present algorithm utilizes existing finite element modeling techniques to construct the equations of motion at the element level, as well as at the body level. It is demonstrated that these equations can be assembled and solved using a divide-and-conquer type methodology. In this respect, the new algorithm is applied using the absolute nodal coordinate formulation (ANCF) (Shabana, 1996). The ANCF is selected because of its straightforward implementation and effectiveness in modeling large deformations. It is demonstrated that the present algorithm provides an efficient and robust method for modeling multi-flexible-body systems that employ highly deformable bodies. The new algorithm is tested using three example systems employing deformable bodies in two and three spatial dimensions. The current examples are limited to the ANCF line or cable elements, but the approach may be extended to higher order elements. In its basic form, the divide-and-conquer algorithm is time and processor optimal, yielding logarithmic complexity O(log(N_b)) when implemented using O(N_b) processors, where N_b is the number of bodies in the system.
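The O(log(N_b)) claim follows from the shape of the assembly pass: bodies are coupled pairwise into a binary tree, so the number of assembly (and disassembly) levels grows logarithmically in the body count. A minimal structural sketch, with the handle-equation algebra at each coupling replaced by a placeholder tuple (all names here are illustrative, not from the paper):

```python
def assemble(bodies):
    """Recursively couple sub-assemblies two at a time; each tree level
    roughly halves the number of sub-assemblies.  The tuple stands in for
    assembling the handle equations of the two children."""
    if len(bodies) == 1:
        return bodies[0]
    mid = len(bodies) // 2
    return ("assembly", assemble(bodies[:mid]), assemble(bodies[mid:]))

def depth(node):
    """Number of assembly levels above the leaves -- the parallel time."""
    if not (isinstance(node, tuple) and node[0] == "assembly"):
        return 0
    return 1 + max(depth(node[1]), depth(node[2]))
```

With one processor per body, the levels of this tree can be processed concurrently, giving the logarithmic parallel complexity cited above: eight bodies assemble in three levels.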
73.
E-ARK (European Archiving of Records and Knowledge Preservation) is currently introducing standardization, new tools, and infrastructure into pan-European digital archival systems. The results of this pilot project can be applied to higher education where research data centers are required to preserve data for re-use in accordance with new open policies.

We elucidate how the E-ARK best practice survey has informed the development of new information packages for ingest, preservation, and re-use, and show how these can be applied to the Arts, Humanities, and Social Sciences with practical exemplars such as census data.
74.
The digital revolution has changed the daily work of archivists. As a result, archivists have refined their analog practices and initiated new ones to accommodate the unique information creation and access needs of users. While limitless storage and technological obsolescence may be surmountable challenges, appraisal continues to be a necessary component of this preservation enterprise. Macro-appraisal, a top-down functional assessment of an organization's high-level decision making that defines the enduring value of specific bodies of documentary evidence of these creative or administrative processes, is used to facilitate the management of diverse bodies of born-digital records in archives and special collections.
75.
In this paper, we consider the problem of flocking and shape-orientation control of multi-agent systems with inter-agent and obstacle collision avoidance. We first consider the problem of forcing a set of autonomous agents to form a desired formation shape and orientation while avoiding inter-agent collision and collision with convex obstacles, and following a trajectory known to only one of the agents, namely the leader of the formation. Then we build upon the solution given to this problem and solve the problem of guaranteeing obstacle collision avoidance by changing the size and the orientation of the formation. Changing the size and the orientation of the formation is helpful when the agents want to go through a narrow passage while the existing size or orientation of the formation does not allow this. We also propose collision avoidance algorithms that temporarily change the shape of the formation to avoid collision with stationary or moving nonconvex obstacles. Simulation results are presented to show the performance of the proposed control laws.
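The basic ingredients of such a formation controller, attraction toward a leader-referenced formation slot plus short-range inter-agent repulsion, can be sketched with single-integrator agents. This is a generic toy model under assumed gains and names, not the paper's control law, which additionally handles obstacles and formation resizing.

```python
import math

def formation_step(positions, offsets, leader_goal, k=1.0, d_min=0.5, dt=0.1):
    """One Euler step of a toy formation controller (illustrative only).

    Each agent i steers toward leader_goal + offsets[i] (its formation slot)
    and is pushed away from any other agent closer than d_min, a crude
    inter-agent collision-avoidance term."""
    new = []
    for i, p in enumerate(positions):
        # attraction toward this agent's slot in the formation
        tx = leader_goal[0] + offsets[i][0]
        ty = leader_goal[1] + offsets[i][1]
        vx, vy = k * (tx - p[0]), k * (ty - p[1])
        # short-range repulsion from every other agent
        for j, q in enumerate(positions):
            if j == i:
                continue
            dx, dy = p[0] - q[0], p[1] - q[1]
            d = math.hypot(dx, dy)
            if 0.0 < d < d_min:
                vx += (dx / d) * (d_min - d) / d
                vy += (dy / d) * (d_min - d) / d
        new.append([p[0] + dt * vx, p[1] + dt * vy])
    return new
```

Iterating the step drives the agents to their slots while keeping them at least roughly d_min apart along the way; the repulsion term vanishes once the formation spacing exceeds d_min.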
76.
An extension to the divide-and-conquer algorithm (DCA) is presented in this paper to model constrained multibody systems. The constraints of interest are those applied to the system by inverse dynamics or control laws, rather than the kinematically closed loops that have been studied in the literature. These imposed constraints are often expressed in terms of the generalized coordinates and speeds, and a set of unknown generalized constraint forces must be considered in the equations of motion to enforce the algebraic constraints. In this paper the dynamics of this class of constrained multibody systems is formulated using a generalized DCA (GDCA). In this scheme, by introducing dynamically equivalent forcing systems, each generalized constraint force is replaced by a dynamically equivalent spatial constraint force applied from the appropriate parent body to the associated child body at the connecting joint, without violating the dynamics of the original system. The handle equations of motion are then formulated in terms of these dynamically equivalent spatial constraint forces. In the GDCA scheme, these equations are used in the assembly and disassembly processes to solve for the states of the system, as well as for the generalized constraint forces and/or Lagrange multipliers.
77.
The multiple determination of chemical properties is a classical problem in analytical chemistry. The central difficulty is finding the subset of variables that best represents the compounds. These variables are obtained by a spectrophotometer, a device that measures hundreds of correlated variables related to physicochemical properties which can be used to estimate the component of interest. The problem is the selection of a subset of informative, uncorrelated variables that helps minimize the prediction error. Classical algorithms select a separate subset of variables for each compound considered. In this work we propose the use of SPEA-II (Strength Pareto Evolutionary Algorithm II), and show that the variable selection algorithm can select a single subset usable for multiple determinations with multiple linear regression. The case study uses wheat data obtained by NIR (near-infrared) spectroscopy, where the objective is the determination of a variable subgroup carrying information about protein content (%), test weight (kg/hl), wheat kernel texture (WKT, %), and farinograph water absorption (%). Results from traditional multivariate calibration techniques, namely the SPA (successive projections algorithm), PLS (partial least squares), and a mono-objective genetic algorithm, are presented for comparison. For NIR spectral analysis of protein concentration in wheat, SPEA-II reduced the number of selected variables from 775 spectral variables to just 10, and the prediction error decreased from 0.2 with the classical methods to 0.09, a reduction of 55%. The model using variables selected by SPEA-II had better prediction performance than the classical algorithms and full-spectrum partial least squares.
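The prediction-error objective that such a variable-selection search optimizes can be sketched as follows: fit a multiple linear regression restricted to a candidate subset of spectral variables and score it by RMSE. This is only one fitness component of the evolutionary search; the SPEA-II machinery (archive, Pareto ranking, crossover) is omitted, and the function name is an assumption for illustration.

```python
import numpy as np

def subset_rmse(X, y, subset):
    """RMSE of a multiple linear regression (with intercept) restricted to
    the spectral variables in `subset` -- a candidate's prediction-error
    fitness in a variable-selection search."""
    Xs = np.column_stack([np.ones(len(y)), X[:, subset]])
    coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    pred = Xs @ coef
    return float(np.sqrt(np.mean((y - pred) ** 2)))
```

An informative subset drives the RMSE toward zero, while a subset of irrelevant variables leaves most of the response unexplained, which is exactly the contrast the evolutionary search exploits.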
78.
Seasonal temperature and bioenergetic models were coupled to explore the impacts on juvenile salmonid growth of possible climate-induced changes to mean annual water temperature and snowpack in four characteristic ecoregions. Increasing mean temperature increases juvenile growth in streams that currently experience cool spring temperatures. In streams with currently warm spring temperatures, an increase shortens the duration of optimal conditions and truncates growth. A loss of snow enhances growth in cool-summer streams and decreases growth in warm-summer streams. The relative impacts of such climate change trends will vary significantly across ecoregions. Copyright © 2010 John Wiley & Sons, Ltd.
79.
Asimov’s “three laws of robotics” and machine metaethics
Using Asimov’s “Bicentennial Man” as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we make a judgment about the ethical status of the machine itself, a judgment that will be particularly difficult to make. Finally, it is argued that Asimov’s “three laws of robotics” are an unsatisfactory basis for machine ethics, regardless of the status of the machine.
Susan Leigh Anderson
80.
In this paper, we address the problem of agent loss in vehicle formations and sensor networks via two separate approaches: (1) perform a ‘self-repair’ operation in the event of agent loss to recover desirable information architecture properties, or (2) introduce robustness into the information architecture a priori such that agent loss does not destroy desirable properties. We model the information architecture as a graph G(V, E), where V is a set of vertices representing the agents and E is a set of edges representing information flow amongst the agents. We focus on two properties of the graph called rigidity and global rigidity, which are required for formation shape maintenance and sensor network self-localization, respectively. For the self-repair approach, we show that while previous results permit local repair involving only neighbours of the lost agent, the repair cannot always be implemented using only local information. We present new results that can be applied to make the local repair using only local information, describe their implementation, and illustrate with algorithms and examples. For the robustness approach, we investigate the structure of graphs with the property that rigidity or global rigidity is preserved after removing any single vertex (we call these properties 2-vertex-rigidity and 2-vertex-global-rigidity, respectively). Information architectures with such properties would allow formation shape maintenance or self-localization to be performed even in the event of agent failure. We review a characterization of one class of 2-vertex-rigid graphs and develop a separate class, making significant strides towards a complete characterization; we also present a characterization of a class of 2-vertex-global-rigid graphs. Copyright © 2008 John Wiley & Sons, Ltd.
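The "2-vertex" robustness notion above has the generic shape "delete any single vertex and its edges, then re-test the property". A minimal sketch of that wrapper follows; testing rigidity itself requires specialized machinery (e.g. the pebble game), so plain graph connectivity stands in here as the placeholder property, and all names are illustrative.

```python
def connected(vertices, edges):
    """True if the undirected graph (vertices, edges) is connected."""
    if not vertices:
        return True
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u] - seen)
    return seen == set(vertices)

def robust_to_single_loss(vertices, edges, prop=connected):
    """True if `prop` still holds after deleting any one vertex (the agent)
    together with all of its incident edges."""
    return all(
        prop(vertices - {v}, [e for e in edges if v not in e])
        for v in vertices
    )
```

For example, a 4-cycle survives the loss of any one agent (the remainder is a connected path), whereas a 3-agent chain does not; 2-vertex-rigidity and 2-vertex-global-rigidity are the analogous statements with rigidity tests in place of `connected`.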