112.
Malware classification using machine learning algorithms is a difficult task, in part due to the absence of strong natural features in raw executable binary files. Byte n-grams have previously been used as features, but little work has been done to explain their performance or to understand what concepts are actually being learned. In contrast to other work using n-gram features, we use orders of magnitude more data, and we perform feature selection during model building using Elastic-Net regularized Logistic Regression. We compute a regularization path and analyze novel multi-byte identifiers. Through this process, we discover significant, previously unreported issues with byte n-gram features that cause their benefits and practicality to be overestimated. Three primary issues emerged from our work. First, we discovered a flaw in how previous corpora were created that leads to an overestimation of classification accuracy. Second, we discovered that most of the information contained in n-grams stems from string features that could be obtained in simpler ways. Finally, we demonstrate that n-gram features promote overfitting, even with linear models and extreme regularization.
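As a rough, hypothetical sketch of the kind of pipeline this abstract discusses (not the authors' code or data), the snippet below counts overlapping byte n-grams, hashes them into a sparse feature matrix, and fits an Elastic-Net regularized logistic regression whose L1 component zeroes out most n-gram weights, so feature selection happens during model fitting. The toy byte strings, n-gram length, hashing dimension and hyperparameters are all made up.

```python
# Minimal sketch (not the paper's pipeline): byte 4-gram counts as features for
# malware vs. benign classification, with Elastic-Net regularized logistic
# regression performing embedded feature selection. All data are placeholders.
from collections import Counter

import numpy as np
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import LogisticRegression

def byte_ngrams(data, n=4):
    """Count overlapping byte n-grams in a raw binary."""
    return Counter(data[i:i + n] for i in range(len(data) - n + 1))

def featurize(binaries, n=4, dim=2 ** 16):
    """Hash n-gram counts into a fixed-size sparse feature matrix."""
    hasher = FeatureHasher(n_features=dim, input_type="dict")
    dicts = [{ng.hex(): float(c) for ng, c in byte_ngrams(b, n).items()}
             for b in binaries]
    return hasher.transform(dicts)

# Toy stand-in data: two "malicious" and two "benign" byte strings.
binaries = [b"\x90\x90\xeb\xfeMZPE" * 50, b"\x90\x90\xeb\xfe" * 80,
            b"hello world, benign text " * 40, b"plain old data bytes " * 40]
labels = np.array([1, 1, 0, 0])

X = featurize(binaries)
# The Elastic-Net penalty (mix of L1 and L2) drives many n-gram weights to zero.
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000)
clf.fit(X, labels)
print("non-zero n-gram weights:", np.count_nonzero(clf.coef_))
```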
113.
A recurring problem in 3D applications is nearest-neighbor lookup in 3D point clouds. In this work, a novel method for exact and approximate 3D nearest-neighbor lookups is proposed whose lookup times are, contrary to previous approaches, nearly independent of the distribution of data and query points, which allows the method to be used in real-time scenarios. The lookup times of the proposed method outperform prior art, sometimes by several orders of magnitude. This speedup comes at the price of an increased cost for creating the indexing structure, which, however, can typically be done in an offline phase. Additionally, an approximate variant of the method is proposed that significantly reduces the time required for data-structure creation and further improves lookup times, outperforming all other methods and yielding almost constant lookup times. The method is based on a recursive spatial subdivision using an octree that takes the underlying Voronoi tessellation as its splitting criterion, thus avoiding potentially expensive backtracking. The resulting octree is represented implicitly by a hash table, which allows the leaf node containing a query point to be found in time logarithmic in the tree depth. The method is also trivially extendable to 2D nearest-neighbor lookups.
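The hash-table representation mentioned above can be illustrated with a much simplified sketch: plain point-count splitting instead of the paper's Voronoi-based criterion, and no actual nearest-neighbor search. Every stored octree cell is keyed by (depth, ix, iy, iz) in a dictionary, so the deepest stored cell containing a query point is found by binary search over the depth, i.e. with a number of hash lookups logarithmic in the tree depth. The constants and the splitting rule are placeholders.

```python
# Simplified implicit octree over points in [0,1)^3, stored in a hash map.
import numpy as np

MAX_DEPTH = 12
LEAF_CAPACITY = 8

def cell_key(p, depth):
    """Integer cell coordinates of point p at the given depth."""
    side = 1 << depth
    return (depth, *np.minimum((p * side).astype(int), side - 1))

def build(points):
    """Recursively subdivide: a cell is stored in the hash map and split
    further only while it holds more than LEAF_CAPACITY points."""
    cells = {}
    def insert(indices, depth, key):
        cells[key] = indices
        if len(indices) <= LEAF_CAPACITY or depth == MAX_DEPTH:
            return
        children = {}
        for i in indices:
            children.setdefault(cell_key(points[i], depth + 1), []).append(i)
        for ckey, cidx in children.items():
            insert(cidx, depth + 1, ckey)
    insert(list(range(len(points))), 0, (0, 0, 0, 0))
    return cells

def locate(cells, q):
    """Binary search over depth for the deepest stored cell containing q."""
    lo, hi = 0, MAX_DEPTH
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if cell_key(q, mid) in cells:
            lo = mid
        else:
            hi = mid - 1
    return cell_key(q, lo)

rng = np.random.default_rng(0)
pts = rng.random((1000, 3))
cells = build(pts)
cell = locate(cells, rng.random(3))
print("query lands in cell", cell, "with", len(cells[cell]), "candidate points")
```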
114.
Nowadays a large variety of applications are based on solid nanoparticles dispersed in liquids, so-called nanofluids. The interaction between the fluid and the nanoparticles plays a decisive role in the physical properties of the nanofluid. In this work, a novel approach based on nonradiative energy transfer between two small luminescent nanocrystals (GdVO4:Nd3+ and GdVO4:Yb3+) dispersed in water is used to investigate how temperature affects both the interaction between nanoparticles and the effect of the fluid on the nanoparticles. From a systematic analysis of the effect of temperature on the GdVO4:Nd3+ → GdVO4:Yb3+ interparticle energy transfer, it is concluded that a dramatic increase in the energy-transfer efficiency occurs at temperatures above 45 °C. This change is explained by a crossover in several water properties that occurs at about this temperature. The results allow the molecular arrangement of water molecules below and above this crossover temperature to be elucidated. In addition, an energy-transfer process produced by interparticle collisions is observed, which induces irreversible ion exchange between the interacting nanoparticles.
115.
Planning and production optimization for mining systems comprising multiple mines or several work sites (entities) using fuzzy linear programming (LP) is studied. LP is the most commonly used operations-research method in mining engineering. After an introductory review of the properties and limitations of applying LP, short reviews of the general formulations of the deterministic and fuzzy LP models are presented. For the purpose of comparative analysis, the application of both LP models is illustrated using the example of the Bauxite Basin Niksic with five mines. The assessment shows that LP is an efficient mathematical modeling tool for production planning and for solving many other single-criteria optimization problems in mining engineering. After comparing the advantages and deficiencies of the deterministic and fuzzy LP models, the conclusion highlights the benefits of the fuzzy LP model, while noting that obtaining an optimal production plan requires a comprehensive analysis that encompasses both LP modeling approaches.
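For illustration only, a minimal deterministic LP of the kind described above can be set up with scipy. The five per-mine capacities, ore grades, plant capacity and blending target below are invented numbers, not the Bauxite Basin Niksic data; a fuzzy LP variant would replace these crisp bounds and targets with membership functions.

```python
# Toy deterministic production-planning LP: maximize total output from five
# mines subject to per-mine capacities, a plant capacity and a blended-grade
# requirement. All numbers are hypothetical placeholders.
import numpy as np
from scipy.optimize import linprog

capacity = np.array([120.0, 90.0, 150.0, 60.0, 80.0])   # kt/year per mine
grade    = np.array([49.0, 52.0, 46.0, 55.0, 50.0])     # ore grade, % (made up)
plant_capacity = 400.0                                   # kt/year processed in total
min_avg_grade = 50.0                                     # required blended grade, %

# Maximize sum(x)  ->  minimize -sum(x)
c = -np.ones(5)
# sum(x) <= plant_capacity  and  sum((min_avg_grade - grade) * x) <= 0
A_ub = np.vstack([np.ones(5), min_avg_grade - grade])
b_ub = np.array([plant_capacity, 0.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=list(zip(np.zeros(5), capacity)))
print("production plan (kt):", np.round(res.x, 1), " total:", round(-res.fun, 1))
```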
117.
We address the problem of minimizing power consumption when broadcasting a message from one node to all the other nodes in a radio network. To enable power savings for such a problem, we introduce a compelling new data streaming problem which we call the Bad Santa problem. Our results on this problem apply to any situation where: (1) a node can listen to a set of n nodes, out of which at least half are non-faulty and know the correct message; and (2) each of these n nodes sends according to some predetermined schedule which assigns each of them its own unique time slot. In this situation, we show that in order to receive the correct message with probability 1, it is necessary and sufficient for the listening node to listen to a Θ(√n) expected number of time slots. Moreover, if we allow for repetitions of transmissions so that each sending node sends the message O(log* n) times (i.e. in O(log* n) rounds, each consisting of the n time slots), then listening to an O(log* n) expected number of time slots suffices. We show that this is near optimal. We describe an application of our result to the popular grid model for a radio network. Each node in the network is located at a point in a two-dimensional grid, and whenever a node sends a message m, all awake nodes within L∞ distance r receive m. In this model, up to t < (r/2)(2r+1) nodes within any (2r+1)-by-(2r+1) square in the grid can suffer Byzantine faults. Moreover, we assume that the nodes that suffer Byzantine faults are chosen and controlled by an adversary that knows everything except for the random bits of each non-faulty node. This type of adversary models worst-case behavior due to malicious attacks on the network; mobile nodes moving around in the network; or static nodes losing power or ceasing to function. Let n = r(2r+1). We show how to solve the broadcast problem in this model with each node sending and receiving an expected O(n log²|m| + √n·|m|) bits, where |m| is the number of bits in m, and, after broadcasting a fingerprint of m, each node being awake for only an expected O(√n) time slots. Moreover, for t ≤ (1−ε)(r/2)(2r+1), for any constant ε > 0, we can achieve even better energy savings. In particular, if we allow each node to send O(log* n) times, we achieve reliable broadcast with each node sending O(n log²|m| + (log* n)·|m|) bits and receiving an expected O(n log²|m| + (log* n)·|m|) bits and, after broadcasting a fingerprint of m, each node being awake for only an expected O(log* n) time slots. Our results compare favorably with previous protocols that required each node to send Θ(|m|) bits, receive Θ(n|m|) bits and be awake for Θ(n) time slots.
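A quick back-of-the-envelope comparison of the bounds quoted in the abstract shows where the energy savings come from. Constants and lower-order terms are ignored, and the values of r and |m| are arbitrary example inputs, not figures from the paper.

```python
# Compare the quoted asymptotic costs per node: the new protocol's
# O(n*log^2|m| + sqrt(n)*|m|) received bits and O(sqrt(n)) awake slots versus a
# prior protocol's Theta(n*|m|) received bits and Theta(n) awake slots.
import math

r = 10                      # grid neighborhood radius (example value)
n = r * (2 * r + 1)         # nodes listened to, as defined in the grid model
m_bits = 10_000             # message length |m| in bits (example value)

new_recv = n * math.log2(m_bits) ** 2 + math.sqrt(n) * m_bits
old_recv = n * m_bits
print(f"n = {n}")
print(f"received bits:  new ~ {new_recv:,.0f}   prior ~ {old_recv:,.0f}")
print(f"awake slots after fingerprint:  new ~ {math.sqrt(n):.0f}   prior ~ {n:,}")
```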
118.
This paper concerns the use of feedforward neural networks (FNN) for predicting the nondimensional velocity of a gas flowing along a porous wall. The numerical solution of the partial differential equations that govern the fluid flow is used for training and testing the FNN. The equations were solved with the finite difference method (FDM) in a FORTRAN code. The Levenberg–Marquardt algorithm is used to train the neural network, and the optimal FNN architecture was determined. The FNN-predicted values are in accordance with the values obtained by the FDM. The performance of the neural network model was assessed through the correlation coefficient (r), the mean absolute error (MAE) and the mean square error (MSE). The respective values of r, MAE and MSE for the testing data are 0.9999, 0.0025 and 1.9998 · 10⁻⁵.
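As a rough sketch of the evaluation described above, the snippet below fits a small feedforward network to synthetic data standing in for the FDM solution and reports r, MAE and MSE on a held-out set. The data, the network size and the L-BFGS optimizer (scikit-learn does not provide Levenberg–Marquardt) are substitutions, not the paper's setup.

```python
# Train a small FNN on stand-in data and compute r, MAE and MSE on a test split.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=(2000, 2))            # e.g. wall coordinate, suction parameter
y = np.exp(-3.0 * x[:, 0]) * (1.0 + 0.2 * x[:, 1])   # stand-in for the FDM velocity profile

x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.25, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(x_tr, y_tr)
pred = net.predict(x_te)

r = np.corrcoef(y_te, pred)[0, 1]
mae = mean_absolute_error(y_te, pred)
mse = mean_squared_error(y_te, pred)
print(f"r = {r:.4f}, MAE = {mae:.4f}, MSE = {mse:.2e}")
```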
120.
The characteristics of fecal sources, and the ways in which they are measured, can profoundly influence the interpretation of which sources are contaminating a body of water. Although feces from various hosts are known to differ in mass and composition, it is not well understood how those differences compare across fecal sources and how they depend on characterization methods. This study investigated how nine different fecal characterization methods provide different measures of fecal concentration in water, and how results vary across twelve different fecal pollution sources. Sources investigated included chicken, cow, deer, dog, goose, gull, horse, human, pig, pigeon, septage and sewage. A composite fecal slurry was prepared for each source by mixing feces from 6 to 22 individual samples with artificial freshwater. Fecal concentrations were estimated by physical (wet fecal mass added and total DNA mass extracted), culture-based (Escherichia coli and enterococci by membrane filtration and defined substrate), and quantitative real-time PCR (Bacteroidales, E. coli, and enterococci) characterization methods. The characteristics of each composite fecal slurry and the relationships between physical, culture-based and qPCR-based characteristics varied within and among different fecal sources. An in silico exercise was performed to assess how different characterization methods can impact identification of the dominant fecal pollution source in a mixed-source sample. A comparison of simulated 10:90 mixtures based on enterococci by defined substrate predicted a source reversal in 27% of all possible combinations, while mixtures based on E. coli membrane filtration resulted in a reversal 29% of the time. This potential for disagreement in minor or dominant source identification based on different methods of measurement represents an important challenge for water quality managers and researchers.
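A toy version of the in silico mixing exercise can make the "source reversal" notion concrete: for each ordered pair of sources, mix them 10:90 and ask whether the chosen indicator still points at the 90% source. The per-gram indicator levels below are invented placeholders, not the study's measurements.

```python
# Count how often a 10:90 mixture is attributed to the wrong (minor) source
# when judged by a single indicator. All per-gram levels are hypothetical.
from itertools import permutations

entero_per_gram = {"human": 1e6, "sewage": 5e5, "gull": 5e7,
                   "dog": 2e7, "cow": 4e5, "pig": 8e5}

def dominant_source(major, minor, per_gram, major_frac=0.9):
    """Source contributing the larger indicator signal in a major:minor mix."""
    signal = {major: major_frac * per_gram[major],
              minor: (1.0 - major_frac) * per_gram[minor]}
    return max(signal, key=signal.get)

pairs = list(permutations(entero_per_gram, 2))
reversals = sum(dominant_source(a, b, entero_per_gram) != a for a, b in pairs)
print(f"source reversals: {reversals}/{len(pairs)} "
      f"({100 * reversals / len(pairs):.0f}% of simulated mixtures)")
```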