71.
Detection of adulteration in meat products is one of the most pressing concerns for the consumer market because of the high value of these products. A rapid and accurate method for identifying lard adulteration in meat products is therefore highly desirable, both for building consumer trust and for making a definitive diagnosis. In this work, Fourier transform infrared (FTIR) spectroscopy is used to identify lard adulteration in beef, lamb, and chicken samples. A simplified extraction method was applied to obtain the lipids from pure and adulterated meat. Adulterated samples were prepared by mixing lard with chicken, lamb, and beef at different concentrations (10%–50% v/v). Principal component analysis (PCA) and partial least squares (PLS) were used to develop a calibration model over 800–3500 cm−1. Three-dimensional PCA, applied by dividing the spectrum into three regions, successfully classified lard adulteration in the chicken, lamb, and beef samples. The FTIR peaks corresponding to lard were observed at 1159.6, 1743.4, 2853.1, and 2922.5 cm−1, which differentiate the chicken, lamb, and beef samples. These wavenumbers yield the highest coefficient of determination (R2 = 0.846) and the lowest root mean square errors of calibration (RMSEC) and prediction (RMSEP), with an accuracy of 84.6%. Even fat adulteration as low as 10% can be reliably detected with this methodology.
72.
A link relative-based approach was used in an earlier article (see reference 1) to enhance the performance of the cumulative sum (CUSUM) control chart. The technique first uses a link relative variable to express the process observations relative to the mean, and then uses optimal constants to define a new variable that serves as the plotting statistic of the link relative CUSUM chart. In this article, a simulation study shows that the fixed-value optimal constants reported in that article give different results; if a regression technique is used instead, the reported results are reproduced.
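The link relative plotting statistic itself is not reproduced in the abstract, so the sketch below shows only the standard tabular CUSUM that the technique builds on; the reference value `k`, decision interval `h`, and the simulated mean shift are illustrative assumptions:

```python
import numpy as np

def cusum(x, target, k, h):
    """Tabular CUSUM: returns upper/lower statistics and the first signal index.
    k is the reference (slack) value and h the decision interval, in data units.
    """
    cp = cm = 0.0
    upper, lower = [], []
    signal = None
    for i, xi in enumerate(x):
        cp = max(0.0, cp + xi - target - k)   # accumulates upward deviations
        cm = max(0.0, cm + target - xi - k)   # accumulates downward deviations
        upper.append(cp)
        lower.append(cm)
        if signal is None and (cp > h or cm > h):
            signal = i
    return np.array(upper), np.array(lower), signal

rng = np.random.default_rng(1)
# In-control N(0, 1) data, then a 1-sigma upward mean shift at i = 50.
x = np.concatenate([rng.normal(0, 1, 50), rng.normal(1.0, 1, 50)])
up, lo, sig = cusum(x, target=0.0, k=0.5, h=4.0)
print(sig)
```

The link relative variant would first transform `x` before applying a chart of this form.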
73.
The alkylamines and their related boron derivatives demonstrated potent cytotoxicity against the growth of murine and human tissue-cultured cells. In certain tumor lines, these agents did not require the boron atom for potent cytotoxic action. Their ability to suppress tumor cell growth was based on their inhibition of DNA and protein synthesis. DNA synthesis was reduced because purine synthesis was blocked by the agents at the enzyme site of IMP dehydrogenase. In addition, ribonucleotide reductase and nucleoside kinase activities were reduced by the agents, which would account for the reduced d[NTP] pools. The DNA template or molecule may also be a target of the drugs, through binding of the drug to nucleoside bases or intercalation of the drug between DNA base pairs. Only some of the agents caused DNA fragmentation with reduced DNA viscosity. These effects would contribute to the overall cell death produced by the agents.
74.
Coordinated controller tuning of a boiler-turbine unit is a challenging task due to the nonlinear and coupled characteristics of the system. In this paper, a new variant of the binary particle swarm optimization (PSO) algorithm, called probability-based binary PSO (PBPSO), is presented to tune the parameters of a coordinated controller. Simulation results show that PBPSO effectively optimizes the control parameters and achieves better control performance than standard discrete binary PSO, modified binary PSO, and standard continuous PSO.
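The exact PBPSO update rules are not given in the abstract; a common reading, sketched below, maps each velocity component through a sigmoid to the probability that the corresponding bit is 1. The OneMax objective and all coefficients (w, c1, c2) are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def pbpso(fitness, n_bits, n_particles=20, iters=100):
    """Probability-based binary PSO sketch: velocities evolve continuously,
    and bits are resampled from sigmoid(velocity) each iteration."""
    v = rng.normal(0, 1, (n_particles, n_bits))
    x = (rng.random((n_particles, n_bits)) < 0.5).astype(int)
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmax()].copy()
    gbest_f = pbest_f.max()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, n_bits))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        prob = 1.0 / (1.0 + np.exp(-v))          # sigmoid -> P(bit = 1)
        x = (rng.random((n_particles, n_bits)) < prob).astype(int)
        f = np.array([fitness(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if pbest_f.max() > gbest_f:
            gbest_f = pbest_f.max()
            gbest = pbest[pbest_f.argmax()].copy()
    return gbest, gbest_f

# Toy objective: maximise the number of 1-bits (OneMax).
best, best_f = pbpso(lambda b: b.sum(), n_bits=16)
print(best_f)
```

For controller tuning, the bit string would instead encode discretized controller parameters and the fitness would be a closed-loop performance index.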
75.
Context: How can the quality of software systems be predicted before deployment? In attempting to answer this question, several studies advocate prediction models. The performance of such models drops dramatically, with very low accuracy, when they are used in new software development environments or in new circumstances. Objective: The main objective of this work is to circumvent the model generalizability problem. We propose a new approach that substitutes for the traditional way of building prediction models from historical data with machine learning techniques. Method: In this paper, the existing models are decision trees built to predict module fault-proneness within the NASA Critical Mission Software. A genetic algorithm is developed to combine and adapt the expertise extracted from the existing models in order to derive a "composite" model that performs accurately in a given software development context. The approach is evaluated experimentally in three different software development circumstances. Results: The results show that the derived prediction models work accurately not only for a particular state of a software organization but also for evolving and modified ones. Conclusion: Our approach suits the nature of software data and is superior to model-selection and data-combination approaches. We conclude that learning from existing software models (i.e., software expertise) has two immediate advantages: it circumvents the model generalizability problem and alleviates the lack of data in software engineering.
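The idea of genetically combining existing models for a new context can be sketched roughly as follows; the data, the per-slice tree "contexts", the real-coded weight encoding, and the GA operators are all illustrative assumptions rather than the paper's actual design:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)

# Stand-ins for "existing" fault-proneness models: trees trained on slices
# of data, mimicking models built in different past contexts.
X, y = make_classification(n_samples=600, n_features=10, random_state=3)
ctx_X, ctx_y = X[:100], y[:100]          # small labeled sample from the new context
slices = [(100 + 125 * i, 225 + 125 * i) for i in range(4)]
trees = [DecisionTreeClassifier(max_depth=3, random_state=i).fit(X[a:b], y[a:b])
         for i, (a, b) in enumerate(slices)]
probs = np.stack([t.predict_proba(ctx_X)[:, 1] for t in trees])   # (4, 100)

def fitness(w):
    """Accuracy of the weighted vote of the existing trees on the new context."""
    vote = (w[:, None] * probs).sum(0) / w.sum()
    return ((vote > 0.5).astype(int) == ctx_y).mean()

# Simple real-coded GA over the 4 combination weights.
pop = rng.random((30, 4)) + 0.01
for _ in range(40):
    f = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(f)[-10:]]                      # truncation selection
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(0, 10, 2)]
        # Arithmetic crossover plus Gaussian mutation, weights kept positive.
        kids.append(np.clip((a + b) / 2 + rng.normal(0, 0.05, 4), 0.01, None))
    pop = np.vstack([parents, kids])
best = max(pop, key=fitness)
print(round(fitness(best), 3))
```

The paper's GA adapts the trees themselves rather than just weighting their outputs; this sketch shows only the combine-and-evaluate loop.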
76.
Robots have played an important role in the automation of computer-aided manufacturing. Classical robot control involves an expensive key step of model-based programming. An intuitive way to reduce this expense is to replace programming with machine learning of robot actions from demonstration, where a learner robot learns an action by observing a demonstrator robot performing the same action. To achieve this learning from demonstration (LFD), different machine learning techniques can be used, such as artificial neural networks (ANNs), genetic algorithms, hidden Markov models, and support vector machines. This work focuses exclusively on ANNs. Since ANNs have many standard architectural variations, divided into two basic computational categories, recurrent networks and feed-forward networks, representative networks from each category were selected for study: the feed-forward multilayer perceptron (FF) for the feed-forward category, and the Elman (EL) and nonlinear autoregressive exogenous model (NARX) networks for the recurrent category. The main objective of this work is to identify the neural architecture most suitable for applying LFD to learning different robot actions. The sensor and actuator streams of a demonstrated action are used as training data for ANN learning, and learning capability is measured by comparing the error between the demonstrator streams and the corresponding learner streams. To ensure a fair comparison, three steps were taken. First, dynamic time warping is used to measure the error between demonstrator and learner streams, which gives resilience against translation in time. Second, comparison statistics are drawn between the best configurations of the competing architectures rather than weight-equal ones, so that no architecture's learning capability is handicapped. Third, each configuration's error is calculated as the average of ten trials over all possible learning sequences with random weight initialization, so that the error value is independent of a particular learning sequence or a particular set of initial weights. Six experiments were conducted to obtain a performance pattern for each architecture, and in each experiment a total of nine different robot actions were tested. The error statistics show that the NARX architecture is the most suitable for this learning problem, whereas the Elman architecture is the least suitable. Interestingly, the computationally simpler MLP yields much lower error statistics than the Elman architecture and only slightly higher ones than the computationally richer NARX architecture.
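The dynamic time warping error measure used in the first fairness step can be sketched with the classic dynamic-programming recurrence; the sinusoidal demonstrator and learner streams below are illustrative stand-ins for real sensor/actuator streams:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic-programming DTW between 1-D streams."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of match, insertion, deletion at each cell.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 60)
demo = np.sin(t)              # demonstrator stream
learner = np.sin(t - 0.3)     # learner stream, shifted slightly in time
print(dtw_distance(demo, demo), round(dtw_distance(demo, learner), 3))
```

Because DTW minimizes over warping paths, a purely time-shifted learner stream is penalized far less than it would be by a point-wise error, which is exactly the resilience the comparison needs.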
77.
In this paper, we introduce and study a new class of mixed variational inequalities involving four operators, which we call extended general mixed variational inequalities. Using the resolvent operator technique, we establish the equivalence between the extended general mixed variational inequalities and both fixed-point problems and resolvent equations. We use this alternative equivalent formulation to suggest and analyze some iterative methods for solving general mixed variational inequalities, and we study the convergence criteria for the suggested methods under suitable conditions. Our methods of proof are very simple compared with other techniques. The results proved in this paper may be viewed as refinements and significant generalizations of previously known results.
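The abstract does not reproduce the four-operator formulation. A representative form, assuming operators T, g, h on a Hilbert space H together with a proper convex lower semicontinuous function φ (following the usual resolvent-based treatment of mixed variational inequalities), would be: find u ∈ H such that

```latex
\langle T u,\; g(v) - h(u) \rangle
  + \varphi\bigl(g(v)\bigr) - \varphi\bigl(h(u)\bigr) \;\ge\; 0
  \qquad \forall\, v \in H,
```

and the resolvent-operator equivalence underlying the iterative methods then takes the assumed fixed-point form, with $J_{\varphi} = (I + \rho\,\partial\varphi)^{-1}$ and $\rho > 0$:

```latex
h(u) \;=\; J_{\varphi}\bigl[\, g(u) - \rho\, T u \,\bigr].
```

This is a sketch of the standard pattern, not necessarily the paper's exact formulation.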
78.
79.
Two proposed techniques let microprocessors operate at low voltages despite high memory-cell failure rates. They identify and disable defective portions of the cache at two granularities: individual words or pairs of bits. Both techniques use the entire cache during high-voltage operation and sacrifice cache capacity during low-voltage operation to reduce the minimum voltage below 500 mV.
80.
To conserve space and power and to harness high performance in embedded systems, high utilization of the hardware is required. This can be facilitated through dynamic adaptation of the silicon resources in reconfigurable systems, realizing various customized kernels as execution proceeds. Fortunately, the reconfiguration overheads incurred can be estimated; therefore, if the scheduling of time-consuming kernels also takes these overheads into account, an overall performance gain can be obtained. We present our policy, experiments, and performance results for customizing and reconfiguring field-programmable gate arrays (FPGAs) for embedded kernels. Experiments involving EEMBC (EDN Embedded Microprocessor Benchmarking Consortium) and MiBench embedded benchmark kernels show high performance with our main policy when reconfiguration overheads are considered. Our policy reduces the required reconfigurations by more than 50% compared to brute-force solutions and performs within 25% of the ideal execution time while conserving 60% of the FPGA resources. Alternative strategies for reducing the reconfiguration overhead are also presented and evaluated.
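The benefit of overhead-aware scheduling can be illustrated with a minimal sketch. A single-slot fabric, a hypothetical kernel trace, and a fixed reconfiguration cost are all assumptions (the paper's policy is more elaborate); the brute-force baseline reconfigures on every call, while the sketched policy reconfigures only when the requested kernel is not already resident:

```python
# Kernel trace: (kernel_id, compute_time_ms). Reconfiguring the fabric costs
# RECONF_MS regardless of kernel; both values are illustrative.
RECONF_MS = 10.0
trace = [("fft", 2.0), ("fft", 2.0), ("aes", 3.0), ("aes", 3.0), ("fft", 2.0)]

def brute_force(trace):
    """Reconfigure before every kernel invocation."""
    return sum(RECONF_MS + t for _, t in trace), len(trace)

def overhead_aware(trace):
    """Reconfigure only when the requested kernel is not already loaded."""
    resident, total, reconfigs = None, 0.0, 0
    for kernel, t in trace:
        if kernel != resident:
            total += RECONF_MS
            reconfigs += 1
            resident = kernel
        total += t
    return total, reconfigs

bf_time, bf_cfgs = brute_force(trace)
oa_time, oa_cfgs = overhead_aware(trace)
print(bf_time, bf_cfgs, oa_time, oa_cfgs)  # 62.0 5 42.0 3
```

Even this toy trace cuts reconfigurations from 5 to 3; a real policy would additionally weigh kernel runtimes against estimated overheads when multiple fabric regions are available.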
Copyright©北京勤云科技发展有限公司  京ICP备09084417号