91.
In smart environments, pervasive computing helps improve the daily activities of dependent people by providing personalized services. Nevertheless, these environments do not guarantee a satisfactory level of user privacy protection or of trust between communicating entities. In this study, we propose a trust evaluation model based on the user's past and present behavior. This model is associated with a lightweight authenticated key agreement protocol (Elliptic Curve-based Simple Authentication Key Agreement). The aim is to enable communicating entities to establish a level of trust and then achieve mutual authentication using a scheme suitable for low-resource devices in smart environments. An innovation of our trust model is that it uses an accurate approach to calculate trust in different situations and includes a human-based feature for trust feedback, namely user rating. Finally, we implemented and tested our scheme on Android mobile phones in a smart environment dedicated to people with disabilities.
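The behaviour-based trust update described above might be sketched as a weighted combination of past trust, observed present behaviour, and user-rating feedback. The function name, the linear-combination form, and the weight values below are illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch only: the weights and the linear-combination form are
# assumptions, not the trust formulas proposed in the paper.
def update_trust(past_trust, behavior_score, user_rating,
                 w_past=0.5, w_behavior=0.3, w_rating=0.2):
    """Combine past trust, present behaviour, and user rating into [0, 1].

    All inputs are expected in [0, 1]; user_rating is the human-based
    feedback feature mentioned in the abstract.
    """
    assert abs(w_past + w_behavior + w_rating - 1.0) < 1e-9
    trust = (w_past * past_trust
             + w_behavior * behavior_score
             + w_rating * user_rating)
    return min(1.0, max(0.0, trust))  # clamp to the trust interval
```

A node would feed the resulting score into the authentication decision, granting the key agreement handshake only above some trust threshold.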
92.
Recently, we introduced the sorted Gaussian mixture models (SGMM) algorithm, which trades off performance for operational speed and thus permits the speed-up of GMM-based classification schemes. The performance of the SGMM algorithm depends on the proper choice of the sorting function and the proper adjustment of its parameters. In the present work, we employ particle swarm optimization (PSO) with an appropriate fitness function to find the most advantageous parameters of the sorting function. We evaluate the practical significance of our approach on the text-independent speaker verification task using the NIST 2002 speaker recognition evaluation (SRE) database and following the NIST SRE experimental protocol. The experimental results demonstrate superior performance of the SGMM algorithm with PSO when compared to the original SGMM. For completeness, we also compare these results with those of a baseline Gaussian mixture model-universal background model (GMM-UBM) system. The results suggest that the performance loss due to speed-up is partially mitigated by using PSO-derived weights in a sorted GMM-based scheme.
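As a rough illustration of the optimization step, the minimal PSO loop below searches for parameters minimizing a placeholder fitness function. The swarm size, inertia and acceleration constants, and the quadratic test fitness are demonstration assumptions; the paper's fitness would score sorting-function parameters by speaker-verification error.

```python
import random

# Minimal PSO sketch (assumed constants; not the paper's exact configuration).
def pso(fitness, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, `fitness` would wrap a verification run, so each particle evaluation is expensive and the small swarm size matters.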
93.
The aim of this paper is to deal with an output controllability problem: driving the state of a distributed parabolic system to a state lying between two prescribed functions on a boundary subregion of the system's evolution domain, with minimum-energy control. Two necessary conditions are derived. The first is formulated in terms of the subdifferential associated with a minimized functional. The second is formulated as a system of equations for the arguments of the Lagrange system. Numerical illustrations show the efficiency of the second approach and lead to some conjectures. Recommended by Editorial Board member Fumitoshi Matsuno under the direction of Editor Jae Weon Choi. Zerrik El Hassan is a Professor at the University Moulay Ismail of Meknes, Morocco. He was an Assistant Professor in the Faculty of Sciences of Meknes and a researcher at the University of Perpignan (France). He received his doctorat d'état in regional analysis of systems (1993) from the University Mohammed V of Rabat, Morocco. Professor Zerrik has written many papers and books in the area of systems analysis and control. He is currently the head of the research team MACS (Modeling, Analysis and Control of Systems) at the University Moulay Ismail of Meknes, Morocco. Ghafrani Fatima is a researcher in the MACS team at the University Moulay Ismail of Meknes, Morocco. She has written many papers in the area of systems analysis and control.
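The minimum-energy output-constraint problem described above can be sketched as follows, in notation that is our assumption rather than the paper's:

```latex
% Sketch with assumed notation: \Gamma is the boundary subregion,
% y_u(T) the system output at final time T under control u, and
% a, b the two prescribed bounding functions.
\min_{u \in L^2(0,T;U)} \; \|u\|^2_{L^2(0,T;U)}
\quad \text{subject to} \quad
a(\sigma) \le y_u(T)(\sigma) \le b(\sigma), \quad \sigma \in \Gamma \subset \partial\Omega .
```

The two necessary conditions then characterize minimizers of this constrained functional, via its subdifferential and via a Lagrangian formulation respectively.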
94.
An original inversion method, specifically adapted to estimating the Poisson coefficient of balls from their resonance spectra, is described. From the study of their elastic vibrations, it is possible to characterize the balls accurately. The proposed methodology can both excite spheroidal modes in the balls and detect such vibrations over a large frequency range. Experimentally, using an ultrasonic probe for emission (piezoelectric transducer) and a heterodyne optical probe for reception (interferometer), spectroscopic measurements of spheroidal vibrations were taken over a large frequency range (100 kHz to 45 MHz) in a continuous regime. This method, which uses ratios between resonance frequencies, allows the Poisson coefficient to be determined independently of Young's modulus and of the ball's radius and density. This has the advantage of providing highly accurate estimates of the Poisson coefficient (±4.3 × 10⁻⁴) over a wide frequency range.
95.

Background

The use of crowdsourcing in a pedagogically supported form to partner with learners in developing novel content is emerging as a viable approach for engaging students in higher-order learning at scale. However, how students behave in this form of crowdsourcing, referred to as learnersourcing, is still insufficiently explored.

Objectives

To contribute to filling this gap, this study explores how students engage with learnersourcing tasks across a range of course and assessment designs.

Methods

We conducted an exploratory study on trace data of 1279 students across three courses, originating from the use of a learnersourcing environment under different assessment designs. We employed a new methodology from the learning analytics (LA) field that aims to represent students' behaviour through two theoretically derived latent constructs: learning tactics and the learning strategies built upon them.

Results

The study's results demonstrate that students use different tactics and strategies, highlight the association of learnersourcing contexts with the identified learning tactics and strategies, indicate a significant association between the strategies and performance, and contribute to the generalisability of the employed method by applying it to a new context.

Implications

This study provides an example of how learning analytics methods can be employed towards the development of effective learnersourcing systems and, more broadly, technological educational solutions that support learner-centred and data-driven learning at scale. Findings should inform best practices for integrating learnersourcing activities into course design and shed light on the relevance of tactics and strategies to support teachers in making informed pedagogical decisions.
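Tactic detection in such LA pipelines is often implemented by clustering session-level action features. The toy k-means sketch below (pure Python, with assumed two-dimensional "reading vs. creating" features) illustrates that idea only; it is not the study's actual method, which uses a dedicated tactic/strategy detection approach.

```python
# Toy illustration: grouping learner sessions into "tactics" by clustering
# simple action-proportion features. Features and k are assumptions.
def kmeans(points, k, iters=50):
    n_dim = len(points[0])
    # Deterministic init: pick points spread across the input order.
    centroids = [list(points[i * (len(points) - 1) // max(1, k - 1)])
                 for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each session to its nearest centroid (squared distance).
        labels = [min(range(k),
                      key=lambda c: sum((p[d] - centroids[c][d]) ** 2
                                        for d in range(n_dim)))
                  for p in points]
        # Recompute centroids as cluster means.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(m[d] for m in members) / len(members)
                                for d in range(n_dim)]
    return labels
```

Each resulting cluster would then be interpreted as a tactic, and per-student sequences of tactics aggregated into strategies.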
96.

The Peer-to-Peer Cloud (P2P-Cloud) is a suitable alternative for distributing cloud-based or peer-to-peer (P2P)-based content at large scale, and it is used in many applications such as IPTV and video-on-demand. In a P2P-Cloud network, overload is a common problem during overcrowding: if a node receives many requests simultaneously, it may not be able to respond quickly to user requests, and this access latency is a major problem for users. Replication in P2P-Cloud environments reduces access time and network bandwidth usage by placing multiple copies of the data in diverse locations; it improves access to information and increases the reliability of the system. The main problem in data replication is identifying the best placement of replica nodes with respect to user requests and data access time, which is an NP-hard optimization problem. This paper proposes a new replica placement method that improves average access time and replica cost using fuzzy logic and the Ant Colony Optimization (ACO) algorithm. Ants find the shortest path to discover the optimal node at which to place the duplicate file with the least access-time latency. The fuzzy module evaluates the historical information of each node to analyze the pheromone value at each iteration, and a fuzzy membership function determines each node's degree based on four characteristics. The simulation results show that access time and replica cost are improved compared with other replica placement algorithms.

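A minimal ACO-style selection loop for replica placement might look like the sketch below. The latency model, the pheromone update rule, and all parameter values are our assumptions for illustration; the paper's fuzzy evaluation of node history is reduced here to a plain inverse-latency reinforcement.

```python
import random

# Sketch (assumed parameters): ants probabilistically pick a candidate node,
# pheromone is reinforced inversely to that node's access latency, and the
# node with the most pheromone is chosen as the replica location.
def aco_replica_placement(latencies, n_ants=10, iters=50, evaporation=0.5):
    n = len(latencies)
    pheromone = [1.0] * n
    for _ in range(iters):
        # Evaporate, then let each ant deposit on the node it visits.
        pheromone = [p * (1 - evaporation) for p in pheromone]
        for _ in range(n_ants):
            node = random.choices(range(n), weights=pheromone)[0]
            pheromone[node] += 1.0 / latencies[node]
    return max(range(n), key=lambda i: pheromone[i])
```

In the paper's scheme a fuzzy module would compute the deposit from several node characteristics rather than from latency alone.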
97.

In this article, we present a new set of hybrid polynomials and their corresponding moments, with a view to using them for the localization, compression and reconstruction of 2D and 3D images. These polynomials are formed from the Hahn and Krawtchouk polynomials, and their computation is successfully stabilized using modified recurrence relations with respect to the order n and the variable x, together with the symmetry property. The hybrid polynomials are generated in two forms: the first comprises the separable discrete orthogonal Krawtchouk–Hahn (DKHP) and Hahn–Krawtchouk (DHKP) polynomials, generated as products of the discrete orthogonal Hahn and Krawtchouk polynomials; the second is the squared equivalent of the first, consisting of the squared discrete Krawtchouk–Hahn (SKHP) and squared Hahn–Krawtchouk (SHKP) polynomials. The experimental results clearly show the efficiency of hybrid moments based on hybrid polynomials in terms of localization and computation time for 2D and 3D images compared with other types of moments; encouraging results are also obtained for reconstruction quality and compression, despite the superiority of classical polynomials.

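As background for the recurrence-based computation mentioned above, the sketch below evaluates the classical Krawtchouk polynomials K_n(x; p, N) with the standard three-term recurrence in n and checks their orthogonality under the binomial weight. It is a plain illustration of the classical recurrence only, not the paper's stabilized hybrid scheme.

```python
from math import comb

def krawtchouk(n, x, p, N):
    """Evaluate K_n(x; p, N) via the standard three-term recurrence in n:
    p(N-m) K_{m+1} = [p(N-m) + m(1-p) - x] K_m - m(1-p) K_{m-1}."""
    k_prev, k_curr = 1.0, 1.0 - x / (p * N)  # K_0 and K_1
    if n == 0:
        return k_prev
    for m in range(1, n):
        k_next = ((p * (N - m) + m * (1 - p) - x) * k_curr
                  - m * (1 - p) * k_prev) / (p * (N - m))
        k_prev, k_curr = k_curr, k_next
    return k_curr

def binomial_inner(m, n, p, N):
    """Inner product under the binomial weight; zero when m != n."""
    return sum(comb(N, x) * p**x * (1 - p)**(N - x)
               * krawtchouk(m, x, p, N) * krawtchouk(n, x, p, N)
               for x in range(N + 1))
```

The hybrid Hahn–Krawtchouk bases in the paper are then built as products of two such discrete orthogonal families.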
98.

The edge computing model offers an ultimate platform to support scientific and real-time workflow-based applications at the edge of the network. However, scientific workflow scheduling and execution still face challenges such as response-time management and latency. In particular, the acquisition delay of servers deployed at the network edge must be dealt with in order to reduce the overall completion time of a workflow. Previous studies show that existing scheduling methods consider the static performance of the server and ignore the impact of resource acquisition delay when scheduling workflow tasks. We propose a meta-heuristic algorithm that schedules scientific workflows and minimizes the overall completion time by properly managing acquisition and transmission delays. We carried out extensive experiments and evaluations based on commercial clouds and various scientific workflow templates. The proposed method performs approximately 7.7% better than the baseline algorithms, particularly in the success rate under the overall deadline constraint.

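The delay-aware assignment idea can be illustrated by the greedy sketch below, which assigns each task to the server with the earliest finish time, charging a one-off acquisition delay the first time a server is used. The server attributes and the earliest-finish-time rule are assumptions for illustration, not the paper's meta-heuristic.

```python
# Illustrative greedy baseline (not the paper's meta-heuristic).
def schedule(tasks, servers):
    """tasks: list of workloads; servers: list of dicts with assumed keys
    'speed', 'acq_delay' (one-off acquisition delay), 'tx_delay'."""
    ready = [0.0] * len(servers)      # time each server becomes free
    acquired = [False] * len(servers)
    plan = []
    for work in tasks:
        def finish(i):
            start = ready[i] + (0.0 if acquired[i] else servers[i]['acq_delay'])
            return start + servers[i]['tx_delay'] + work / servers[i]['speed']
        best = min(range(len(servers)), key=finish)
        ready[best] = finish(best)
        acquired[best] = True
        plan.append(best)
    return plan, max(ready)
```

A meta-heuristic would search over such assignments globally instead of committing task by task, which is where the reported completion-time gains come from.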
99.
The Journal of Supercomputing - This paper designs and develops a computational intelligence-based framework using convolutional neural network (CNN) and genetic algorithm (GA) to detect COVID-19...
100.
Data available in software engineering for many applications contains variability, and it is not possible to say in advance which variables help in prediction. Most work on software defect prediction focuses on selecting the best prediction techniques; for this purpose, deep learning and ensemble models have shown promising results. In contrast, very little research deals with cleaning the training data and selecting the best parameter values from it. Sometimes the data available for training has high variability, and this variability may decrease model accuracy. To deal with this problem, we used the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) to select the best variables for training the model. A simple artificial neural network (ANN) with one input layer, one output layer and two hidden layers was used for training instead of a very deep and complex model. First, the variables were narrowed down using correlation values; then subsets of all possible variable combinations were formed. Finally, an ANN model was trained for each subset, and the best model was selected on the basis of the smallest AIC and BIC values. We found that the combination of only two variables, ns and entropy, is best for software defect prediction, as it gives the minimum AIC and BIC values, while nm and npt is the worst combination, giving the maximum AIC and BIC values.
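The subset-selection loop can be sketched as below, using an ordinary least-squares fit in place of the paper's ANN and the common Gaussian-likelihood forms AIC = n·ln(RSS/n) + 2k and BIC = n·ln(RSS/n) + k·ln(n). The synthetic data and feature indices in the usage are assumptions for illustration.

```python
import math
from itertools import combinations

import numpy as np

# Sketch: exhaustive subset selection by AIC (BIC reported alongside), with
# ordinary least squares standing in for the paper's ANN.
def best_subset(X, y):
    n = len(y)
    best = None  # (subset, aic, bic)
    for k in range(1, X.shape[1] + 1):
        for subset in combinations(range(X.shape[1]), k):
            A = np.column_stack([X[:, list(subset)], np.ones(n)])  # + intercept
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ coef) ** 2))
            aic = n * math.log(rss / n) + 2 * (k + 1)
            bic = n * math.log(rss / n) + (k + 1) * math.log(n)
            if best is None or aic < best[1]:
                best = (subset, aic, bic)
    return best
```

With `p` candidate variables this evaluates all `2^p - 1` subsets, which is why the paper first narrows the variables down by correlation before forming subsets.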